1
Li B, Wei H, Yan J, Wang X. A novel portable augmented reality surgical navigation system for maxillofacial surgery: technique and accuracy study. Int J Oral Maxillofac Surg 2024; 53:961-967. [PMID: 38839534] [DOI: 10.1016/j.ijom.2024.02.007]
Abstract
Surgical navigation, despite its potential benefits, faces challenges in widespread adoption in clinical practice. Possible reasons include the high cost, increased surgery time, attention shifts during surgery, and the mental task of mapping from the monitor to the patient. To address these challenges, a portable, all-in-one surgical navigation system using augmented reality (AR) was developed, and its feasibility and accuracy were investigated. The system achieves AR visualization by capturing a live video stream of the actual surgical field using a visible light camera and merging it with preoperative virtual images. A skull model with reference spheres was used to evaluate the accuracy. After registration, virtual models were overlaid on the real skull model. The discrepancies between the centres of the real spheres and the virtual model were measured to assess the AR visualization accuracy. This AR surgical navigation system demonstrated precise AR visualization, with an overall overlap error of 0.53 ± 0.21 mm. By seamlessly integrating the preoperative virtual plan with the intraoperative field of view in a single view, this novel AR navigation system could provide a feasible solution for the use of AR visualization to guide the surgeon in performing the operation as planned.
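The overlap error reported above is, in effect, the mean Euclidean distance between corresponding real and virtual sphere centres. A minimal sketch of that measurement, with entirely hypothetical coordinates (the study's raw measurements are not published here):

```python
import math

def overlay_error(real_centres, virtual_centres):
    """Mean Euclidean distance (mm) between paired real and AR-overlaid sphere centres."""
    distances = [math.dist(r, v) for r, v in zip(real_centres, virtual_centres)]
    return sum(distances) / len(distances)

# Hypothetical sphere-centre measurements in mm (not the study's data)
real = [(10.0, 0.0, 5.0), (20.0, 5.0, 5.0), (15.0, 10.0, 8.0)]
virtual = [(10.3, 0.4, 5.0), (20.0, 5.5, 4.8), (15.2, 10.0, 8.3)]
print(round(overlay_error(real, virtual), 2))
```

Averaging over many reference spheres, as the study does, is what yields a summary figure such as 0.53 ± 0.21 mm.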
Affiliation(s)
- B Li
- Departments of Oral and Craniomaxillofacial Surgery, Shanghai 9th People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China; Shanghai Key Laboratory of Stomatology, Shanghai, China; National Clinical Research Center of Stomatology, Shanghai, China
- H Wei
- Departments of Oral and Craniomaxillofacial Surgery, Shanghai 9th People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China; Shanghai Key Laboratory of Stomatology, Shanghai, China; National Clinical Research Center of Stomatology, Shanghai, China
- J Yan
- Departments of Oral and Craniomaxillofacial Surgery, Shanghai 9th People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China; Shanghai Key Laboratory of Stomatology, Shanghai, China; National Clinical Research Center of Stomatology, Shanghai, China
- X Wang
- Departments of Oral and Craniomaxillofacial Surgery, Shanghai 9th People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China; Shanghai Key Laboratory of Stomatology, Shanghai, China; National Clinical Research Center of Stomatology, Shanghai, China
2
Dellaretti M, Figueiredo HP, Soares AG, Froes LE, Gomes FC, Faraj F. Applications of Augmented Reality in Neuro-Oncology: A Case Series. Asian J Neurosurg 2024; 19:472-477. [PMID: 39205891] [PMCID: PMC11349399] [DOI: 10.1055/s-0044-1788064]
Abstract
Augmented reality (AR) is a technological tool that superimposes two-dimensional virtual images onto three-dimensional real-world scenarios through the integration of neuronavigation and a surgical microscope. The aim of this study was to demonstrate our initial experience with AR and to assess its application in oncological neurosurgery. This is a case series of 31 patients who underwent surgery at Santa Casa BH for the treatment of intracranial tumors between March 4, 2022, and July 14, 2023. The application of AR was evaluated in each case through three parameters: whether the virtual images aided in the incision, whether they aided in the craniotomy, and whether they aided in intraoperative microsurgical decisions. Of the 31 patients, 5 developed new neurological deficits postoperatively. One patient died, for a mortality rate of 3.0%. Complete tumor resection was achieved in 22 patients and partial resection in 6. AR was used to guide the incision and craniotomy in all cases, leading to improved and more precise surgical approaches. As intraoperative microsurgical guidance, it proved useful in 29 cases. The application of AR seems to enhance surgical safety for both the patient and the surgeon. It allows more refined immediate operative planning, from head positioning to skin incision and craniotomy. Additionally, it helps decision-making in the intraoperative microsurgical phase, with a potentially positive impact on surgical outcomes.
Affiliation(s)
- Marcos Dellaretti
- Department of Neurosurgery, Santa Casa BH, Belo Horizonte, Minas Gerais, Brazil
- Research Department, Santa Casa BH College, Belo Horizonte, Minas Gerais, Brazil
- Hian P.G. Figueiredo
- Department of Neurosurgery, Santa Casa BH, Belo Horizonte, Minas Gerais, Brazil
- Research Department, Santa Casa BH College, Belo Horizonte, Minas Gerais, Brazil
- André G. Soares
- Department of Neurosurgery, Santa Casa BH, Belo Horizonte, Minas Gerais, Brazil
- Research Department, Santa Casa BH College, Belo Horizonte, Minas Gerais, Brazil
- Luiz E.V. Froes
- Department of Neurosurgery, Santa Casa BH, Belo Horizonte, Minas Gerais, Brazil
- Research Department, Santa Casa BH College, Belo Horizonte, Minas Gerais, Brazil
- Franklin Faraj
- Department of Neurosurgery, Santa Casa BH, Belo Horizonte, Minas Gerais, Brazil
- Research Department, Santa Casa BH College, Belo Horizonte, Minas Gerais, Brazil
3
Zeineldin RA, Karar ME, Burgert O, Mathis-Ullrich F. NeuroIGN: Explainable Multimodal Image-Guided System for Precise Brain Tumor Surgery. J Med Syst 2024; 48:25. [PMID: 38393660] [DOI: 10.1007/s10916-024-02037-3]
Abstract
Precise neurosurgical guidance is critical for successful brain surgery and plays a vital role in all phases of image-guided neurosurgery (IGN). Neuronavigation software enables real-time tracking of surgical tools, displaying them with high precision relative to a virtual patient model. This work therefore focuses on the development of a novel multimodal IGN system, leveraging deep learning and explainable AI to enhance brain tumor surgery outcomes. The study establishes the clinical and technical requirements of the system for brain tumor surgeries. NeuroIGN adopts a modular architecture, including brain tumor segmentation, patient registration, and explainable output prediction, and integrates open-source packages into an interactive neuronavigational display. The NeuroIGN system components underwent validation and evaluation in both laboratory and simulated operating room (OR) settings. Experimental results demonstrated its accuracy in tumor segmentation and the success of ExplainAI in increasing the trust of medical professionals in deep learning. The proposed system was assembled and set up within 11 min in a pre-clinical OR setting, with a tracking accuracy of 0.5 (± 0.1) mm. NeuroIGN was also rated as highly useful, with a high frame rate (19 FPS) and real-time ultrasound imaging capabilities. In conclusion, this paper describes not only the development of an open-source multimodal IGN system but also demonstrates the innovative application of deep learning and explainable AI algorithms in enhancing neuronavigation for brain tumor surgery. By seamlessly integrating pre- and intraoperative patient image data with cutting-edge interventional devices, our experiments underscore the potential of deep learning models to improve the surgical treatment of brain tumors and long-term postoperative outcomes.
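The modular architecture described above (segmentation, registration, explainable output, all feeding one display) can be sketched as a simple staged pipeline. This is not the authors' code: the stage names mirror the modules named in the abstract, and the stage bodies are placeholders standing in for the real deep-learning components.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NavigationCase:
    """Data flowing through the pipeline; field contents are placeholders."""
    preop_mri: str
    tumor_mask: Optional[str] = None
    registration: Optional[str] = None
    explanation: Optional[str] = None
    log: list = field(default_factory=list)

def segment_tumor(case: NavigationCase) -> NavigationCase:
    case.tumor_mask = "tumor_mask"            # stand-in for a deep-learning segmenter
    case.log.append("segmentation")
    return case

def register_patient(case: NavigationCase) -> NavigationCase:
    case.registration = "T_image_to_patient"  # stand-in for image-to-patient registration
    case.log.append("registration")
    return case

def explain_output(case: NavigationCase) -> NavigationCase:
    case.explanation = "saliency_map"         # stand-in for the explainable-AI module
    case.log.append("explanation")
    return case

PIPELINE = (segment_tumor, register_patient, explain_output)

case = NavigationCase(preop_mri="preop_mri.nii")
for stage in PIPELINE:
    case = stage(case)
print(case.log)  # ['segmentation', 'registration', 'explanation']
```

The appeal of this shape is that each module can be validated in isolation, as the paper does, before the chain is assembled in the OR.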
Affiliation(s)
- Ramy A Zeineldin
- Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052 Erlangen, Germany
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762 Reutlingen, Germany
- Faculty of Electronic Engineering (FEE), Menoufia University, Minuf 32952, Egypt
- Mohamed E Karar
- Faculty of Electronic Engineering (FEE), Menoufia University, Minuf 32952, Egypt
- Oliver Burgert
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762 Reutlingen, Germany
- Franziska Mathis-Ullrich
- Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052 Erlangen, Germany
4
Qi Z, Bopp MHA, Nimsky C, Chen X, Xu X, Wang Q, Gan Z, Zhang S, Wang J, Jin H, Zhang J. A Novel Registration Method for a Mixed Reality Navigation System Based on a Laser Crosshair Simulator: A Technical Note. Bioengineering (Basel) 2023; 10:1290. [PMID: 38002414] [PMCID: PMC10669875] [DOI: 10.3390/bioengineering10111290]
Abstract
Mixed reality navigation (MRN) is pivotal in augmented reality-assisted intelligent neurosurgical interventions. However, existing MRN registration methods face challenges in concurrently achieving low user dependency, high accuracy, and clinical applicability. This study proposes and evaluates a novel registration method based on a laser crosshair simulator designed to replicate the scanner frame's position on the patient. The system autonomously calculates the transformation mapping coordinates from the tracking space to the reference image space. A mathematical model and workflow for registration were designed, and a Universal Windows Platform (UWP) application was developed on HoloLens 2. Finally, a head phantom was used to measure the system's target registration error (TRE). The proposed method was successfully implemented, obviating the need for user interaction with virtual objects during registration. Regarding accuracy, the average deviation was 3.7 ± 1.7 mm. The method shows encouraging results in efficiency and intuitiveness and marks a valuable advancement in low-cost, easy-to-use MRN systems. Its potential for enhanced accuracy and adaptability in interventional procedures makes this approach promising for improving surgical outcomes.
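Registration here boils down to estimating a rigid transform from tracking space to image space and checking it with a target registration error (TRE): map tracked target points through the transform and measure their distance to the known image-space positions. A pure-Python sketch under stated assumptions — the rotation matrix, translation, and target points below are invented for illustration, not taken from the paper:

```python
import math

def apply_rigid(R, t, p):
    """Map a 3-D point p through rotation R (3x3 nested lists) plus translation t."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))

def target_registration_error(R, t, tracked_targets, image_targets):
    """Mean distance (mm) between mapped tracked targets and ground-truth image positions."""
    dists = [math.dist(apply_rigid(R, t, p), q)
             for p, q in zip(tracked_targets, image_targets)]
    return sum(dists) / len(dists)

# Hypothetical check: identity rotation plus a (1, 2, 3) mm translation
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = (1.0, 2.0, 3.0)
tracked = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
truth = [(1.0, 2.0, 3.0), (11.0, 2.0, 3.0)]
print(target_registration_error(I, t, tracked, truth))  # 0.0 for a perfect transform
```

A real system averages this error over many phantom targets, which is how a figure like 3.7 ± 1.7 mm is obtained.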
Affiliation(s)
- Ziyu Qi
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Miriam H. A. Bopp
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Christopher Nimsky
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Xiaolei Chen
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Xinghua Xu
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Qun Wang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Zhichao Gan
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Shiyu Zhang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Jingyue Wang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- Haitao Jin
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
- Medical School of Chinese PLA, Beijing 100853, China
- NCO School, Army Medical University, Shijiazhuang 050081, China
- Jiashu Zhang
- Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China
5
Bierbrier J, Eskandari M, Giovanni DAD, Collins DL. Toward Estimating MRI-Ultrasound Registration Error in Image-Guided Neurosurgery. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:999-1015. [PMID: 37022005] [DOI: 10.1109/tuffc.2023.3239320]
Abstract
Image-guided neurosurgery allows surgeons to view their tools in relation to preoperatively acquired patient images and models. To continue using neuronavigation systems throughout operations, image registration between preoperative images [typically magnetic resonance imaging (MRI)] and intraoperative images (e.g., ultrasound) is common to account for brain shift (deformations of the brain during surgery). We implemented a method to estimate MRI-ultrasound registration errors, with the goal of enabling surgeons to quantitatively assess the performance of linear or nonlinear registrations. To the best of our knowledge, this is the first dense error estimating algorithm applied to multimodal image registrations. The algorithm is based on a previously proposed sliding-window convolutional neural network that operates on a voxelwise basis. To create training data where the true registration error is known, simulated ultrasound images were created from preoperative MRI images and artificially deformed. The model was evaluated on artificially deformed simulated ultrasound data and real ultrasound data with manually annotated landmark points. The model achieved a mean absolute error (MAE) of 0.977 ± 0.988 mm and a correlation of 0.8 ± 0.062 on the simulated ultrasound data, and an MAE of 2.24 ± 1.89 mm and a correlation of 0.246 on the real ultrasound data. We discuss concrete areas to improve the results on real ultrasound data. Our progress lays the foundation for future developments and ultimately implementation of clinical neuronavigation systems.
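The two evaluation metrics quoted above, mean absolute error and Pearson correlation between predicted and true registration errors, are straightforward to compute once a model emits a voxelwise error estimate. A minimal sketch (the error values are invented for illustration, not the paper's data):

```python
import math

def mae(pred, true):
    """Mean absolute error between predicted and true registration errors (mm)."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def pearson(pred, true):
    """Pearson correlation between predicted and true errors."""
    n = len(pred)
    mp, mt = sum(pred) / n, sum(true) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in true))
    return cov / (sp * st)

# Hypothetical voxelwise error estimates vs. landmark-derived ground truth (mm)
pred = [1.2, 0.8, 2.5, 3.1, 0.4]
true = [1.0, 1.1, 2.2, 3.4, 0.6]
print(round(mae(pred, true), 2))
```

On simulated data the ground-truth error is known exactly at every voxel; on real ultrasound it is only known at annotated landmarks, which is one reason the real-data MAE and correlation reported above are worse.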
6
Kazemzadeh K, Akhlaghdoust M, Zali A. Advances in artificial intelligence, robotics, augmented and virtual reality in neurosurgery. Front Surg 2023; 10:1241923. [PMID: 37693641] [PMCID: PMC10483402] [DOI: 10.3389/fsurg.2023.1241923]
Abstract
Neurosurgical practitioners undergo extensive and prolonged training to acquire diverse technical proficiencies, and neurosurgical procedures demand substantial pre-, intra-, and postoperative clinical data acquisition, decision-making, and attention, followed by convalescence. The past decade witnessed an appreciable escalation in the significance of artificial intelligence (AI) in neurosurgery. AI holds significant potential in neurosurgery, as it supplements the abilities of neurosurgeons to offer optimal interventional and non-interventional care to patients: it improves prognostic and diagnostic outcomes in clinical therapy and assists neurosurgeons in decision-making during surgical interventions to enhance patient outcomes. Other technologies, including augmented reality, robotics, and virtual reality, can assist and promote neurosurgical methods as well. Moreover, they play a significant role in generating, processing, and storing experimental and clinical data. The use of these technologies in neurosurgery can also reduce the costs of surgical care and extend high-quality health care to a wider populace. This narrative review aims to integrate the results of articles that elucidate the role of the aforementioned technologies in neurosurgery.
Affiliation(s)
- Kimia Kazemzadeh
- Students’ Scientific Research Center, Tehran University of Medical Sciences, Tehran, Iran
- Network of Neurosurgery and Artificial Intelligence (NONAI), Universal Scientific Education and Research Network (USERN), Tehran, Iran
- Meisam Akhlaghdoust
- Network of Neurosurgery and Artificial Intelligence (NONAI), Universal Scientific Education and Research Network (USERN), Tehran, Iran
- Functional Neurosurgery Research Center, Shohada Tajrish Comprehensive Neurosurgical Center of Excellence, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- USERN Office, Functional Neurosurgery Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Alireza Zali
- Network of Neurosurgery and Artificial Intelligence (NONAI), Universal Scientific Education and Research Network (USERN), Tehran, Iran
- Functional Neurosurgery Research Center, Shohada Tajrish Comprehensive Neurosurgical Center of Excellence, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- USERN Office, Functional Neurosurgery Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
7
Peter-Derex L, von Ellenrieder N, van Rosmalen F, Hall J, Dubeau F, Gotman J, Frauscher B. Regional variability in intracerebral properties of NREM to REM sleep transitions in humans. Proc Natl Acad Sci U S A 2023; 120:e2300387120. [PMID: 37339200] [PMCID: PMC10293806] [DOI: 10.1073/pnas.2300387120]
Abstract
Transitions between wake and sleep states show a progressive pattern underpinned by local sleep regulation. In contrast, little evidence is available on non-rapid eye movement (NREM) to rapid eye movement (REM) sleep boundaries, considered as mainly reflecting subcortical regulation. Using polysomnography (PSG) combined with stereoelectroencephalography (SEEG) in humans undergoing epilepsy presurgical evaluation, we explored the dynamics of NREM-to-REM transitions. PSG was used to visually score transitions and identify REM sleep features. SEEG-based local transitions were determined automatically with a machine learning algorithm using features validated for automatic intracranial sleep scoring (10.5281/zenodo.7410501). We analyzed 2988 channel-transitions from 29 patients. The average transition time from all intracerebral channels to the first visually marked REM sleep epoch was 8 s ± 1 min 58 s, with great heterogeneity between brain areas. Transitions were observed first in the lateral occipital cortex, preceding the scalp transition by 1 min 57 s ± 2 min 14 s (d = -0.83), and close to the first sawtooth wave marker. Regions with late transitions were the inferior frontal and orbital gyri (1 min 1 s ± 2 min 1 s, d = 0.43, and 1 min 1 s ± 2 min 5 s, d = 0.43, after the scalp transition). Intracranial transitions were earlier than scalp transitions as the night advanced (last sleep cycle, d = -0.81). We show a reproducible gradual pattern of REM sleep initiation, suggesting the involvement of cortical mechanisms of regulation. This provides clues for understanding oneiric experiences occurring at the NREM/REM boundary.
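The effect sizes (d) quoted above standardize the mean lag between local intracranial transitions and the scalp-marked transition by its variability. A sketch of a one-sample Cohen's d on hypothetical per-channel lags (negative values mean the local transition preceded the scalp transition; the numbers are invented, not the study's):

```python
import math

def cohens_d(sample, reference_mean=0.0):
    """One-sample Cohen's d: standardized mean lag relative to the scalp transition."""
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))  # sample SD
    return (mean - reference_mean) / sd

# Hypothetical per-channel transition lags in seconds (invented data)
lags = [-150, -90, -130, -60, -110, -160]
print(round(cohens_d(lags), 2))
```

A negative d, as for the lateral occipital cortex above, indicates a region whose transition reliably precedes the scalp EEG transition.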
Affiliation(s)
- Laure Peter-Derex
- Center for Sleep Medicine and Respiratory Diseases, Croix-Rousse Hospital, University Hospital of Lyon, Lyon 1 University, 69004 Lyon, France
- Lyon Neuroscience Research Center, CNRS UMR5292/INSERM U1028, 69000 Lyon, France
- Nicolás von Ellenrieder
- Montreal Neurological Institute and Hospital, McGill University, Montreal, QC H3A 2B4, Canada
- Frank van Rosmalen
- Montreal Neurological Institute and Hospital, McGill University, Montreal, QC H3A 2B4, Canada
- Jeffery Hall
- Montreal Neurological Institute and Hospital, McGill University, Montreal, QC H3A 2B4, Canada
- François Dubeau
- Montreal Neurological Institute and Hospital, McGill University, Montreal, QC H3A 2B4, Canada
- Jean Gotman
- Montreal Neurological Institute and Hospital, McGill University, Montreal, QC H3A 2B4, Canada
- Birgit Frauscher
- Montreal Neurological Institute and Hospital, McGill University, Montreal, QC H3A 2B4, Canada
- Analytical Neurophysiology Lab, Montreal Neurological Institute and Hospital, McGill University, Montreal, QC H3A 2B4, Canada
8
Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. [PMID: 36706637] [DOI: 10.1016/j.media.2023.102757]
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main player in the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 until 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore, and SpringerLink databases. We propose a new taxonomy covering use case, technical methodology for registration and tracking, data sources, visualization, and validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interaction, usability, and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications; nevertheless, only a few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, to pave the way for novel, innovative directions and translation into the medical routine.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
9
Boaro A, Moscolo F, Feletti A, Polizzi G, Nunes S, Siddi F, Broekman M, Sala F. Visualization, navigation, augmentation. The ever-changing perspective of the neurosurgeon. Brain Spine 2022; 2:100926. [PMID: 36248169] [PMCID: PMC9560703] [DOI: 10.1016/j.bas.2022.100926]
Abstract
Introduction: The evolution of neurosurgery coincides with the evolution of visualization and navigation. Augmented reality technologies, with their ability to bring digital information into the real environment, have the potential to provide a new, revolutionary perspective to the neurosurgeon.
Research question: To provide an overview of the historical and technical aspects of visualization and navigation in neurosurgery, and to provide a systematic review of augmented reality (AR) applications in neurosurgery.
Material and methods: We provide an overview of the main historical milestones and technical features of visualization and navigation tools in neurosurgery. We systematically searched the PubMed and Scopus databases for AR applications in neurosurgery and specifically discuss their relationship with current visualization and navigation systems, as well as their main limitations.
Results: The evolution of visualization in neurosurgery is embodied by four magnification systems: surgical loupes, the endoscope, the surgical microscope, and more recently the exoscope, each presenting independent features in terms of magnification capability, eye-hand coordination, and the possibility to implement additional functions. In regard to navigation, two independent systems have been developed: frame-based and frame-less systems. The most frequent application setting for AR is brain surgery (71.6%), specifically neuro-oncology (36.2%) and microscope-based systems (29.2%), even though in the majority of cases AR applications presented their own visualization supports (66%).
Discussion and conclusions: The evolution of visualization and navigation in neurosurgery has allowed for the development of more precise instruments; the development and clinical validation of AR applications have the potential to be the next breakthrough, making surgery safer, improving the surgical experience, and reducing costs.
Affiliation(s)
- A. Boaro
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- F. Moscolo
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- A. Feletti
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- G.M.V. Polizzi
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- S. Nunes
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- F. Siddi
- Department of Neurosurgery, Haaglanden Medical Center, The Hague, Zuid-Holland, the Netherlands
- M.L.D. Broekman
- Department of Neurosurgery, Haaglanden Medical Center, The Hague, Zuid-Holland, the Netherlands
- Department of Neurosurgery, Leiden University Medical Center, Leiden, Zuid-Holland, the Netherlands
- F. Sala
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
10
Gueziri HE, Georgiopoulos M, Santaguida C, Collins DL. Ultrasound-based navigated pedicle screw insertion without intraoperative radiation: feasibility study on porcine cadavers. Spine J 2022; 22:1408-1417. [PMID: 35523390] [DOI: 10.1016/j.spinee.2022.04.014]
Abstract
BACKGROUND: Navigation systems for spinal fusion surgery rely on intraoperative computed tomography (CT) or fluoroscopy imaging. Both expose the patient, surgeons, and operating room staff to significant amounts of radiation. Alternative methods involving intraoperative ultrasound (iUS) imaging have recently shown promise for image-to-patient registration. Yet the feasibility and safety of iUS navigation in spinal fusion have not been demonstrated.
PURPOSE: To evaluate the accuracy of pedicle screw insertion in lumbar and thoracolumbar spinal fusion using a fully automated iUS navigation system.
STUDY DESIGN: Prospective porcine cadaver study.
METHODS: Five porcine cadavers were used to instrument the lumbar and thoracolumbar spine using posterior open surgery. During the procedure, iUS images were acquired and used to establish automatic registration between the anatomy and preoperative CT images. Navigation was performed with the preoperative CT using tracked instruments. The accuracy of the system was measured as the distance of manually collected points to the preoperative CT vertebral surface and compared against fiducial-based registration. A postoperative CT was acquired, and screw placements were manually verified. We report breach rates, as well as axial and sagittal screw deviations.
RESULTS: A total of 56 screws were inserted (5.50 mm diameter, n=50; 6.50 mm diameter, n=6). Fifty-two screws were inserted safely without breach. Four screws (7.14%) presented a medial breach, with an average deviation of 1.35±0.37 mm (all <2 mm). Two breaches were caused by 6.50 mm diameter screws and two by 5.50 mm screws. For vertebrae instrumented with 5.50 mm screws, the average axial diameter of the pedicle was 9.29 mm, leaving a 1.89 mm margin in the left and right pedicles. For vertebrae instrumented with 6.50 mm screws, the average axial diameter of the pedicle was 8.99 mm, leaving a 1.24 mm error margin in the left and right pedicles. The average distance to the vertebral surface was 0.96 mm using iUS registration and 0.97 mm using fiducial-based registration.
CONCLUSIONS: We successfully implanted all pedicle screws in the thoracolumbar spine using the ultrasound-based navigation system. All breaches recorded were minor (<2 mm), and the breach rate (7.14%) was comparable to the existing literature. More investigation is needed to evaluate consistency, reproducibility, and performance in a surgical context.
CLINICAL SIGNIFICANCE: Intraoperative US-based navigation is feasible and practical for pedicle screw insertion in a porcine model. It might be used as a low-cost, radiation-free alternative to intraoperative CT and fluoroscopy in the future.
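The per-side error margins quoted in the results follow from simple geometry: with a screw centred in the pedicle, the margin on each side is half the difference between pedicle and screw diameters. A quick sketch reproducing the abstract's figures:

```python
def pedicle_margin(pedicle_diameter_mm, screw_diameter_mm):
    """Per-side safety margin (mm) for a screw centred in the pedicle."""
    return (pedicle_diameter_mm - screw_diameter_mm) / 2

# Figures from the abstract: 5.50 mm screws in 9.29 mm pedicles,
# and 6.50 mm screws in 8.99 mm pedicles
print(pedicle_margin(9.29, 5.50))  # ~1.89-1.90 mm
print(pedicle_margin(8.99, 6.50))  # ~1.24-1.25 mm
```

These reproduce the abstract's reported margins of 1.89 mm and 1.24 mm, and make clear why the wider 6.50 mm screws, with roughly 0.65 mm less clearance per side, accounted for a disproportionate share of the breaches.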
Affiliation(s)
- Houssem-Eddine Gueziri
- McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, 3801 University St, Montreal, Quebec, Canada.
- Miltiadis Georgiopoulos
- Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, 3801 University St, Montreal, Quebec, Canada
- Carlo Santaguida
- Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, 3801 University St, Montreal, Quebec, Canada
- D Louis Collins
- McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, 3801 University St, Montreal, Quebec, Canada; Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, 3801 University St, Montreal, Quebec, Canada
11
Egger J, Wild D, Weber M, Bedoya CAR, Karner F, Prutsch A, Schmied M, Dionysio C, Krobath D, Jin Y, Gsaxner C, Li J, Pepe A. Studierfenster: an Open Science Cloud-Based Medical Imaging Analysis Platform. J Digit Imaging 2022; 35:340-355. [PMID: 35064372 PMCID: PMC8782222 DOI: 10.1007/s10278-021-00574-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 12/14/2021] [Accepted: 12/16/2021] [Indexed: 02/06/2023] Open
Abstract
Imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are widely used in diagnostics, clinical studies, and treatment planning. Automatic algorithms for image analysis have thus become an invaluable tool in medicine. Examples of this are two- and three-dimensional visualizations, image segmentation, and the registration of all anatomical structure and pathology types. In this context, we introduce Studierfenster (www.studierfenster.at): a free, non-commercial open science client-server framework for (bio-)medical image analysis. Studierfenster offers a wide range of capabilities, including the visualization of medical data (CT, MRI, etc.) in two-dimensional (2D) and three-dimensional (3D) space in common web browsers, such as Google Chrome, Mozilla Firefox, Safari, or Microsoft Edge. Other functionalities are the calculation of medical metrics (Dice score and Hausdorff distance), manual slice-by-slice outlining of structures in medical images, manual placing of (anatomical) landmarks in medical imaging data, visualization of medical data in virtual reality (VR), and a facial reconstruction and registration of medical data for augmented reality (AR). More sophisticated features include the automatic cranial implant design with a convolutional neural network (CNN), the inpainting of aortic dissections with a generative adversarial network, and a CNN for automatic aortic landmark detection in CT angiography images. A user study with medical and non-medical experts in medical image analysis was performed to evaluate the usability and the manual functionalities of Studierfenster. When participants were asked about their overall impression of Studierfenster in an ISO standard (ISO-Norm) questionnaire, a mean of 6.3 out of 7.0 possible points was achieved.
The evaluation also provided insights into the results achievable with Studierfenster in practice, by comparing them with two ground truth segmentations performed by a physician of the Medical University of Graz in Austria. In this contribution, we presented an online environment for (bio-)medical image analysis. In doing so, we established a client-server-based architecture that is able to process medical data, especially 3D volumes. Our online environment is not limited to medical applications for humans. Rather, its underlying concept could be of interest to researchers from other fields, whether they apply the existing functionalities or add further image processing applications. An example could be the processing of CT or MRI acquisitions from animals [Clinical Pharmacology & Therapeutics, 84(4):448-456, 68], which are becoming more common as veterinary clinics and centers are increasingly equipped with such imaging devices. Furthermore, applications in entirely non-medical research in which images or volumes need to be processed are also conceivable, such as those in optical measuring techniques, astronomy, or archaeology.
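The two "medical metrics" named in this abstract, the Dice score and the Hausdorff distance, are standard overlap measures for comparing segmentations. A minimal NumPy sketch follows; it is not Studierfenster's actual code, and the function names and brute-force point-set search are illustrative assumptions.

```python
import numpy as np

def dice_score(a, b):
    """Dice overlap between two binary masks (1.0 = identical masks)."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a_pts, b_pts):
    """Symmetric Hausdorff distance between two point sets:
    the larger of the two directed worst-case nearest-neighbour distances."""
    a_pts = np.asarray(a_pts, dtype=float)
    b_pts = np.asarray(b_pts, dtype=float)
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=2)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

For voxel masks, the point sets passed to `hausdorff` would typically be the boundary (or all foreground) voxel coordinates, e.g. via `np.argwhere`.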
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria.
- Computer Algorithms for Medicine Laboratory, Graz, Austria.
- Institute for Artificial Intelligence in Medicine, AI-guided Therapies, University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany.
- Daniel Wild
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Maximilian Weber
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christopher A Ramirez Bedoya
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Florian Karner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Alexander Prutsch
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Michael Schmied
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Dionysio
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Dominik Krobath
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Research Center for Connected Healthcare Big Data, ZhejiangLab, 311121, Hangzhou, Zhejiang, China
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for Artificial Intelligence in Medicine, AI-guided Therapies, University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
12
Patel MR, Jacob KC, Parsons AW, Chavez FA, Ribot MA, Munim MA, Vanjani NN, Pawlowski H, Prabhu MC, Singh K. Systematic Review: Applications of Intraoperative Ultrasound in Spinal Surgery. World Neurosurg 2022; 164:e45-e58. [PMID: 35259500 DOI: 10.1016/j.wneu.2022.02.130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Accepted: 02/28/2022] [Indexed: 10/18/2022]
Abstract
BACKGROUND Due to increased practicality and decreased costs and radiation, interest has risen in intraoperative ultrasound (iUS) for spinal surgery applications; however, few studies have provided a robust overview of its use in spinal surgery. We synthesize the findings of the existing literature on the use of iUS in navigation, pedicle screw placement, and identification of anatomy during spinal interventions. METHODS Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were utilized in this systematic review. Studies were identified through the PubMed, Scopus, and Google Scholar databases using the search string. Abstracts mentioning iUS in spine applications were included. Upon full-text review, exclusion criteria were implemented, including outdated studies or those with weak topic relevance or statistical power. Upon elimination of duplicates, multi-reviewer screening for eligibility, and citation search, 44 manuscripts were analyzed. RESULTS Navigation using iUS is safe, effective, and economical. iUS registration accuracy and success are within clinically acceptable limits for image-guided navigation (Table 2). Pedicle screw instrumentation with iUS is precise, with a favorable safety profile (Table 2). Anatomical landmarks are reliably identified with iUS, and surgeons are overwhelmingly successful in neural or vascular tissue identification with iUS modalities including standard B-mode, Doppler, and contrast-enhanced ultrasound (CE-US) (Table 3). iUS use in traumatic reduction of fractures properly identifies anatomical structures, intervertebral disc space, and vasculature (Table 3). CONCLUSION iUS eliminates radiation, decreases costs, and provides sufficient accuracy and reliability in the identification of anatomical and neurovascular structures in various spinal surgery settings.
Affiliation(s)
- Madhav R Patel
- Department of Orthopaedic Surgery, Rush University Medical Center, 1611 W. Harrison St. Suite #300, Chicago, IL, 60612
- Kevin C Jacob
- Department of Orthopaedic Surgery, Rush University Medical Center, 1611 W. Harrison St. Suite #300, Chicago, IL, 60612
- Alexander W Parsons
- Department of Orthopaedic Surgery, Rush University Medical Center, 1611 W. Harrison St. Suite #300, Chicago, IL, 60612
- Frank A Chavez
- Department of Orthopaedic Surgery, Rush University Medical Center, 1611 W. Harrison St. Suite #300, Chicago, IL, 60612
- Max A Ribot
- Department of Orthopaedic Surgery, Rush University Medical Center, 1611 W. Harrison St. Suite #300, Chicago, IL, 60612
- Mohammed A Munim
- Department of Orthopaedic Surgery, Rush University Medical Center, 1611 W. Harrison St. Suite #300, Chicago, IL, 60612
- Nisheka N Vanjani
- Department of Orthopaedic Surgery, Rush University Medical Center, 1611 W. Harrison St. Suite #300, Chicago, IL, 60612
- Hanna Pawlowski
- Department of Orthopaedic Surgery, Rush University Medical Center, 1611 W. Harrison St. Suite #300, Chicago, IL, 60612
- Michael C Prabhu
- Department of Orthopaedic Surgery, Rush University Medical Center, 1611 W. Harrison St. Suite #300, Chicago, IL, 60612
- Kern Singh
- Department of Orthopaedic Surgery, Rush University Medical Center, 1611 W. Harrison St. Suite #300, Chicago, IL, 60612.
13
Trends in the Use of Augmented Reality, Virtual Reality, and Mixed Reality in Surgical Research: a Global Bibliometric and Visualized Analysis. Indian J Surg 2022; 84:52-69. [PMID: 35228782 PMCID: PMC8866921 DOI: 10.1007/s12262-021-03243-w] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2021] [Accepted: 12/11/2021] [Indexed: 11/15/2022] Open
Abstract
There have been many major developments in the use of augmented reality (AR), virtual reality (VR), and mixed reality (MR) technologies in the context of global surgical research, yet few reports on the trends in this field have been published to date. This study was therefore designed to explore these worldwide trends in this clinically important field. Relevant studies published from 1 January 2009 through 13 October 2020 were retrieved from the Science Citation Index-Expanded (SCI-E) tool of the Web of Science database. Bibliometric techniques were then used to analyze the resultant data, with visual bibliographic coupling, co-authorship, co-citation, co-occurrence, and publication trend analyses subsequently being conducted with GraphPad Prism 8 and with the visualization of similarities (VOS) software tool. No patients or members of the public were involved. In total, 6221 relevant studies were incorporated into this analysis. At a high level, clear global annual increases in the number of publications in this field were observed. The USA made the greatest contributions to this field over the studied period, with the highest H-index value, the most citations, and the greatest total link strength for analyzed publications. The country with the highest number of average citations per publication was Scotland. The journal Surgical Endoscopy and Other Interventional Techniques contributed the greatest number of publications in this field. The University of London was the institution that produced the greatest volume of research in this field. Overall, studies could be broadly classified into five clusters: Neurological Research, Surgical Techniques, Technological Products, Rehabilitative Medicine, and Clinical Therapy. The trends detected in the present analysis suggest that the number of global publications pertaining to the use of AR, VR, and MR techniques in surgical research is likely to increase in the coming years.
Particular attention should be paid to emerging trends in related fields including MR, extended reality, head-mounted displays, navigation, and holographic images.
14
Aguilar-Salinas P, Gutierrez-Aguirre SF, Avila MJ, Nakaji P. Current status of augmented reality in cerebrovascular surgery: a systematic review. Neurosurg Rev 2022; 45:1951-1964. [PMID: 35149900 DOI: 10.1007/s10143-022-01733-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Revised: 12/01/2021] [Accepted: 01/05/2022] [Indexed: 12/29/2022]
Abstract
Augmented reality (AR) is an adjuvant tool in neuronavigation to improve spatial and anatomic understanding. The present review aims to describe the current status of intraoperative AR for the treatment of cerebrovascular pathology. A systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The following databases were searched: PubMed, Science Direct, Web of Science, and EMBASE up to December 2020. The search strategy consisted of "augmented reality," "AR," "cerebrovascular," "navigation," "neurovascular," "neurosurgery," and "endovascular" in both AND and OR combinations. Studies included were original research articles with intraoperative application. The manuscripts were thoroughly examined for study design, outcomes, and results. Sixteen studies were identified describing the use of intraoperative AR in the treatment of cerebrovascular pathology. A total of 172 patients were treated for 190 cerebrovascular lesions using intraoperative AR. The most common treated pathology was intracranial aneurysms. Most studies were case series, and there was only one case-control study. A head-up display system in the microscope was the most common AR display. AR was found to be useful for tailoring the craniotomy, dura opening, and proper identification of donor and recipient vessels in vascular bypass. Most AR systems were unable to account for tissue deformation. This systematic review suggests that intraoperative AR is becoming a promising and feasible adjunct in the treatment of cerebrovascular pathology. It has been found to be a useful tool in preoperative planning and intraoperative guidance. However, its clinical benefits remain to be seen.
Affiliation(s)
- Pedro Aguilar-Salinas
- Department of Neurosurgery, Banner University Medical Center, University of Arizona, Tucson, AZ, USA
- Mauricio J Avila
- Department of Neurosurgery, Banner University Medical Center, University of Arizona, Tucson, AZ, USA
- Peter Nakaji
- Department of Neurosurgery, Banner University Medical Center, University of Arizona, 755 E. McDowell Rd, Phoenix, AZ, 85006, USA.
15
Mishra R, Narayanan MK, Umana GE, Montemurro N, Chaurasia B, Deora H. Virtual Reality in Neurosurgery: Beyond Neurosurgical Planning. Int J Environ Res Public Health 2022; 19:ijerph19031719. [PMID: 35162742 PMCID: PMC8835688 DOI: 10.3390/ijerph19031719] [Citation(s) in RCA: 79] [Impact Index Per Article: 26.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/03/2022] [Revised: 01/29/2022] [Accepted: 01/30/2022] [Indexed: 02/04/2023]
Abstract
Background: While several publications have focused on the intuitive role of augmented reality (AR) and virtual reality (VR) in neurosurgical planning, the aim of this review was to explore other avenues, where these technologies have significant utility and applicability. Methods: This review was conducted by searching PubMed, PubMed Central, Google Scholar, the Scopus database, the Web of Science Core Collection database, and the SciELO citation index, from 1989–2021. An example of a search strategy used in PubMed Central is: “Virtual reality” [All Fields] AND (“neurosurgical procedures” [MeSH Terms] OR (“neurosurgical” [All Fields] AND “procedures” [All Fields]) OR “neurosurgical procedures” [All Fields] OR “neurosurgery” [All Fields] OR “neurosurgery” [MeSH Terms]). Using this search strategy, we identified 487 (PubMed), 1097 (PubMed Central), and 275 citations (Web of Science Core Collection database). Results: Articles were found and reviewed showing numerous applications of VR/AR in neurosurgery. These applications included their utility as a supplement and augment for neuronavigation in the fields of diagnosis for complex vascular interventions, spine deformity correction, resident training, procedural practice, pain management, and rehabilitation of neurosurgical patients. These technologies have also shown promise in other areas of neurosurgery, such as consent taking, training of ancillary personnel, and improving patient comfort during procedures, as well as a tool for training neurosurgeons in other advancements in the field, such as robotic neurosurgery. Conclusions: We present the first review of the immense possibilities of VR in neurosurgery, beyond merely planning for surgical procedures. The importance of VR and AR, especially in “social distancing” in neurosurgery training, for economically disadvantaged sections, for prevention of medicolegal claims and in pain management and rehabilitation, is promising and warrants further research.
Affiliation(s)
- Rakesh Mishra
- Department of Neurosurgery, Institute of Medical Sciences, Banaras Hindu University, Varanasi 221005, India
- Giuseppe E. Umana
- Trauma and Gamma-Knife Center, Department of Neurosurgery, Cannizzaro Hospital, 95100 Catania, Italy
- Nicola Montemurro
- Department of Neurosurgery, Azienda Ospedaliera Universitaria Pisana (AOUP), University of Pisa, 56100 Pisa, Italy
- Bipin Chaurasia
- Department of Neurosurgery, Bhawani Hospital, Birgunj 44300, Nepal
- Harsh Deora
- Department of Neurosurgery, National Institute of Mental Health and Neurosciences, Bengaluru 560029, India
16
From Patient to Musician: A Multi-Sensory Virtual Reality Rehabilitation Tool for Spatial Neglect. Appl Sci (Basel) 2022. [DOI: 10.3390/app12031242] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Unilateral Spatial Neglect (USN) commonly results from a stroke or acquired brain injury. USN affects multiple modalities and results in failure to respond to stimuli on the contralesional side of space. Although USN is a heterogeneous syndrome, present-day therapy methods often fail to consider multiple modalities. Musical Neglect Therapy (MNT) is a therapy method that succeeds in incorporating multiple modalities by asking patients to make music. This research aimed to exploit the immersive and modifiable aspect of VR to translate MNT to a VR therapy tool. The tool was evaluated in a 2-week pilot study with four clinical users. These results are compared to a control group of four non-clinical users. Results indicated that patients responded to triggers in their entire environment and performance results could be clearly differentiated between clinical and non-clinical users. Moreover, patients increasingly corrected their head direction towards their neglected side. Patients stated that the use of VR increased their enjoyment of the therapy. This study contributes to the current research on rehabilitation for USN by proposing the first system to apply MNT in a VR environment. The tool shows promise as an addition to currently used rehabilitation methods. However, results are limited to a small sample size and performance metrics. Future work will focus on validating these results with a larger sample over a longer period. Moreover, future efforts should explore personalisation and gamification to tailor to the heterogeneity of the condition.
17
Klimes P, Peter-Derex L, Hall J, Dubeau F, Frauscher B. Spatio-temporal spike dynamics predict surgical outcome in adult focal epilepsy. Clin Neurophysiol 2021; 134:88-99. [PMID: 34991017 DOI: 10.1016/j.clinph.2021.10.023] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2021] [Revised: 10/27/2021] [Accepted: 10/29/2021] [Indexed: 01/05/2023]
Abstract
OBJECTIVE We hypothesized that spatio-temporal dynamics of interictal spikes reflect the extent and stability of epileptic sources and determine surgical outcome. METHODS We studied 30 consecutive patients (14 good outcome). Spikes were detected in prolonged stereo-electroencephalography recordings. We quantified the spatio-temporal dynamics of spikes using the variance of the spike rate, line length and skewness of the spike distribution, and related these features to outcome. We built a logistic regression model, and compared its performance to traditional markers. RESULTS Good outcome patients had more dominant and stable sources than poor outcome patients, as expressed by a higher variance of spike rates, a lower variance of line length, and a lower variance of positive skewness (ps < 0.05). The outcome was correctly predicted in 80% of patients. This was better than or non-inferior to predictions based on a focal lesion (p = 0.016), focal seizure-onset zone, or complete resection (ps > 0.05). In the five patients where traditional markers failed, spike distribution predicted the outcome correctly. The best results were achieved with recording periods of 18 h or longer. CONCLUSIONS Analysis of spike dynamics shows that surgical outcome depends on strong, single, and stable sources. SIGNIFICANCE Our quantitative method has the potential to be a reliable predictor of surgical outcome.
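The spike-distribution features described in this abstract (variance of the spike rate, skewness of the distribution) can be illustrated on a toy spike-count matrix. This sketch is a hypothetical simplification: the function name and the (channels × windows) layout are assumptions, and the authors' full feature set (including line length and its variance over time) is richer than what is shown here.

```python
import numpy as np

def spike_features(spike_counts):
    """spike_counts: array of shape (channels, windows) with the number of
    detected interictal spikes per channel in each time window.
    Returns two illustrative summary features: the variance of the mean
    spike rate over time, and the variance of the per-window skewness of
    the spatial spike distribution."""
    spike_counts = np.asarray(spike_counts, dtype=float)
    rates = spike_counts.mean(axis=0)       # mean spike rate per window
    rate_var = rates.var()                  # stability of the rate over time
    mu = spike_counts.mean(axis=0)          # spatial mean per window
    sd = spike_counts.std(axis=0)           # spatial spread per window
    sd = np.where(sd == 0, 1.0, sd)         # guard against division by zero
    # skewness of the spatial spike distribution in each window
    skew = (((spike_counts - mu) / sd) ** 3).mean(axis=0)
    return rate_var, skew.var()
```

Features of this kind would then feed a logistic regression classifier, as in the study, to predict good versus poor surgical outcome.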
Affiliation(s)
- Petr Klimes
- Analytical Neurophysiology Lab, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada; Institute of Scientific Instruments, The Czech Academy of Sciences, Brno, Czech Republic.
- Laure Peter-Derex
- Analytical Neurophysiology Lab, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada; Center for Sleep Medicine and Respiratory Diseases, Lyon University Hospital, Lyon 1 University, Lyon, France; Lyon Neuroscience Research Center, Lyon, France
- Jeff Hall
- Montreal Neurological Hospital, McGill University, Montreal, Quebec, Canada
- François Dubeau
- Montreal Neurological Hospital, McGill University, Montreal, Quebec, Canada
- Birgit Frauscher
- Analytical Neurophysiology Lab, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada.
18
Uddin SA, Hanna G, Ross L, Molina C, Urakov T, Johnson P, Kim T, Drazin D. Augmented Reality in Spinal Surgery: Highlights From Augmented Reality Lectures at the Emerging Technologies Annual Meetings. Cureus 2021; 13:e19165. [PMID: 34873508 PMCID: PMC8631483 DOI: 10.7759/cureus.19165] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/31/2021] [Indexed: 12/26/2022] Open
Abstract
Introduction Augmented reality (AR) is an advanced technology and emerging field that has been adopted into spine surgery to enhance care and outcomes. AR superimposes a three-dimensional computer-generated image over the normal anatomy of interest in order to facilitate visualization of deep structures without the ability to directly see them. Objective To summarize the latest literature and highlight AR from the annual “Spinal Navigation, Emerging Technologies and Systems Integration” meeting lectures presented by the Seattle Science Foundation (SSF) on the development and use of augmented reality in spinal surgery. Methods We performed a comprehensive literature review from 2016 to 2020 on PubMed to correlate with lectures given at the annual “Emerging Technologies” conferences. After the exclusion of papers that concerned non-spine surgery specialties, a total of 54 papers concerning AR in spinal applications were found. The articles were then categorized by content and focus. Results The 54 papers were divided into six major focused topics: training, proof of concept, feasibility and usability, clinical evaluation, state of technology, and nonsurgical applications. The greatest number of papers were published during 2020. Each paper discussed varied topics such as patient rehabilitation, proof of concept, workflow, applications in neurological and orthopedic spine surgery, and outcomes data. Conclusions The recent literature and SSF lectures on AR provide a solid base and demonstrate the emergence of an advanced technology whose chief advantage is that it allows the operating surgeon to focus directly on the patient rather than on a guidance screen.
Affiliation(s)
- George Hanna
- Neurosurgery, Cedars-Sinai Spine Center, Los Angeles, USA
- Lindsey Ross
- Neurology and Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, USA
- Camilo Molina
- Neurological Surgery, Washington University School of Medicine, St. Louis, USA
- Timur Urakov
- Neurological Surgery, University of Miami, Miami, USA
- Patrick Johnson
- Neurological Surgery, Cedars-Sinai Medical Center, Los Angeles, USA
- Terrence Kim
- Orthopedic Surgery, Cedars-Sinai Medical Center, Los Angeles, USA
- Doniel Drazin
- Medicine, Pacific Northwest University of Health Sciences, Yakima, USA
19
Examining the benefits of extended reality in neurosurgery: A systematic review. J Clin Neurosci 2021; 94:41-53. [PMID: 34863461 DOI: 10.1016/j.jocn.2021.09.037] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 08/18/2021] [Accepted: 09/25/2021] [Indexed: 01/14/2023]
Abstract
While well established in other surgical subspecialties, the benefits of extended reality (XR), consisting of virtual reality (VR), augmented reality (AR), and mixed reality (MR) technologies, remain underexplored in neurosurgery despite its increasing utilization. To address this gap, we conducted a systematic review of the effects of XR in neurosurgery, with an emphasis on the perioperative period, to provide a guide for future clinical optimization. Seven primary electronic databases were screened following guidelines outlined by PRISMA and the Institute of Medicine. Reported data related to outcomes in the perioperative period and resident training were all examined, and a focused analysis of studies reporting controlled, clinical outcomes was completed. After removal of duplicates, 2548 studies were screened, with 116 studies reporting measurable effects of XR in neurosurgery. The majority (82%) included cranial-based applications related to tumor surgery, with 34% showing improved resection rates and functional outcomes. A rise in high-quality studies was identified from 2017 to 2020 compared to all previous years (p = 0.004). Primary users of the technology were: 56% neurosurgeons (n = 65), 28% residents (n = 33), and 5% patients (n = 6). A final synthesis was conducted on 10 controlled studies reporting patient outcomes. XR technologies have demonstrated benefits in preoperative planning and multimodal neuronavigation, especially for tumor surgery. However, few studies have reported patient outcomes in a controlled design, demonstrating a need for higher quality data. XR platforms offer several advantages to improve patient outcomes and, specifically, the patient experience in neurosurgery.
20
Khandelwal P, Collins DL, Siddiqi K. Spine and Individual Vertebrae Segmentation in Computed Tomography Images Using Geometric Flows and Shape Priors. Front Comput Sci 2021. [DOI: 10.3389/fcomp.2021.592296] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
The surgical treatment of injuries to the spine often requires the placement of pedicle screws. To prevent damage to nearby blood vessels and nerves, the individual vertebrae and their surrounding tissue must be precisely localized. To aid surgical planning in this context we present a clinically applicable geometric flow based method to segment the human spinal column from computed tomography (CT) scans. We first apply anisotropic diffusion and flux computation to mitigate the effects of region inhomogeneities and partial volume effects at vertebral boundaries in such data. The first pipeline of our segmentation approach uses a region-based geometric flow, requires only a single manually identified seed point to initiate, and runs efficiently on a multi-core central processing unit (CPU). A shape-prior formulation is employed in a separate second pipeline to segment individual vertebrae, using both region and boundary based terms to augment the initial segmentation. We validate our method on four different clinical databases, each of which has a distinct intensity distribution. Our approach obviates the need for manual segmentation, significantly reduces inter- and intra-observer differences, runs in times compatible with use in a clinical workflow, achieves Dice scores that are comparable to the state of the art, and yields precise vertebral surfaces that are well within the acceptable 2 mm mark for surgical interventions.
21
Higueras-Esteban A, Delgado-Martínez I, Serrano L, Principe A, Pérez Enriquez C, González Ballester MÁ, Rocamora R, Conesa G, Serra L. SYLVIUS: A multimodal and multidisciplinary platform for epilepsy surgery. Computer Methods and Programs in Biomedicine 2021; 203:106042. [PMID: 33743489] [DOI: 10.1016/j.cmpb.2021.106042]
Abstract
BACKGROUND AND OBJECTIVE We present SYLVIUS, a software platform intended to facilitate and improve the complex workflow required to diagnose and surgically treat drug-resistant epilepsies. In complex epilepsies, additional invasive information from exploration with stereoencephalography (SEEG) using deep electrodes may be needed, for which input from different diagnostic methods and from clinicians of several specialties is required to ensure diagnostic efficacy and surgical safety. We aim to provide a software platform with optimal data flow among the different stages of epilepsy surgery to support smooth and integrated decision making. METHODS The SYLVIUS platform provides a clinical workflow designed to ensure seamless and safe patient data sharing across specialties. It integrates tools for stereo visualization, data registration, transfer of electrode plans referred to distinct datasets, automated postoperative contact segmentation, and novel DWI tractography analysis. Nineteen cases were retrospectively evaluated to track modifications from an initial plan to a final surgical plan using SYLVIUS. RESULTS The software was used to modify trajectories in all 19 consulted cases, which were then imported into the robotic system for the surgical intervention. When available, SYLVIUS provided extra multimodal information, which resulted in a greater number of trajectory modifications. CONCLUSIONS The architecture presented in this paper streamlines epilepsy surgery, giving clinicians a digital tool that records the different stages of the procedure in a common multimodal 2D/3D setting, enabling different clinicians to participate in defining and validating surgical plans for SEEG cases.
Affiliation(s)
- Alfredo Higueras-Esteban
- Galgo Medical SL, Neurosurgery Dept, Barcelona, Spain; Universitat Pompeu Fabra, BCN Medtech, Dept. of Information and Communication Technologies, Barcelona, Spain
- Laura Serrano
- IMIM-Hospital del Mar, Neurosurgery, Barcelona, Spain
- Miguel Ángel González Ballester
- Universitat Pompeu Fabra, BCN Medtech, Dept. of Information and Communication Technologies, Barcelona, Spain; ICREA, Barcelona, Spain
- Luis Serra
- Galgo Medical SL, Neurosurgery Dept, Barcelona, Spain
22
Fernandes de Oliveira Santos B, de Araujo Paz D, Fernandes VM, Dos Santos JC, Chaddad-Neto FEA, Sousa ACS, Oliveira JLM. Minimally invasive supratentorial neurosurgical approaches guided by Smartphone app and compass. Sci Rep 2021; 11:6778. [PMID: 33762597] [PMCID: PMC7991647] [DOI: 10.1038/s41598-021-85472-3]
Abstract
Precisely locating planned points on the scalp can help achieve less invasive approaches. This study aims to develop a smartphone app, evaluate the precision and accuracy of the developed tool, and describe a series of cases using the referred technique. The application was developed with the React Native framework for Android and iOS. A phantom was printed based on a patient's CT scan and used to calculate the accuracy and precision of the method. The points of interest were marked with an "x" on the patient's head, with the aid of the app and a compass attached to a skin marker pen. Two experienced neurosurgeons then checked the plausibility of the demarcations based on anatomical references. Both evaluators marked the frontal, temporal, and parietal targets within 5 mm of the corresponding intended point in all cases. The overall average accuracy observed was 1.6 ± 1.0 mm. The app was used in the surgical planning of trepanations for ventriculoperitoneal (VP) shunts and for drainage of abscesses, and in the definition of craniotomies for meningiomas, gliomas, brain metastases, intracranial hematomas, cavernomas, and arteriovenous malformation. The sample consisted of 88 volunteers who exhibited the following pathologies: 41 (46.6%) had brain tumors, 17 (19.3%) had traumatic brain injuries, 16 (18.2%) had spontaneous intracerebral hemorrhages, 2 (2.3%) had cavernomas, 1 (1.1%) had an arteriovenous malformation (AVM), 4 (4.5%) had brain abscesses, and 7 (7.9%) had a VP shunt placed. In cases approached by craniotomy, with the exception of the AVM, straight incisions and minicraniotomies were performed. Surgical planning with the aid of the NeuroKeypoint app is feasible and reliable. It has enabled neurological surgeries by craniotomy and trepanation in an accurate, precise, and less invasive manner.
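Accuracy and precision figures like the 1.6 ± 1.0 mm reported above are typically derived from the distances between marked and intended points: the mean distance gives accuracy and its spread gives precision. A hedged illustration in plain Python (toy 2-D coordinates in mm, not the app's implementation):

```python
from math import sqrt
from statistics import mean, stdev

def accuracy_precision(marked, intended):
    # Accuracy: mean Euclidean distance of marked points from the
    # intended (planned) points; precision: sample SD of those distances.
    dists = [sqrt((mx - ix) ** 2 + (my - iy) ** 2)
             for (mx, my), (ix, iy) in zip(marked, intended)]
    return mean(dists), stdev(dists)

# Hypothetical measurements: three markings around planned points at the origin
marked   = [(1.0, 0.0), (0.0, 2.0), (3.0, 0.0)]
intended = [(0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]
acc, prec = accuracy_precision(marked, intended)
print(round(acc, 2), round(prec, 2))  # 2.0 1.0
```

On the real scalp the points are 3-D surface coordinates, but the computation is the same with a third squared term.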
Affiliation(s)
- Bruno Fernandes de Oliveira Santos
- Health Sciences Graduate Program, Federal University of Sergipe, Aracaju, SE, Brazil; Unimed Sergipe Hospital, Aracaju, SE, Brazil; Clinic and Hospital São Lucas / Rede D`Or São Luiz, Aracaju, SE, Brazil; Department of Neurosurgery, Hospital de Cirurgia, Aracaju, SE, Brazil
- Daniel de Araujo Paz
- Department of Neurology and Neurosurgery, Universidade Federal de São Paulo, São Paulo, SP, Brazil
- Antonio Carlos Sobral Sousa
- Health Sciences Graduate Program, Federal University of Sergipe, Aracaju, SE, Brazil; Department of Internal Medicine, Federal University of Sergipe, Aracaju, SE, Brazil; Division of Cardiology, University Hospital, Federal University of Sergipe, Aracaju, SE, Brazil; Clinic and Hospital São Lucas / Rede D`Or São Luiz, Aracaju, SE, Brazil
- Joselina Luzia Menezes Oliveira
- Health Sciences Graduate Program, Federal University of Sergipe, Aracaju, SE, Brazil; Department of Internal Medicine, Federal University of Sergipe, Aracaju, SE, Brazil; Division of Cardiology, University Hospital, Federal University of Sergipe, Aracaju, SE, Brazil; Clinic and Hospital São Lucas / Rede D`Or São Luiz, Aracaju, SE, Brazil
23
Gueziri HE, Rabau O, Santaguida C, Collins DL. Evaluation of an Ultrasound-Based Navigation System for Spine Neurosurgery: A Porcine Cadaver Study. Front Oncol 2021; 11:619204. [PMID: 33763355] [PMCID: PMC7982867] [DOI: 10.3389/fonc.2021.619204]
Abstract
BACKGROUND With the growing incidence of patients receiving surgical treatment for spinal metastatic tumours, there is a need for cost-efficient and radiation-free alternatives for spinal interventions. In this paper, we evaluate the capabilities and limitations of an image-guided neurosurgery (IGNS) system that uses intraoperative ultrasound (iUS) imaging for guidance. METHODS Using a lumbosacral section of a porcine cadaver, we explored the impact of CT image resolution, ultrasound depth, and ultrasound frequency on system accuracy, robustness, and effectiveness. Preoperative CT images were acquired at three isotropic resolutions. During surgery, vertebrae L1 to L6 were exposed. For each vertebra, five iUS scans were acquired using two depth parameters (5 cm and 7 cm) and two frequencies (6 MHz and 12 MHz). A total of 120 acquisition trials were evaluated. Ultrasound-based registration performance is compared to the standard alignment procedure using intraoperative CT. We report target registration error (TRE) and computation time. In addition, the scans' trajectories were analyzed to identify vertebral regions that provide the most relevant features for the alignment. RESULTS For all acquisitions, the median TRE ranged from 1.42 mm to 1.58 mm and the overall computation time was 9.04 s ± 1.58 s. Fourteen of 120 iUS acquisitions (11.66%) yielded a level-to-level mismatch (these are included in the reported accuracy measurements). No significant effect on accuracy was found for CT resolution (F(2,10) = 1.70, p = 0.232), depth (F(1,5) = 0.22, p = 0.659), or frequency (F(1,5) = 1.02, p = 0.359). While misalignment increases linearly with distance from the imaged vertebra, accuracy was satisfactory for directly adjacent levels. A significant relationship was found between accuracy and iUS scan coverage of the laminae and articular processes. CONCLUSION Intraoperative ultrasound can be used for spine surgery neuronavigation. We demonstrated that the IGNS system yields acceptable accuracy and high efficiency compared with the standard CT-based navigation procedure. The flexibility of the iUS acquisitions can affect system performance in ways that are not yet fully identified. Further investigation is needed to understand the relationship between iUS acquisition and alignment performance.
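The target registration error reported above is the distance between each anatomical target mapped through the estimated registration transform and its ground-truth position. A minimal 2-D sketch (the study works in 3-D; transform and points here are hypothetical):

```python
from math import sqrt
from statistics import median

def apply_rigid(T, p):
    # T = (R, t): 2x2 rotation matrix (as row tuples) and translation vector.
    (r00, r01), (r10, r11) = T[0]
    tx, ty = T[1]
    x, y = p
    return (r00 * x + r01 * y + tx, r10 * x + r11 * y + ty)

def tre(T, moving_targets, fixed_targets):
    # Target registration error: distance between each mapped target
    # and its known position in the fixed image.
    errs = []
    for p, q in zip(moving_targets, fixed_targets):
        mx, my = apply_rigid(T, p)
        errs.append(sqrt((mx - q[0]) ** 2 + (my - q[1]) ** 2))
    return errs

# Hypothetical registration that is off by 1 mm along x
T = (((1.0, 0.0), (0.0, 1.0)), (1.0, 0.0))
moving = [(0.0, 0.0), (10.0, 5.0), (3.0, 7.0)]
fixed  = [(0.0, 0.0), (10.0, 5.0), (3.0, 7.0)]  # identical, so residual = |t|
print(median(tre(T, moving, fixed)))  # 1.0
```

Reporting the median over many targets, as the study does, makes the summary robust to the occasional badly mismatched level.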
Affiliation(s)
- Houssem-Eddine Gueziri
- McConnell Brain Imaging Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, QC, Canada
- Oded Rabau
- Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
- Carlo Santaguida
- Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
- D. Louis Collins
- McConnell Brain Imaging Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, QC, Canada
24
Cerebral Anatomy Detection and Surgical Planning in Patients with Anterior Skull Base Meningiomas Using a Virtual Reality Technique. J Clin Med 2021; 10:jcm10040681. [PMID: 33578799] [PMCID: PMC7916569] [DOI: 10.3390/jcm10040681]
Abstract
Anterior skull base meningiomas represent a wide cohort of tumors with different locations, extensions, configurations, and anatomical relationships. Diagnosis of these tumors and review of their therapies are inseparably connected with cranial imaging. We analyzed the influence of three-dimensional virtual reality (3D-VR) reconstructions versus conventional computed tomography (CT) and magnetic resonance imaging (MRI) images (two-dimensional (2D) and screen 3D) on the identification of anatomical structures and on surgical planning in patients with anterior skull base meningiomas. Medical files were retrospectively analyzed regarding patient- and disease-related data. Preoperative 2D-CT and 2D-MRI scans were retrospectively reconstructed to 3D-VR images and visualized via VR software to detect the characteristics of the tumors. A questionnaire completed by experienced neurosurgeons evaluated the influence of the VR visualization technique on the identification of tumor morphology and relevant anatomy and on surgical strategy. Thirty patients were included and 600 answer sheets were evaluated. The 3D-VR modality significantly influenced the detection of tumor-related anatomical structures (p = 0.002), recommended head positioning (p = 0.005), and surgical approach (p = 0.03). Therefore, the reconstruction of conventional preoperative 2D scans into 3D images and the spatial and anatomical presentation in VR models enabled greater understanding of anatomy and pathology, and thus influenced operation planning and strategy.
25
Gerard IJ, Kersten-Oertel M, Hall JA, Sirhan D, Collins DL. Brain Shift in Neuronavigation of Brain Tumors: An Updated Review of Intra-Operative Ultrasound Applications. Front Oncol 2021; 10:618837. [PMID: 33628733] [PMCID: PMC7897668] [DOI: 10.3389/fonc.2020.618837]
Abstract
Neuronavigation using pre-operative imaging data for neurosurgical guidance is a ubiquitous tool for the planning and resection of oncologic brain disease. These systems are rendered unreliable when brain shift invalidates the patient-image registration. Our previous review in 2015, "Brain shift in neuronavigation of brain tumours: A review", offered a new taxonomy, classification system, and a historical perspective on the causes, measurement, and pre- and intra-operative compensation of this phenomenon. Here we present an updated review using the same taxonomy and framework, focused on developments in intra-operative ultrasound-based brain shift research from 2015 to the present (2020). The review was performed using PubMed to identify articles since 2015 containing the specific words and phrases: "Brain shift" AND "Ultrasound". Since 2015, the rate of publication of intra-operative ultrasound-based articles in the context of brain shift has increased from 2–3 per year to 8–10 per year. This efficient and low-cost technology, and increasing comfort among clinicians and researchers, have allowed unique avenues of development. Since 2015, there has been a trend towards more mathematical advancements in the field, which are often validated on publicly available datasets from early intra-operative ultrasound research and may not fairly represent the intra-operative imaging landscape in modern image-guided neurosurgery. Vessel-based registration and virtual and augmented reality paradigms have gained traction, offering new perspectives to overcome some of the pitfalls of ultrasound-based technologies. Unfortunately, clinical adaptation and evaluation have not seen as significant a publication boost. Brain shift continues to be a highly prevalent pitfall in maintaining accuracy throughout oncologic neurosurgical intervention and remains an area of active research. Intra-operative ultrasound continues to show promise as an effective, efficient, and low-cost solution for intra-operative accuracy management. A major drawback of the current research landscape is that mathematical tool validation based on retrospective data outpaces prospective clinical evaluation, decreasing the strength of the evidence. Newer and more publicly available clinical datasets will be instrumental in more reliable validation of these methods, reflecting modern intra-operative imaging in these procedures.
Affiliation(s)
- Ian J Gerard
- Department of Radiation Oncology, McGill University Health Centre, Montreal, QC, Canada
- Jeffery A Hall
- Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
- Denis Sirhan
- Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
- D Louis Collins
- Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
26
Zawy Alsofy S, Nakamura M, Ewelt C, Kafchitsas K, Lewitz M, Schipmann S, Suero Molina E, Santacroce A, Stroop R. Retrospective Comparison of Minimally Invasive and Open Monosegmental Lumbar Fusion, and Impact of Virtual Reality on Surgical Planning and Strategy. J Neurol Surg A Cent Eur Neurosurg 2021; 82:399-409. [PMID: 33540454] [DOI: 10.1055/s-0040-1719099]
Abstract
BACKGROUND AND STUDY AIMS Spinal fusion for symptomatic lumbar spondylolisthesis can be accomplished using an open or minimally invasive surgical (MIS) technique. Evaluation of segmental spondylolisthesis and instabilities and review of their therapies are inseparably connected with lumbar tomographic imaging. We analyzed a cohort of patients who underwent MIS or open monosegmental dorsal fusion and compared surgical outcomes along with complication rates. We furthermore evaluated the influence of virtual reality (VR) visualization on surgical planning in lumbar fusion. MATERIAL AND METHODS Patient files were retrospectively analyzed regarding patient- and disease-related data, operative performance, surgical outcomes, and perioperative surgical complications. Preoperative computed tomography (CT) and magnetic resonance imaging (MRI) scans were retrospectively visualized via VR software. A questionnaire evaluated the influence of three-dimensional (3D) VR images versus two-dimensional CT and MRI scans on therapy planning, fusion method, and surgical technique and procedure. RESULTS Overall, 171 patients were included (MIS/open: 90/81). MIS was associated with less blood loss, shorter surgery time and hospital stay, lower complication rates, equivalent long-term patient-reported outcomes, but lower fusion rates and higher late reoperation rates than open surgery. Image presentation using VR significantly influenced the recommended surgical therapies (decompression only/decompression and fusion; p = 0.02), had no significant influence on the recommended fusion method (rigid/dynamic/stand-alone; p = 0.77), and, in cases of rigid fusion, a significant influence on the recommended technique (MIS/open; p = 0.03) and fusion procedure (p = 0.02). 
CONCLUSION In patients with monosegmental degenerative or isthmic spondylolisthesis, MIS fusion was advantageous concerning perioperative complication rates and perioperative surgical outcomes, but disadvantageous regarding fusion and reoperation rates compared to open fusion. 3D-VR-based analysis of sectional images significantly influenced the recommended surgical planning.
Affiliation(s)
- Samer Zawy Alsofy
- Department of Medicine, Faculty of Health, Witten/Herdecke University, Witten, Nordrhein-Westfalen, Germany; Department of Neurosurgery, Saint Barbara-Hospital Hamm-Heessen, Academic Hospital of Westfälische Wilhelms-University Münster, Hamm, Nordrhein-Westfalen, Germany
- Makoto Nakamura
- Department of Neurosurgery, Academic Hospital Köln-Merheim, Witten/Herdecke University, Köln, Nordrhein-Westfalen, Germany
- Christian Ewelt
- Department of Neurosurgery, Saint Barbara-Hospital Hamm-Heessen, Academic Hospital of Westfälische Wilhelms-University Münster, Hamm, Nordrhein-Westfalen, Germany
- Konstantinos Kafchitsas
- Department of Spine Surgery, Asklepios Orthopedic Hospital Lindenlohe, Schwandorf, Bayern, Germany
- Marc Lewitz
- Department of Neurosurgery, Saint Barbara-Hospital Hamm-Heessen, Academic Hospital of Westfälische Wilhelms-University Münster, Hamm, Nordrhein-Westfalen, Germany
- Stephanie Schipmann
- Department of Neurosurgery, University Hospital Münster, Münster, Nordrhein-Westfalen, Germany
- Eric Suero Molina
- Department of Neurosurgery, University Hospital Münster, Münster, Nordrhein-Westfalen, Germany
- Antonio Santacroce
- Department of Neurosurgery, Saint Barbara-Hospital Hamm-Heessen, Academic Hospital of Westfälische Wilhelms-University Münster, Hamm, Nordrhein-Westfalen, Germany; Department of Neurosurgery, Eberhard Karls University, Tübingen, Baden-Württemberg, Germany
- Ralf Stroop
- Department of Medicine, Faculty of Health, Witten/Herdecke University, Witten, Nordrhein-Westfalen, Germany
27
Reinertsen I, Collins DL, Drouin S. The Essential Role of Open Data and Software for the Future of Ultrasound-Based Neuronavigation. Front Oncol 2021; 10:619274. [PMID: 33604299] [PMCID: PMC7884817] [DOI: 10.3389/fonc.2020.619274]
Abstract
With the recent developments in machine learning and modern graphics processing units (GPUs), there is a marked shift in the way intra-operative ultrasound (iUS) images can be processed and presented during surgery. Real-time processing of images to highlight important anatomical structures, combined with in situ display, has the potential to greatly facilitate the acquisition and interpretation of iUS images when guiding an operation. In order to take full advantage of the recent advances in machine learning, large amounts of high-quality annotated training data are necessary to develop and validate the algorithms. To ensure efficient collection of a sufficient number of patient images and external validity of the models, training data should be collected at several centers by different neurosurgeons, and stored in a standard format directly compatible with the most commonly used machine learning toolkits and libraries. In this paper, we argue that such an effort to collect and organize large-scale multi-center datasets should be based on common open-source software and databases. We first describe the development of existing open-source ultrasound-based neuronavigation systems and how these systems have contributed to enhanced neurosurgical guidance over the last 15 years. We review the impact of the large number of projects worldwide that have benefited from the publicly available datasets "Brain Images of Tumors for Evaluation" (BITE) and "Retrospective evaluation of Cerebral Tumors" (RESECT) that include MR and US data from brain tumor cases. We also describe the need for continuous data collection and how this effort can be organized through the use of a well-adapted and user-friendly open-source software platform that integrates both continually improved guidance and automated data collection functionalities.
Affiliation(s)
- Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, Trondheim, Norway; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- D Louis Collins
- NIST Laboratory, McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, Montréal, QC, Canada
- Simon Drouin
- Laboratoire Multimédia, École de Technologie Supérieure, Montréal, QC, Canada
28
Fu Z, Jin Z, Zhang C, He Z, Zha Z, Hu C, Gan T, Yan Q, Wang P, Ye X. The Future of Endoscopic Navigation: A Review of Advanced Endoscopic Vision Technology. IEEE Access 2021; 9:41144-41167. [DOI: 10.1109/access.2021.3065104]
29
Ma L, Fei B. Comprehensive review of surgical microscopes: technology development and medical applications. Journal of Biomedical Optics 2021; 26:010901. [PMID: 33398948] [PMCID: PMC7780882] [DOI: 10.1117/1.jbo.26.1.010901]
Abstract
SIGNIFICANCE Surgical microscopes provide adjustable magnification, bright illumination, and clear visualization of the surgical field and have been increasingly used in operating rooms. State-of-the-art surgical microscopes are integrated with various imaging modalities, such as optical coherence tomography (OCT), fluorescence imaging, and augmented reality (AR), for image-guided surgery. AIM This comprehensive review is based on over 500 papers that cover the technology development and applications of surgical microscopy over the past century. The aim of this review is threefold: (i) to provide a comprehensive technical overview of surgical microscopes, (ii) to provide critical references for microscope selection and system development, and (iii) to provide an overview of various medical applications. APPROACH More than 500 references were collected and reviewed. A timeline of important milestones in the evolution of the surgical microscope is provided. An in-depth technical overview of the optical system, mechanical system, illumination, visualization, and integration with advanced imaging modalities is provided. Various medical applications of surgical microscopes in neurosurgery and spine surgery, ophthalmic surgery, ear-nose-throat (ENT) surgery, endodontics, and plastic and reconstructive surgery are described. RESULTS Surgical microscopy has advanced significantly in the technical aspects of high-end optics, bright and shadow-free illumination, stable and flexible mechanical design, and versatile visualization. New imaging modalities, such as hyperspectral imaging, OCT, fluorescence imaging, photoacoustic microscopy, and laser speckle contrast imaging, are being integrated with surgical microscopes. Advanced visualization and AR are being added to surgical microscopes as new features that are changing clinical practices in the operating room. CONCLUSIONS The combination of new imaging technologies and surgical microscopy will enable surgeons to perform challenging procedures and improve surgical outcomes. With advanced visualization and improved ergonomics, the surgical microscope has become a powerful tool in neurosurgery and in spine, ENT, ophthalmic, and plastic and reconstructive surgery.
Affiliation(s)
- Ling Ma
- University of Texas at Dallas, Department of Bioengineering, Richardson, Texas, United States
- Baowei Fei
- University of Texas at Dallas, Department of Bioengineering, Richardson, Texas, United States; University of Texas Southwestern Medical Center, Department of Radiology, Dallas, Texas, United States
30
Zawy Alsofy S, Sakellaropoulou I, Nakamura M, Ewelt C, Salma A, Lewitz M, Welzel Saravia H, Sarkis HM, Fortmann T, Stroop R. Impact of Virtual Reality in Arterial Anatomy Detection and Surgical Planning in Patients with Unruptured Anterior Communicating Artery Aneurysms. Brain Sci 2020; 10:brainsci10120963. [PMID: 33321880] [PMCID: PMC7763342] [DOI: 10.3390/brainsci10120963]
Abstract
Anterior communicating artery (ACoA) aneurysms have diverse configurations and anatomical variations. The evaluation and operative treatment of these aneurysms necessitate a precise surgical strategy based on review of the three-dimensional (3D) angioarchitecture using several radiologic imaging methods. We analyzed the influence of 3D virtual reality (VR) reconstructions versus conventional computed tomography angiography (CTA) scans on the identification of vascular anatomy and on surgical planning in patients with unruptured ACoA aneurysms. Medical files were retrospectively analyzed regarding patient- and disease-related data. Preoperative CTA scans were retrospectively reconstructed to 3D-VR images and visualized via VR software to detect the characteristics of unruptured ACoA aneurysms. A questionnaire was used to evaluate the influence of VR on the identification of aneurysm morphology and relevant arterial anatomy and on surgical strategy. Twenty-six patients were included and 520 answer sheets were evaluated. The 3D-VR modality significantly influenced detection of the aneurysm-related vascular structure (p = 0.0001), the recommended head positioning (p = 0.005), and the surgical approach (p = 0.001) in the planning of microsurgical clipping. Thus, reconstruction of conventional preoperative CTA scans into 3D images and the spatial presentation in VR models enabled greater understanding of the anatomy and pathology, provided realistic feedback for aneurysm surgery, and influenced operation planning and strategy.
Affiliation(s)
- Samer Zawy Alsofy
- Department of Medicine, Faculty of Health, Witten/Herdecke University, 58448 Witten, Germany; Department of Neurosurgery, St. Barbara-Hospital, Academic Hospital of Westfälische Wilhelms-University Münster, 59073 Hamm, Germany
- Ioanna Sakellaropoulou
- Department of Neurosurgery, St. Barbara-Hospital, Academic Hospital of Westfälische Wilhelms-University Münster, 59073 Hamm, Germany
- Makoto Nakamura
- Department of Neurosurgery, Academic Hospital Köln-Merheim, Witten/Herdecke University, 51109 Köln, Germany
- Christian Ewelt
- Department of Neurosurgery, St. Barbara-Hospital, Academic Hospital of Westfälische Wilhelms-University Münster, 59073 Hamm, Germany
- Asem Salma
- Department of Neurosurgery, St. Rita’s Neuroscience Institute, Lima, OH 45801, USA
- Marc Lewitz
- Department of Neurosurgery, St. Barbara-Hospital, Academic Hospital of Westfälische Wilhelms-University Münster, 59073 Hamm, Germany
- Heinz Welzel Saravia
- Department of Neurosurgery, St. Barbara-Hospital, Academic Hospital of Westfälische Wilhelms-University Münster, 59073 Hamm, Germany
- Hraq Mourad Sarkis
- Department of Neurosurgery, St. Barbara-Hospital, Academic Hospital of Westfälische Wilhelms-University Münster, 59073 Hamm, Germany
- Thomas Fortmann
- Department of Neurosurgery, St. Barbara-Hospital, Academic Hospital of Westfälische Wilhelms-University Münster, 59073 Hamm, Germany
- Ralf Stroop
- Department of Medicine, Faculty of Health, Witten/Herdecke University, 58448 Witten, Germany
31
Gueziri HE, Yan CXB, Collins DL. Open-source software for ultrasound-based guidance in spinal fusion surgery. Ultrasound in Medicine & Biology 2020; 46:3353-3368. [PMID: 32907772] [DOI: 10.1016/j.ultrasmedbio.2020.08.005]
Abstract
Spinal instrumentation and surgical manipulations may cause loss of navigation accuracy requiring an efficient re-alignment of the patient anatomy with pre-operative images during surgery. While intra-operative ultrasound (iUS) guidance has shown clear potential to reduce surgery time, compared with clinical computed tomography (CT) guidance, rapid registration aiming to correct for patient misalignment has not been addressed. In this article, we present an open-source platform for pedicle screw navigation using iUS imaging. The alignment method is based on rigid registration of CT to iUS vertebral images and has been designed for fast and fully automatic patient re-alignment in the operating room. Two steps are involved: first, we use the iUS probe's trajectory to achieve an initial coarse registration; then, the registration transform is refined by simultaneously optimizing gradient orientation alignment and mean of iUS intensities passing through the CT-defined posterior surface of the vertebra. We evaluated our approach on a lumbosacral section of a porcine cadaver with seven vertebral levels. We achieved a median target registration error of 1.47 mm (100% success rate, defined by a target registration error <2 mm) when applying the probe's trajectory initial alignment. The approach exhibited high robustness to partial visibility of the vertebra with success rates of 89.86% and 88.57% when missing either the left or right part of the vertebra and robustness to initial misalignments with a success rate of 83.14% for random starts within ±20° rotation and ±20 mm translation. Our graphics processing unit implementation achieves an efficient registration time under 8 s, which makes the approach suitable for clinical application.
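The gradient-orientation term in the refinement step above rewards CT and iUS image gradients that point along the same axis at corresponding locations. A minimal sketch of such a metric (synthetic 2-D gradient vectors, not the platform's actual implementation):

```python
from math import sqrt

def gradient_orientation_alignment(grads_fixed, grads_moving):
    # Mean squared cosine between corresponding gradient directions:
    # 1.0 when gradients are parallel or anti-parallel (same edge, either
    # polarity), 0.0 when orthogonal. Registration maximizes this score.
    total = 0.0
    for (gx1, gy1), (gx2, gy2) in zip(grads_fixed, grads_moving):
        dot = gx1 * gx2 + gy1 * gy2
        n1 = sqrt(gx1 ** 2 + gy1 ** 2)
        n2 = sqrt(gx2 ** 2 + gy2 ** 2)
        total += (dot / (n1 * n2)) ** 2
    return total / len(grads_fixed)

# Hypothetical gradients: parallel and anti-parallel pairs both score 1.0
fixed_g  = [(1.0, 0.0), (0.0, 2.0)]
moving_g = [(1.0, 0.0), (0.0, -1.0)]
print(gradient_orientation_alignment(fixed_g, moving_g))  # 1.0
```

Squaring the cosine makes the score insensitive to gradient polarity, which is useful because bone appears with opposite intensity transitions in CT and ultrasound.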
Affiliation(s)
- Houssem-Eddine Gueziri: McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
- Charles X B Yan: Joint Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- D Louis Collins: McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
32
Winkler-Schwartz A, Yilmaz R, Tran DH, Gueziri HE, Ying B, Tuznik M, Fonov V, Collins L, Rudko DA, Li J, Debergue P, Pazos V, Del Maestro R. Creating a Comprehensive Research Platform for Surgical Technique and Operative Outcome in Primary Brain Tumor Neurosurgery. World Neurosurg 2020; 144:e62-e71. [DOI: 10.1016/j.wneu.2020.07.209] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2020] [Revised: 07/26/2020] [Accepted: 07/28/2020] [Indexed: 02/05/2023]
33
Rapid Eye Movement Sleep Sawtooth Waves Are Associated with Widespread Cortical Activations. J Neurosci 2020; 40:8900-8912. [PMID: 33055279 DOI: 10.1523/jneurosci.1586-20.2020] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2020] [Revised: 09/18/2020] [Accepted: 10/06/2020] [Indexed: 11/21/2022] Open
Abstract
Sawtooth waves (STW) are bursts of frontocentral slow oscillations recorded in the scalp electroencephalogram (EEG) during rapid eye movement (REM) sleep. Little is known about their cortical generators and functional significance. Stereo-EEG performed for presurgical epilepsy evaluation offers the unique possibility to study neurophysiology in situ in the human brain. We investigated intracranial correlates of scalp-detected STW in 26 patients (14 women) undergoing combined stereo-EEG/polysomnography. We visually marked STW segments in scalp EEG and selected stereo-EEG channels exhibiting normal activity for intracranial analyses. Channels were grouped in 30 brain regions. The spectral power in each channel and frequency band was computed during STW and non-STW control segments. Ripples (80-250 Hz) were automatically detected during STW and control segments. The spectral power in the different frequency bands and the ripple rates were then compared between STW and control segments in each brain region. An increase in 2-4 Hz power during STW segments was found in all brain regions, except the occipital lobe, with large effect sizes in the parietotemporal junction, the lateral and orbital frontal cortex, the anterior insula, and mesiotemporal structures. A widespread increase in high-frequency activity, including ripples, was observed concomitantly, involving the sensorimotor cortex, associative areas, and limbic structures. This distribution showed a high spatiotemporal heterogeneity. Our results suggest that STW are associated with widely distributed, but locally regulated REM sleep slow oscillations. 
By driving fast activities, STW may orchestrate synchronized reactivations of multifocal activities, allowing tagging of complex representations necessary for REM sleep-dependent memory consolidation. SIGNIFICANCE STATEMENT: Sawtooth waves (STW) present as scalp electroencephalographic (EEG) bursts of slow waves contrasting with the low-voltage fast desynchronized activity of REM sleep. Little is known about their cortical origin and function. Using combined stereo-EEG/polysomnography possible only in the human brain during presurgical epilepsy evaluation, we explored the intracranial correlates of STW. We found that a large set of regions in the parietal, frontal, and insular cortices shows increases in 2-4 Hz power during scalp EEG STW, that STW are associated with a strong and widespread increase in high frequencies, and that these slow and fast activities exhibit a high spatiotemporal heterogeneity. These electrophysiological properties suggest that STW may be involved in cognitive processes during REM sleep.
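The core measurement in this study, band-limited spectral power compared between STW and control segments, can be sketched with SciPy. This is a hedged toy example on synthetic data, not the authors' pipeline; the sampling rate, segment length, and signal model are invented for illustration.

```python
import numpy as np
from scipy.signal import welch

def band_power(segment, fs, lo, hi):
    # Average Welch PSD of a 1-D segment within the [lo, hi] Hz band
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 2 * fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 200                       # invented sampling rate, Hz
rng = np.random.default_rng(0)
t = np.arange(4 * fs) / fs     # 4-s segments

# Toy "intracranial channel": STW segments carry an extra 3 Hz oscillation
stw_segs = [np.sin(2 * np.pi * 3 * t) + 0.5 * rng.standard_normal(t.size)
            for _ in range(20)]
ctl_segs = [0.5 * rng.standard_normal(t.size) for _ in range(20)]

stw_delta = np.mean([band_power(s, fs, 2, 4) for s in stw_segs])
ctl_delta = np.mean([band_power(s, fs, 2, 4) for s in ctl_segs])
print(stw_delta > ctl_delta)  # True: 2-4 Hz power increase during STW segments
```

The same `band_power` helper applied per channel and per frequency band, with STW versus control comparisons grouped by brain region, mirrors the structure of the analysis described above.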
Collapse
34
Liu T, Tai Y, Zhao C, Wei L, Zhang J, Pan J, Shi J. Augmented reality in neurosurgical navigation: a survey. Int J Med Robot 2020; 16:e2160. [PMID: 32890440 DOI: 10.1002/rcs.2160] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2020] [Revised: 08/19/2020] [Accepted: 08/29/2020] [Indexed: 11/12/2022]
Abstract
BACKGROUND Neurosurgery has exceptionally high requirements for minimal invasiveness and safety. This survey analyzes the practical application of AR in neurosurgical navigation and describes future trends in augmented reality neurosurgical navigation systems. METHODS We searched the keywords "augmented reality", "virtual reality", "neurosurgery", "surgical simulation", "brain tumor surgery", "neurovascular surgery", "temporal bone surgery", and "spinal surgery" in Google Scholar, World Neurosurgery, PubMed, and Science Direct, and collected 85 articles published over the past five years in areas related to this survey. RESULTS A detailed study of the application of AR in neurosurgery found that AR is steadily improving the overall efficiency of doctor training and treatment, and can help neurosurgeons learn and practice surgical procedures with zero risk. CONCLUSIONS Neurosurgical navigation is essential in neurosurgery. Despite certain technical limitations, it remains a necessary tool for the pursuit of maximum safety and minimal invasiveness.
Affiliation(s)
- Tao Liu: Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
- Yonghang Tai: Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
- Chengming Zhao: Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
- Lei Wei: Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, VIC, Australia
- Jun Zhang: Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
- Junjun Pan: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- Junsheng Shi: Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
35
Xiao Y, Lau JC, Hemachandra D, Gilmore G, Khan AR, Peters TM. Image Guidance in Deep Brain Stimulation Surgery to Treat Parkinson's Disease: A Comprehensive Review. IEEE Trans Biomed Eng 2020; 68:1024-1033. [PMID: 32746050 DOI: 10.1109/tbme.2020.3006765] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Deep brain stimulation (DBS) is an effective therapy as an alternative to pharmaceutical treatments for Parkinson's disease (PD). Aside from factors such as instrumentation, treatment plans, and surgical protocols, the success of the procedure depends heavily on the accurate placement of the electrode within the optimal therapeutic targets while avoiding vital structures that can cause surgical complications and adverse neurologic effects. Although specific surgical techniques for DBS can vary, interventional guidance with medical imaging has greatly contributed to the development, outcomes, and safety of the procedure. With rapid development in novel imaging techniques, computational methods, and surgical navigation software, as well as growing insights into the disease and mechanism of action of DBS, modern image guidance is expected to further enhance the capacity and efficacy of the procedure in treating PD. This article surveys the state-of-the-art techniques in image-guided DBS surgery to treat PD, and discusses their benefits and drawbacks, as well as future directions on the topic.
36
Automatic extraction of vertebral landmarks from ultrasound images: A pilot study. Comput Biol Med 2020; 122:103838. [DOI: 10.1016/j.compbiomed.2020.103838] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2020] [Revised: 05/12/2020] [Accepted: 05/26/2020] [Indexed: 11/17/2022]
37
Virtual Reality in Neurosurgery: "Can You See It?"-A Review of the Current Applications and Future Potential. World Neurosurg 2020; 141:291-298. [PMID: 32561486 DOI: 10.1016/j.wneu.2020.06.066] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2020] [Revised: 06/07/2020] [Accepted: 06/07/2020] [Indexed: 01/16/2023]
Abstract
Virtual reality (VR) technology had its early development in the 1960s in the U.S. Air Force and has since evolved into a budding area of scientific research with many practical medical purposes. From medical education to resident training to the operating room, VR has provided tangible benefits to learners and trainees and has also improved surgery through enhanced preoperative planning and efficiency in the operating room. Neurosurgery is a particularly complex field of medicine, in which VR has blossomed into a tool with great usefulness and promise. In spinal surgery, VR simulation has allowed for the practice of innovative minimally invasive procedures. In cranial surgery, VR has excelled in helping neurosurgeons design unique patient-specific approaches to particularly challenging tumor excisions. In neurovascular surgery, VR has helped trainees practice and perfect procedures requiring high levels of dexterity to minimize intraoperative complications and patient radiation exposure. In peripheral nerve surgery, VR has allowed surgeons to gain increased practice and comfort with complex microsurgeries such as nerve decompression. Overall, VR continues to increase its potential in neurosurgery and is poised to benefit patients in a multitude of ways. Although the cost-related, legal, and ethical challenges surrounding this technology must be considered, future research and more direct quantitative outcome comparisons between standard and VR-supplemented procedures would help provide more direction regarding the feasibility of widespread adoption of VR technology in neurosurgery.
38
Thompson S, Dowrick T, Ahmad M, Xiao G, Koo B, Bonmati E, Kahl K, Clarkson MJ. SciKit-Surgery: compact libraries for surgical navigation. Int J Comput Assist Radiol Surg 2020; 15:1075-1084. [PMID: 32436132 PMCID: PMC7316849 DOI: 10.1007/s11548-020-02180-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2019] [Accepted: 04/22/2020] [Indexed: 12/03/2022]
Abstract
Purpose This paper introduces the SciKit-Surgery libraries, designed to enable rapid development of clinical applications for image-guided interventions. SciKit-Surgery implements a family of compact, orthogonal libraries accompanied by robust testing, documentation, and quality control. SciKit-Surgery libraries can be rapidly assembled into testable clinical applications and subsequently translated to production software without the need for software reimplementation. The aim is to support translation from single surgeon trials to multicentre trials in under 2 years. Methods At the time of publication, there were 13 SciKit-Surgery libraries, providing functionality for visualisation and augmented reality in surgery, together with hardware interfaces for video, tracking, and ultrasound sources. The libraries are stand-alone, open source, and provide Python interfaces. This design approach enables fast development of robust applications and subsequent translation. The paper compares the libraries with existing platforms and uses two example applications to show how SciKit-Surgery libraries can be used in practice. Results Using the number of lines of code and the occurrence of cross-dependencies as proxy measurements of code complexity, two example applications using SciKit-Surgery libraries are analysed. The SciKit-Surgery libraries demonstrate the ability to support rapid development of testable clinical applications. By maintaining stricter orthogonality between libraries, the number and complexity of dependencies can be reduced. The SciKit-Surgery libraries also demonstrate the potential to support wider dissemination of novel research. Conclusion The SciKit-Surgery libraries utilise the modularity of the Python language and the standard data types of the NumPy package to provide an easy-to-use, well-tested, and extensible set of tools for the development of applications for image-guided interventions. 
The example application built on SciKit-Surgery has a simpler dependency structure than the same application built using a monolithic platform, making ongoing clinical translation more feasible.
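The complexity proxies used in the Results (lines of code and cross-dependency counts) are easy to compute for Python sources. Below is a minimal sketch assuming module sources are available as strings; the module contents and the `sksurgery`-style import names are illustrative, not taken from the paper's example applications.

```python
import ast

def complexity_proxies(module_sources, internal_prefix):
    # For each module, count lines of code and imports of project-family
    # modules (names starting with internal_prefix): the two proxy measures
    # of complexity used to compare application designs.
    report = {}
    for name, src in module_sources.items():
        tree = ast.parse(src)
        cross = 0
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                cross += sum(a.name.startswith(internal_prefix) for a in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                cross += node.module.startswith(internal_prefix)
        report[name] = {"loc": len(src.splitlines()), "cross_deps": cross}
    return report

# Hypothetical two-module app assembled from small 'sksurgery*'-style libraries
app = {
    "viz.py": "import numpy\nimport sksurgeryvtk.widgets\n\ndef show():\n    pass\n",
    "track.py": "from sksurgerynditracker.nditracker import NDITracker\nimport numpy\n",
}
print(complexity_proxies(app, "sksurgery"))
```

Lower cross-dependency counts per module are what the paper's orthogonality argument predicts for library-based designs relative to a monolithic platform.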
Affiliation(s)
- Stephen Thompson: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Thomas Dowrick: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Mian Ahmad: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Goufang Xiao: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Bongjin Koo: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Ester Bonmati: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Kim Kahl: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Matthew J Clarkson: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
39
Hu MH, Chiang CC, Wang ML, Wu NY, Lee PY. Clinical feasibility of the augmented reality computer-assisted spine surgery system for percutaneous vertebroplasty. Eur Spine J 2020; 29:1590-1596. [DOI: 10.1007/s00586-020-06417-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/10/2019] [Revised: 02/16/2020] [Accepted: 04/11/2020] [Indexed: 12/15/2022]
40
Léger É, Reyes J, Drouin S, Popa T, Hall JA, Collins DL, Kersten-Oertel M. MARIN: an open-source mobile augmented reality interactive neuronavigation system. Int J Comput Assist Radiol Surg 2020; 15:1013-1021. [DOI: 10.1007/s11548-020-02155-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2019] [Accepted: 04/03/2020] [Indexed: 12/20/2022]
41
Latreille V, von Ellenrieder N, Peter-Derex L, Dubeau F, Gotman J, Frauscher B. The human K-complex: Insights from combined scalp-intracranial EEG recordings. Neuroimage 2020; 213:116748. [PMID: 32194281 DOI: 10.1016/j.neuroimage.2020.116748] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2019] [Revised: 01/18/2020] [Accepted: 03/13/2020] [Indexed: 10/24/2022] Open
Abstract
Sleep spindles and K-complexes (KCs) are a hallmark of N2 sleep. While the functional significance of spindles is comparatively well investigated, there is still ongoing debate about the role of the KC: it is unclear whether it is a cortical response to an arousing stimulus (either external or internal) or whether it has sleep-promoting properties. Invasive intracranial EEG recordings from individuals with drug-resistant epilepsy offer a unique opportunity to study in-situ human brain physiology. To better understand the function of the KC, we aimed to (i) investigate the intracranial correlates of spontaneous scalp KCs, and (ii) compare the intracranial activity of scalp KCs associated or not with arousals. Whole-night recordings from adults with drug-resistant focal epilepsy who underwent combined intracranial-scalp EEG for pre-surgical evaluation at the Montreal Neurological Institute between 2010 and 2018 were selected. KCs were visually marked in the scalp and categorized according to the presence of microarousals: (i) Pre-microarousal KCs; (ii) KCs during an ongoing microarousal; and (iii) KCs without microarousal. Power in different spectral bands was computed to compare physiological intracranial EEG activity at the time of scalp KCs relative to the background, as well as to compare microarousal subcategories. A total of 1198 scalp KCs selected from 40 subjects were analyzed, resulting in 32,504 intracranial KC segments across 992 channels. Forty-seven percent of KCs were without microarousal, 30% were pre-microarousal, and 23% occurred during microarousals. All scalp KCs were accompanied by widespread cortical increases in delta band power (0.3-4 Hz) relative to the background: the highest percentages were observed in the parietal (60-65%) and frontal cortices (52-58%). 
Compared to KCs without microarousal, pre-microarousal KCs were accompanied by increases (66%) in beta band power (16-30 Hz) in the motor cortex, which was present before the peak of the KC. In addition, spatial distribution of spectral power changes following each KC without microarousal revealed that certain brain regions were associated with increases in delta power (25-62%) or decreases in alpha/beta power (11-24%), suggesting a sleep-promoting pattern, whereas others were accompanied by increases of higher frequencies (12-27%), suggesting an arousal-related pattern. This study shows that KCs can be generated across widespread cortical areas. Interestingly, the motor cortex shows awake-like EEG activity before the onset of KCs followed by microarousals. Our findings also highlight region-specific sleep- or arousal-promoting responses following KCs, suggesting a dual role for the human KC.
Affiliation(s)
- Véronique Latreille: Montreal Neurological Institute and Hospital, McGill University, 3801 University Street, Montreal, H3A 2B4, Canada
- Nicolás von Ellenrieder: Montreal Neurological Institute and Hospital, McGill University, 3801 University Street, Montreal, H3A 2B4, Canada
- Laure Peter-Derex: Montreal Neurological Institute and Hospital, McGill University, 3801 University Street, Montreal, H3A 2B4, Canada
- François Dubeau: Montreal Neurological Institute and Hospital, McGill University, 3801 University Street, Montreal, H3A 2B4, Canada
- Jean Gotman: Montreal Neurological Institute and Hospital, McGill University, 3801 University Street, Montreal, H3A 2B4, Canada
- Birgit Frauscher: Montreal Neurological Institute and Hospital, McGill University, 3801 University Street, Montreal, H3A 2B4, Canada
42
Strategy Innovation, Intellectual Capital Management, and the Future of Healthcare: The Case of Kiron by Nucleode. Contributions to Management Science 2020. [DOI: 10.1007/978-3-030-40390-4_9] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
43
Pérez-Pachón L, Poyade M, Lowe T, Gröning F. Image Overlay Surgery Based on Augmented Reality: A Systematic Review. Adv Exp Med Biol 2020; 1260:175-195. [PMID: 33211313 DOI: 10.1007/978-3-030-47483-6_10] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Augmented Reality (AR) applied to surgical guidance is gaining relevance in clinical practice. AR-based image overlay surgery (i.e. the accurate overlay of patient-specific virtual images onto the body surface) helps surgeons to transfer image data produced during the planning of the surgery (e.g. the correct resection margins of tissue flaps) to the operating room, thus increasing accuracy and reducing surgery times. We systematically reviewed 76 studies published between 2004 and August 2018 to explore which existing tracking and registration methods and technologies allow healthcare professionals and researchers to develop and implement these systems in-house. Most studies used non-invasive markers to automatically track a patient's position, as well as customised algorithms, tracking libraries or software development kits (SDKs) to compute the registration between patient-specific 3D models and the patient's body surface. Few studies combined the use of holographic headsets, SDKs and user-friendly game engines, and described portable and wearable systems that combine tracking, registration, hands-free navigation and direct visibility of the surgical site. Most accuracy tests included a low number of subjects and/or measurements and did not normally explore how these systems affect surgery times and success rates. We highlight the need for more procedure-specific experiments with a sufficient number of subjects and measurements and including data about surgical outcomes and patients' recovery. Validation of systems combining the use of holographic headsets, SDKs and game engines is especially interesting as this approach facilitates an easy development of mobile AR applications and thus the implementation of AR-based image overlay surgery in clinical practice.
Affiliation(s)
- Laura Pérez-Pachón: School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
- Matthieu Poyade: School of Simulation and Visualisation, Glasgow School of Art, Glasgow, UK
- Terry Lowe: School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK; Head and Neck Oncology Unit, Aberdeen Royal Infirmary (NHS Grampian), Aberdeen, UK
- Flora Gröning: School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
44
Frisken S, Luo M, Juvekar P, Bunevicius A, Machado I, Unadkat P, Bertotti MM, Toews M, Wells WM, Miga MI, Golby AJ. A comparison of thin-plate spline deformation and finite element modeling to compensate for brain shift during tumor resection. Int J Comput Assist Radiol Surg 2019; 15:75-85. [PMID: 31444624 DOI: 10.1007/s11548-019-02057-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2019] [Accepted: 08/14/2019] [Indexed: 10/26/2022]
Abstract
PURPOSE Brain shift during tumor resection can progressively invalidate the accuracy of neuronavigation systems and affect neurosurgeons' ability to achieve optimal resections. This paper compares two methods that have been presented in the literature to compensate for brain shift: a thin-plate spline deformation model and a finite element method (FEM). For this comparison, both methods are driven by identical sparse data. Specifically, both methods are driven by displacements between automatically detected and matched feature points from intraoperative 3D ultrasound (iUS). Both methods have been shown to be fast enough for intraoperative brain shift correction (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018; Luo et al. in J Med Imaging (Bellingham) 4(3):035003, 2017). However, the spline method requires no preprocessing and ignores physical properties of the brain while the FEM method requires significant preprocessing and incorporates patient-specific physical and geometric constraints. The goal of this work was to explore the relative merits of these methods on recent clinical data. METHODS Data acquired during 19 sequential tumor resections in Brigham and Women's Hospital's Advanced Multi-modal Image-Guided Operating Suite between December 2017 and October 2018 were considered for this retrospective study. Of these, 15 cases and a total of 24 iUS to iUS image pairs met inclusion requirements. Automatic feature detection (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018) was used to detect and match features in each pair of iUS images. Displacements between matched features were then used to drive both the spline model and the FEM method to compensate for brain shift between image acquisitions. The accuracies of the resultant deformation models were measured by comparing the displacements of manually identified landmarks before and after deformation. 
RESULTS The mean initial subcortical registration error between preoperative MRI and the first iUS image averaged 5.3 ± 0.75 mm. The mean subcortical brain shift, measured using displacements between manually identified landmarks in pairs of iUS images, was 2.5 ± 1.3 mm. Our results showed that FEM was able to reduce subcortical registration error by a small but statistically significant amount (from 2.46 to 2.02 mm). A large variability in the results of the spline method prevented us from demonstrating either a statistically significant reduction in subcortical registration error after applying the spline method or a statistically significant difference between the results of the two methods. CONCLUSIONS In this study, we observed less subcortical brain shift than has previously been reported in the literature (Frisken et al., in: Miller (ed) Biomechanics of the brain, Springer, Cham, 2019). This may be due to the fact that we separated out the initial misregistration between preoperative MRI and the first iUS image from our brain shift measurements or it may be due to modern neurosurgical practices designed to reduce brain shift, including reduced craniotomy sizes and better control of intracranial pressure with the use of mannitol and other medications. It appears that the FEM method and its use of geometric and biomechanical constraints provided more consistent brain shift correction and better correction farther from the driving feature displacements than the simple spline model. The spline-based method was simpler and tended to give better results for small deformations. However, large variability in the spline results and relatively small brain shift prevented this study from demonstrating a statistically significant difference between the results of the two methods.
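The thin-plate spline side of the comparison admits a compact sketch: interpolate the sparse displacements between matched iUS features with a TPS radial basis function, then evaluate the resulting deformation field at arbitrary points. This is a generic illustration using SciPy's `RBFInterpolator` on toy landmarks, not the authors' implementation, and it omits the FEM alternative entirely.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(landmarks_before, landmarks_after, query_pts):
    # Thin-plate-spline deformation driven by matched feature displacements:
    # fit a TPS to the sparse displacement vectors, then evaluate the smooth
    # field at query points (e.g. image voxels or target landmarks).
    disp = landmarks_after - landmarks_before
    field = RBFInterpolator(landmarks_before, disp, kernel="thin_plate_spline")
    return query_pts + field(query_pts)

# Toy example: 8 matched iUS features, all shifted by (0, 0, 2) units of
# simulated brain shift; the TPS (with its affine term) reproduces this exactly
before = np.array([[x, y, z] for x in (0, 10) for y in (0, 10) for z in (0, 10)],
                  dtype=float)
after = before + np.array([0.0, 0.0, 2.0])
moved = tps_warp(before, after, np.array([[5.0, 5.0, 5.0]]))
print(np.round(moved, 3))  # [[5. 5. 7.]]: interior point follows the shift
```

As the abstract notes, this spline model needs no preprocessing and ignores tissue mechanics, which is why it can behave erratically far from the driving displacements, whereas the FEM incorporates patient-specific geometric and biomechanical constraints.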
Affiliation(s)
- Sarah Frisken: Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA
- Ma Luo: Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Parikshit Juvekar: Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
- Adomas Bunevicius: Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
- Ines Machado: Instituto Superior Tecnico, Universidade de Lisboa, Lisbon, Portugal
- Prashin Unadkat: Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
- Melina M Bertotti: Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
- Matt Toews: Département de Génie des Systèmes, École de Technologie Supérieure, Montreal, Canada
- William M Wells: Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Michael I Miga: Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA; Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Institute for Surgery and Engineering, Vanderbilt University, Nashville, TN, USA
- Alexandra J Golby: Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA; Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
45
Toward real-time rigid registration of intra-operative ultrasound with preoperative CT images for lumbar spinal fusion surgery. Int J Comput Assist Radiol Surg 2019; 14:1933-1943. [DOI: 10.1007/s11548-019-02020-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2019] [Accepted: 06/24/2019] [Indexed: 10/26/2022]
46
Frauscher B, von Ellenrieder N, Zelmann R, Doležalová I, Minotti L, Olivier A, Hall J, Hoffmann D, Nguyen DK, Kahane P, Dubeau F, Gotman J. Atlas of the normal intracranial electroencephalogram: neurophysiological awake activity in different cortical areas. Brain 2019; 141:1130-1144. [PMID: 29506200 DOI: 10.1093/brain/awy035] [Citation(s) in RCA: 142] [Impact Index Per Article: 23.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2017] [Accepted: 01/01/2018] [Indexed: 11/13/2022] Open
Abstract
In contrast to scalp EEG, our knowledge of the normal physiological intracranial EEG activity is scarce. This multicentre study provides an atlas of normal intracranial EEG of the human brain during wakefulness. Here we present the results of power spectra analysis during wakefulness. Intracranial electrodes are placed in or on the brain of epilepsy patients who are candidates for surgical treatment when non-invasive approaches fail to sufficiently localize the epileptic focus. Electrode contacts are usually in cortical regions showing epileptic activity, but some are placed in normal regions, at distance from the epileptogenic zone or lesion. Intracranial EEG channels defined using strict criteria as very likely to be in healthy brain regions were selected from three tertiary epilepsy centres. All contacts were localized in a common stereotactic space allowing the accumulation and superposition of results from many subjects. Sixty-second artefact-free sections during wakefulness were selected. Power spectra were calculated for 38 brain regions, and compared to a set of channels with no spectral peaks in order to identify significant peaks in the different regions. A total of 1785 channels with normal brain activity from 106 patients were identified. There were on average 2.7 channels per cm3 of cortical grey matter. The number of contacts per brain region averaged 47 (range 6-178). We found significant differences in the spectral density distributions across the different brain lobes, with beta activity in the frontal lobe (20-24 Hz), a clear alpha peak in the occipital lobe (9.25-10.25 Hz), intermediate alpha (8.25-9.25 Hz) and beta (17-20 Hz) frequencies in the parietal lobe, and lower alpha (7.75-8.25 Hz) and delta (0.75-2.25 Hz) peaks in the temporal lobe. 
Some cortical regions showed a specific electrophysiological signature: peaks present in >60% of channels were found in the precentral gyrus (lateral: peak frequency range, 20-24 Hz; mesial: 24-30 Hz), opercular part of the inferior frontal gyrus (20-24 Hz), cuneus (7.75-8.75 Hz), and hippocampus (0.75-1.25 Hz). Eight per cent of all analysed channels had more than one spectral peak; these channels were mostly recording from sensory and motor regions. Alpha activity was not present throughout the occipital lobe, and some cortical regions showed peaks in delta activity during wakefulness. This is the first atlas of normal intracranial EEG activity; it includes dense coverage of all cortical regions in a common stereotactic space, enabling direct comparisons of EEG across subjects. This atlas provides a normative baseline against which clinical EEGs and experimental results can be compared. It is provided as an open web resource (https://mni-open-ieegatlas.research.mcgill.ca).
Collapse
Affiliation(s)
- Birgit Frauscher
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada; Department of Medicine and Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Rina Zelmann
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada; Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Irena Doležalová
- Brno Epilepsy Center, First Department of Neurology, St. Anne's University Hospital and Faculty of Medicine, Masaryk University Brno, Czech Republic
- Lorella Minotti
- Department of Neurology, Grenoble-Alpes University Hospital and Grenoble-Alpes University, Grenoble, France
- André Olivier
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
- Jeffery Hall
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
- Dominique Hoffmann
- Department of Neurology, Grenoble-Alpes University Hospital and Grenoble-Alpes University, Grenoble, France
- Dang Khoa Nguyen
- Centre hospitalier de l'Université de Montréal - Hôpital Notre-Dame, Montréal, Québec, Canada
- Philippe Kahane
- Department of Neurology, Grenoble-Alpes University Hospital and Grenoble-Alpes University, Grenoble, France
- François Dubeau
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
- Jean Gotman
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
Collapse
|
47
|
Mikhail M, Mithani K, Ibrahim GM. Presurgical and Intraoperative Augmented Reality in Neuro-Oncologic Surgery: Clinical Experiences and Limitations. World Neurosurg 2019; 128:268-276. [PMID: 31103764 DOI: 10.1016/j.wneu.2019.04.256] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2019] [Revised: 04/29/2019] [Accepted: 04/30/2019] [Indexed: 02/06/2023]
Abstract
Virtual reality (VR) and augmented reality (AR) represent novel adjuncts for neurosurgical planning in neuro-oncology. In addition to established use in surgical and medical training, VR/AR are gaining traction for clinical use preoperatively and intraoperatively. To understand the utility of VR/AR in the clinical setting, we conducted a literature search in Ovid MEDLINE and EMBASE with various search terms designed to capture the use of VR/AR in neurosurgical procedures for resection of cranial tumors. The search retrieved 302 articles, of which 35 were subjected to full-text review; 19 full-text articles were included in the review. Key findings highlighted by the individual authors were extracted and summarized into themes to present the value of VR/AR in the clinical setting. These studies included various VR/AR systems applied to surgeries involving heterogeneous pathologies and outcome measures. Overall, VR/AR were found to be qualitatively advantageous due to enhanced visualization of complex anatomy and improved intraoperative lesion localization. When these technologies were compared with existing neuronavigation systems, quantitative clinical benefits were also reported. The capacity to visualize three-dimensional images superimposed on patient anatomy is a potentially valuable tool in complex neurosurgical environments. Surgical limitations may be addressed through future advances in image registration and tracking as well as intraoperatively acquired imaging with the ability to yield real-time virtual models.
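Superimposing a three-dimensional virtual model on patient anatomy, as discussed above, ultimately rests on image registration: estimating the rigid transform that maps model coordinates into the patient/tracker frame. A minimal sketch of the standard least-squares (Kabsch/Procrustes) point-based step follows; the fiducial coordinates are hypothetical, and this is one common registration technique rather than the method of any system reviewed here.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    src points onto dst: the Kabsch/Procrustes step behind point-based
    registration in image-guided navigation."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# Hypothetical fiducials in model space vs. patient (tracker) space
model = np.array([[0, 0, 0], [50, 0, 0], [0, 40, 0], [0, 0, 30.0]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1.0]])
patient = model @ R_true.T + np.array([10.0, -5.0, 2.0])

R, t = rigid_register(model, patient)
fre = np.linalg.norm(model @ R.T + t - patient, axis=1).mean()
print(fre)  # fiducial registration error, ~0 for noise-free points
```

In practice the fiducial registration error is nonzero because of localization noise, which is one reason the review's authors point to improved registration and tracking as a route to addressing surgical limitations.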
Collapse
Affiliation(s)
- Mirriam Mikhail
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada.
- Karim Mithani
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- George M Ibrahim
- Division of Neurosurgery, Department of Surgery, Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada
Collapse
|
48
|
Plazak J, DiGiovanni DA, Collins DL, Kersten-Oertel M. Cognitive load associations when utilizing auditory display within image-guided neurosurgery. Int J Comput Assist Radiol Surg 2019; 14:1431-1438. [PMID: 30997635 DOI: 10.1007/s11548-019-01970-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2019] [Accepted: 04/04/2019] [Indexed: 11/26/2022]
Abstract
PURPOSE The combination of data visualization and auditory display (e.g., sonification) has been shown to increase accuracy and reduce perceived difficulty in 3D navigation tasks. While accuracy in such tasks can be measured in real time, subjective impressions of task difficulty are more elusive to obtain. Prior work utilizing electroencephalography (EEG) has found robust support that cognitive load and working memory can be monitored in real time using EEG data. METHODS In this study, we replicated a 3D navigation task (within the context of image-guided surgery) while recording data pertaining to participants' cognitive load through the use of EEG relative alpha-band weighting data. Specifically, 13 subjects navigated a tracked surgical tool to randomly placed 3D virtual locations on a CT cerebral angiography volume while being aided by visual, aural, or combined visual and aural feedback. EEG data were captured during the study, and the subjects completed a NASA TLX questionnaire afterwards. In addition to replicating an existing experimental design on auditory display within image-guided neurosurgery, our primary aim was to determine whether EEG-based markers of cognitive load mirrored subjective ratings of task difficulty. RESULTS Consistent with the existing literature, our study found evidence that auditory display can increase the accuracy of navigating to a specified target. We also found significant differences in cognitive working load across feedback modalities, but none supported the experiment's hypotheses. Finally, we found mixed results regarding the relationship between real-time measurements of cognitive workload and a posteriori subjective impressions of task difficulty. CONCLUSIONS Although we did not find a significant correlation between the subjective and physiological measurements, differences in cognitive working load were found. Our study further supports the use of auditory display in image-guided surgery.
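The "relative alpha-band weighting" marker used above can be illustrated with a simple proxy: the fraction of broadband EEG power falling in the alpha band (alpha suppression is a common correlate of increased load). The band limits, sampling rate, and synthetic signals below are assumptions for demonstration, not the study's actual processing parameters.

```python
import numpy as np
from scipy.signal import welch

def relative_alpha(eeg, fs, band=(8.0, 12.0), total=(1.0, 40.0)):
    """Fraction of broadband EEG power in the alpha band -- one simple
    proxy for relative alpha-band weighting."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))

    def bandpower(lo, hi):
        m = (freqs >= lo) & (freqs <= hi)
        return psd[m].sum()  # uniform bins, so the ratio is unaffected by df

    return bandpower(*band) / bandpower(*total)

# Synthetic 10 s epochs at 250 Hz: strong vs. suppressed alpha rhythm
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
noise = 0.5 * rng.standard_normal(t.size)
relaxed = np.sin(2 * np.pi * 10 * t) + noise        # strong alpha
loaded = 0.2 * np.sin(2 * np.pi * 10 * t) + noise   # suppressed alpha

print(relative_alpha(relaxed, fs) > relative_alpha(loaded, fs))  # True
```

Tracking such a ratio epoch by epoch is one way a real-time workload marker can be computed alongside behavioural measures like navigation accuracy.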
Collapse
Affiliation(s)
- Joseph Plazak
- Gina Cody School of Engineering and Computer Science, Concordia University, EV 3.301, 1455 De Maisonneuve Blvd. W., Montreal, QC, H3G 1M8, Canada
- D Louis Collins
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Marta Kersten-Oertel
- Gina Cody School of Engineering and Computer Science, Concordia University, EV 3.301, 1455 De Maisonneuve Blvd. W., Montreal, QC, H3G 1M8, Canada.
Collapse
|
49
|
|
50
|
Ganau M, Ligarotti GK, Apostolopoulos V. Real-time intraoperative ultrasound in brain surgery: neuronavigation and use of contrast-enhanced image fusion. Quant Imaging Med Surg 2019; 9:350-358. [PMID: 31032183 DOI: 10.21037/qims.2019.03.06] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Affiliation(s)
- Mario Ganau
- Department of Neurosurgery, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Gianfranco K Ligarotti
- Department of Neurosurgery, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
Collapse
|