1.
Monfaredi R, Concepcion-Gonzalez A, Acosta Julbe J, Fischer E, Hernandez-Herrera G, Cleary K, Oluigbo C. Automatic Path-Planning Techniques for Minimally Invasive Stereotactic Neurosurgical Procedures-A Systematic Review. Sensors (Basel) 2024; 24:5238. PMID: 39204935; PMCID: PMC11359713; DOI: 10.3390/s24165238.
Abstract
This review systematically examines the recent research from the past decade on diverse path-planning algorithms tailored for stereotactic neurosurgery applications. Our comprehensive investigation involved a thorough search of scholarly papers from Google Scholar, PubMed, IEEE Xplore, and Scopus, utilizing stringent inclusion and exclusion criteria. The screening and selection process was meticulously conducted by a multidisciplinary team comprising three medical students, robotic experts with specialized knowledge in path-planning techniques and medical robotics, and a board-certified neurosurgeon. Each selected paper was reviewed in detail, and the findings were synthesized and reported in this review. The paper is organized around three different types of intervention tools: straight needles, steerable needles, and concentric tube robots. We provide an in-depth analysis of various path-planning algorithms applicable to both single- and multi-target scenarios. Multi-target planning techniques are only discussed for straight tools, as there is no published work on multi-target planning for steerable needles and concentric tube robots. Additionally, we discuss the imaging modalities employed, the critical anatomical structures considered during path planning, and the current status of research regarding its translation to clinical human studies. To the best of our knowledge, and as a conclusion from this systematic review, this is the first review paper published in the last decade that reports various path-planning techniques for different types of tools for minimally invasive neurosurgical applications. Furthermore, this review outlines future trends and identifies existing technology gaps within the field. By highlighting these aspects, we aim to provide a comprehensive overview that can guide future research and development in path planning for stereotactic neurosurgery, ultimately contributing to the advancement of safer and more effective neurosurgical procedures.
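Straight-tool planners of the kind surveyed typically reduce path planning to scoring candidate linear trajectories by their clearance from critical structures. As a minimal illustrative sketch (our own toy example, not an algorithm from any reviewed paper; all function names are ours):

```python
import numpy as np

def min_obstacle_clearance(entry, target, obstacles, n_samples=50):
    """Minimum distance from the straight entry->target path to any obstacle point."""
    ts = np.linspace(0.0, 1.0, n_samples)[:, None]          # (n_samples, 1)
    pts = entry[None, :] + ts * (target - entry)[None, :]   # samples along the path
    # pairwise distances between path samples and obstacle points
    d = np.linalg.norm(pts[:, None, :] - obstacles[None, :, :], axis=2)
    return float(d.min())

def safest_entry(entries, target, obstacles):
    """Rank candidate entry points by worst-case clearance; return the best and its score."""
    scores = [min_obstacle_clearance(e, target, obstacles) for e in entries]
    best = int(np.argmax(scores))
    return entries[best], scores[best]
```

Here `safest_entry` simply maximizes worst-case clearance; published planners add further costs (path length, insertion angle, kinematic constraints for steerable tools), but the clearance term above is the common core.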
Affiliation(s)
- Reza Monfaredi
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC 20010, USA
- Department of Pediatrics and Radiology, George Washington University, Washington, DC 20037, USA
- Alondra Concepcion-Gonzalez
- School of Medicine and Health Sciences, George Washington University School of Medicine, Washington, DC 20052, USA
- Jose Acosta Julbe
- Department of Orthopaedic Surgery & Orthopaedic and Arthritis Center for Outcomes Research, Brigham and Women’s Hospital, Boston, MA 02115, USA
- Elizabeth Fischer
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC 20010, USA
- Kevin Cleary
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC 20010, USA
- Department of Pediatrics and Radiology, George Washington University, Washington, DC 20037, USA
- Chima Oluigbo
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC 20010, USA
- Department of Neurology and Pediatrics, George Washington University School of Medicine, Washington, DC 20052, USA
2.
Villa M, Sancho J, Rosa G, Chavarrias M, Juarez E, Sanz C. HyperMRI: hyperspectral and magnetic resonance fusion methodology for neurosurgery applications. Int J Comput Assist Radiol Surg 2024; 19:1367-1374. PMID: 38761318; PMCID: PMC11230967; DOI: 10.1007/s11548-024-03102-5.
Abstract
PURPOSE Magnetic resonance imaging (MRI) is a common technique in image-guided neurosurgery (IGN). Recent research explores the integration of methods like ultrasound and tomography, among others, with hyperspectral (HS) imaging gaining attention due to its non-invasive, real-time tissue classification capabilities. The main challenge is the registration process, which often requires manual intervention. This work introduces an automatic, markerless method for aligning HS images with MRI. METHODS This work presents a multimodal system that combines RGB-Depth (RGBD) and HS cameras. The RGBD camera captures the patient's facial geometry, which is used for registration with the preoperative MR through iterative closest point (ICP) registration. Once MR-depth registration is complete, the HS data are integrated using a calibrated homography transformation. The incorporation of external tracking with a novel calibration method allows camera mobility from the registration position to the craniotomy area. This methodology streamlines the fusion of RGBD, HS, and MR images within the craniotomy area. RESULTS Using the described system and an anthropomorphic phantom head, the patient's face was registered from 25 positions, yielding a fiducial registration error of 1.88 ± 0.19 mm, and from 5 positions, yielding a target registration error of 4.07 ± 1.28 mm. CONCLUSIONS This work proposes a new methodology to automatically register MR and HS information with sufficient accuracy. It can support neurosurgeons in guiding the diagnosis using multimodal data over an augmented reality representation. Although still at a preliminary prototype stage, the system shows significant promise, driven by its cost-effectiveness and user-friendly design.
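The fiducial and target registration errors reported here come from point-based rigid registration. As a rough sketch of how such numbers are computed (our own illustration, not the authors' code), a least-squares rigid fit (Kabsch/Horn) followed by an RMS error over corresponding points looks like:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def registration_error(points_moving, points_fixed, R, t):
    """RMS distance between transformed points and their fixed counterparts (FRE/TRE)."""
    mapped = points_moving @ R.T + t
    return float(np.sqrt(((mapped - points_fixed) ** 2).sum(axis=1).mean()))
```

Evaluated on the fiducials used for the fit, this RMS value is the fiducial registration error; evaluated on held-out landmarks, it is the target registration error.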
Affiliation(s)
- Manuel Villa
- CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
- Jaime Sancho
- CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
- Gonzalo Rosa
- CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
- Eduardo Juarez
- CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
- Cesar Sanz
- CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
3.
Kos TM, Colombo E, Bartels LW, Robe PA, van Doormaal TPC. Evaluation Metrics for Augmented Reality in Neurosurgical Preoperative Planning, Surgical Navigation, and Surgical Treatment Guidance: A Systematic Review. Oper Neurosurg (Hagerstown) 2023. PMID: 38146941; PMCID: PMC11008635; DOI: 10.1227/ons.0000000000001009.
Abstract
BACKGROUND AND OBJECTIVE Recent years have shown an advancement in the development of augmented reality (AR) technologies for preoperative visualization, surgical navigation, and intraoperative guidance for neurosurgery. However, proving added value for AR in clinical practice is challenging, partly because of a lack of standardized evaluation metrics. We performed a systematic review to provide an overview of the reported evaluation metrics for AR technologies in neurosurgical practice and to establish a foundation for assessment and comparison of such technologies. METHODS PubMed, Embase, and Cochrane were searched systematically for publications on assessment of AR for cranial neurosurgery on September 22, 2022. The findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. RESULTS The systematic search yielded 830 publications; 114 were screened in full text, and 80 were included for analysis. Among the included studies, 5% dealt with preoperative visualization using AR, with user perception as the most frequently reported metric. The majority (75%) researched AR technology for surgical navigation, with registration accuracy, clinical outcome, and time measurements as the most frequently reported metrics. In addition, 20% studied the use of AR for intraoperative guidance, with registration accuracy, task outcome, and user perception as the most frequently reported metrics. CONCLUSION For quality benchmarking of AR technologies in neurosurgery, evaluation metrics should be specific to the risk profile and clinical objectives of the technology. A key focus should be on using validated questionnaires to assess user perception; ensuring clear and unambiguous reporting of registration accuracy, precision, robustness, and system stability; and accurately measuring task performance in clinical studies. We provide an overview suggesting which evaluation metrics to use per AR application and innovation phase, aiming to improve the assessment of the added value of AR for neurosurgical practice and to facilitate its integration in the clinical workflow.
Affiliation(s)
- Tessa M. Kos
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Elisa Colombo
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- L. Wilbert Bartels
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Pierre A. Robe
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Tristan P. C. van Doormaal
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
4.
Augmented reality during parotid surgery: real-life evaluation of voice control of a head mounted display. Eur Arch Otorhinolaryngol 2023; 280:2043-2049. PMID: 36269364; PMCID: PMC9988782; DOI: 10.1007/s00405-022-07699-8.
Abstract
PURPOSE Augmented reality can improve surgical planning and performance in parotid surgery. For easier application, we implemented voice control for our augmented reality system. The aim of the study was to evaluate the feasibility of the voice control in real-life situations. METHODS We used the HoloLens 1® (Microsoft Corporation) with a special speech recognition software for parotid surgery. The evaluation took place in an audiometry cubicle and during real surgical procedures. Voice commands were used to display various 3D structures of the patient with the HoloLens 1®. Commands were tested in different variations (male/female speakers, 65 dB SPL or louder, various structures). RESULTS In silence, 100% of commands were recognized. If the operating room (OR) background noise exceeds 42 dB, the recognition rate decreases significantly, and it drops below 40% at > 60 dB SPL. At a constant speech volume of 65 dB SPL, male speakers had a significantly better recognition rate than female speakers (p = 0.046); higher speech volumes can compensate for this effect. The recognition rate also depends on the type of background noise: mixed OR noise at 52 dB(A) reduced the detection rate significantly compared to suction noise alone at 52 dB(A) (p ≤ 0.00001). The recognition rate was significantly better in the OR than in the audiometry cubicle (p = 0.00013 for both genders, 0.0086 female, and 0.0036 male). CONCLUSIONS The recognition rate of voice commands can be enhanced by increasing the speech volume and by isolating single ambient noise sources. The detection rate depends on the loudness of the OR noise. Male voices are understood significantly better than female voices.
5.
Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. PMID: 36706637; DOI: 10.1016/j.media.2023.102757.
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main player in the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 until the end of 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy including use case, technical methodology for registration and tracking, data sources, visualization as well as validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interaction, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, yet only a few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, paving the way for novel, innovative directions and translation into the medical routine.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
6.
Noecker AM, Mlakar J, Petersen MV, Griswold MA, McIntyre CC. Holographic visualization for stereotactic neurosurgery research. Brain Stimul 2023; 16:411-414. PMID: 36739892; PMCID: PMC10750300; DOI: 10.1016/j.brs.2023.02.001.
Abstract
Background: Stereotactic neurosurgical planning for the placement of depth electrodes requires the integration of wide-ranging 3D datasets on the anatomy of the patient. Objective: Our goal was to create an interactive group-based holographic visualization tool (HoloSNS) that facilitates evaluation of depth electrode positioning relative to the available medical imaging data, as well as models of the anatomical nuclei and structural connectivity of the brain. Methods: HoloSNS is currently designed to run on the HoloLens 2 platform, and was developed using the Unity Game Engine and the Mixed Reality Toolkit from Microsoft. Results: HoloSNS currently supports research analyses with deep brain stimulation (DBS) and/or stereo-electroencephalography (SEEG) electrodes. Two example software applications (HoloDBS and HoloSEEG) are available for free download on the Microsoft App Store. Conclusions: HoloSNS is the latest culmination of our efforts to integrate advances in brain imaging data, intracranial electrode modeling, and advanced visualization techniques to enhance stereotactic neurosurgery research.
Affiliation(s)
- Angela M Noecker
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Jeffrey Mlakar
- Interactive Commons, Case Western Reserve University, Cleveland, OH, USA
- Mikkel V Petersen
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Mark A Griswold
- Interactive Commons, Case Western Reserve University, Cleveland, OH, USA
- Cameron C McIntyre
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Department of Biomedical Engineering, Duke University, Durham, NC, USA; Department of Neurosurgery, Duke University, Durham, NC, USA
7.
Use of Mixed Reality in Neuro-Oncology: A Single Centre Experience. Life (Basel) 2023; 13:398. PMID: 36836755; PMCID: PMC9965132; DOI: 10.3390/life13020398.
Abstract
(1) Background: Intra-operative neuronavigation is currently an essential component of most neurosurgical operations. Recent progress in mixed reality (MR) technology has attempted to overcome the disadvantages of conventional neuronavigation systems. We present our experience using the HoloLens 2 in neuro-oncology for both intra- and extra-axial tumours. (2) Results: We describe our experience with three patients who underwent tumour resection. We evaluated surgeon experience and the accuracy of the superimposed 3D image in tumour localisation against standard neuronavigation, both pre- and intra-operatively. Surgeon training for the HoloLens 2 was short and easy, and the process of image overlay was relatively straightforward for the three cases. Registration in the prone position with a conventional neuronavigation system is often difficult; this was easily overcome when using the HoloLens 2. (3) Conclusion: Although certain limitations were identified, the authors feel that this system is a feasible alternative device for intra-operative visualization of neurosurgical pathology. Further studies are planned to assess its accuracy and suitability across various surgical disciplines.
8.
Khan T, Biehl JT, Andrews EG, Babichenko D. A systematic comparison of the accuracy of monocular RGB tracking and LiDAR for neuronavigation. Healthc Technol Lett 2022; 9:91-101. PMID: 36514478; PMCID: PMC9731545; DOI: 10.1049/htl2.12036.
Abstract
With the advent of augmented reality (AR), the use of AR-guided systems in the field of medicine has gained traction. However, the wide-scale adoption of these systems requires highly accurate and reliable tracking. In this work, two tracking technology platforms, LiDAR and Vuforia, are developed and rigorously tested for a catheter placement neurological procedure. In total, 900 experiments are performed for each technology across various combinations of catheter lengths and insertion trajectories. The analysis shows that the LiDAR platform outperformed Vuforia, which is the state of the art in monocular RGB tracking solutions: LiDAR had 75% less radial distance error and 26% less angle deviation error. The results provide key insights into the value and utility of LiDAR-based tracking in AR guidance systems.
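The two error metrics compared here, radial distance between planned and tracked tips and angular deviation between trajectories, are straightforward to compute. A hedged sketch (our own illustration, not the paper's code):

```python
import numpy as np

def trajectory_errors(planned_tip, tracked_tip, planned_dir, tracked_dir):
    """Radial tip-to-tip distance (input units) and angular deviation (degrees)."""
    radial = float(np.linalg.norm(np.asarray(tracked_tip, float) - np.asarray(planned_tip, float)))
    u = np.asarray(planned_dir, float)
    v = np.asarray(tracked_dir, float)
    # clip guards against arccos domain errors from floating-point round-off
    cos_angle = np.clip(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)), -1.0, 1.0)
    return radial, float(np.degrees(np.arccos(cos_angle)))
```

For example, a tracked tip displaced by (3, 4, 0) from the plan gives a 5-unit radial error, and a tracked axis tilted from (0, 0, 1) to (0, 1, 1) gives a 45° deviation.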
Affiliation(s)
- Talha Khan
- School of Computing and Information, University of Pittsburgh, Pittsburgh, PA, USA
- Jacob T. Biehl
- School of Computing and Information, University of Pittsburgh, Pittsburgh, PA, USA
- Edward G. Andrews
- Department of Neurological Surgery, School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Dmitriy Babichenko
- School of Computing and Information, University of Pittsburgh, Pittsburgh, PA, USA
9.
Boaro A, Moscolo F, Feletti A, Polizzi G, Nunes S, Siddi F, Broekman M, Sala F. Visualization, navigation, augmentation. The ever-changing perspective of the neurosurgeon. Brain Spine 2022; 2:100926. PMID: 36248169; PMCID: PMC9560703; DOI: 10.1016/j.bas.2022.100926.
Abstract
Introduction The evolution of neurosurgery coincides with the evolution of visualization and navigation. Augmented reality technologies, with their ability to bring digital information into the real environment, have the potential to provide a new, revolutionary perspective to the neurosurgeon. Research question To provide an overview of the historical and technical aspects of visualization and navigation in neurosurgery, and to provide a systematic review of augmented reality (AR) applications in neurosurgery. Material and methods We provide an overview of the main historical milestones and technical features of visualization and navigation tools in neurosurgery. We systematically searched the PubMed and Scopus databases for AR applications in neurosurgery and specifically discussed their relationship with current visualization and navigation systems, as well as their main limitations. Results The evolution of visualization in neurosurgery is embodied by four magnification systems: surgical loupes, the endoscope, the surgical microscope and, more recently, the exoscope, each presenting independent features in terms of magnification capabilities, eye-hand coordination and the possibility to implement additional functions. With regard to navigation, two independent systems have been developed: frame-based and frameless systems. The most frequent application setting for AR is brain surgery (71.6%), specifically neuro-oncology (36.2%) and microscope-based applications (29.2%), although in the majority of cases (66%) AR applications provided their own visualization support. Discussion and conclusions The evolution of visualization and navigation in neurosurgery has allowed for the development of more precise instruments; the development and clinical validation of AR applications have the potential to be the next breakthrough, making surgery safer, as well as improving the surgical experience and reducing costs.
Affiliation(s)
- A. Boaro
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- F. Moscolo
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- A. Feletti
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- G.M.V. Polizzi
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- S. Nunes
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- F. Siddi
- Department of Neurosurgery, Haaglanden Medical Center, The Hague, Zuid-Holland, the Netherlands
- M.L.D. Broekman
- Department of Neurosurgery, Haaglanden Medical Center, The Hague, Zuid-Holland, the Netherlands
- Department of Neurosurgery, Leiden University Medical Center, Leiden, Zuid-Holland, the Netherlands
- F. Sala
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
10.
Gretzinger S, Schmieg B, Guthausen G, Hubbuch J. Virtual Reality as Tool for Bioprinting Quality Inspection: A Proof of Principle. Front Bioeng Biotechnol 2022; 10:895842. PMID: 35757809; PMCID: PMC9218671; DOI: 10.3389/fbioe.2022.895842.
Abstract
As virtual reality (VR) has drastically evolved over the past few years, its field of applications has flourished well beyond the gaming industry. While commercial VR solutions are available, there is a need to develop workflows for specific applications. Bioprinting represents such an example: complex 3D data is generated and needs to be visualized in the context of quality control. We demonstrate that the transfer to a commercially available VR software is possible by introducing an optimized workflow. In the present work, we developed a workflow for the visualization in VR of the critical quality attribute (cQA) cell distribution in bioprinted (extrusion-based) samples. The cQA cell distribution is directly influenced by the pre-processing step of mixing cell material into the bioink. Magnetic resonance imaging (MRI) was used as an analytical tool to generate spatially resolved 2.5D and 3D data of the bioprinted objects. A sample of poor quality with respect to the cQA cell distribution was identified, as its inhomogeneous cell distribution could be displayed, spatially resolved, in VR. The described workflow facilitates the use of VR as a tool for quality inspection in the field of bioprinting and represents a powerful tool for the visualization of complex 3D MRI data.
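The paper assesses the cQA cell distribution by visual inspection of the 3D MRI data in VR. One simple numeric proxy for the same property, offered purely as our own illustration (not a metric from the paper), is the coefficient of variation of mean intensity over sub-volumes of the reconstructed image:

```python
import numpy as np

def cv_of_subvolumes(vol, block=4):
    """Coefficient of variation of mean intensity over non-overlapping sub-blocks.

    A higher value suggests a less homogeneous cell distribution.
    """
    # crop so each dimension is divisible by the block size
    x, y, z = (s - s % block for s in vol.shape)
    v = vol[:x, :y, :z].reshape(x // block, block, y // block, block, z // block, block)
    means = v.mean(axis=(1, 3, 5))          # one mean per sub-block
    return float(means.std() / means.mean())
```

A perfectly uniform volume scores 0; a volume whose cells are concentrated in one half scores close to 1.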
Affiliation(s)
- Sarah Gretzinger
- Institute of Functional Interfaces, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany; Institute of Engineering in Life Sciences, Section IV: Molecular Separation Engineering, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Barbara Schmieg
- Institute of Functional Interfaces, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany; Institute of Engineering in Life Sciences, Section IV: Molecular Separation Engineering, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Gisela Guthausen
- Institute of Mechanical Process Engineering and Mechanics, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany; Engler-Bunte-Institut, Water Chemistry and Technology, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Jürgen Hubbuch
- Institute of Functional Interfaces, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany; Institute of Engineering in Life Sciences, Section IV: Molecular Separation Engineering, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
11.
A Dedicated Tool for Presurgical Mapping of Brain Tumors and Mixed-Reality Navigation During Neurosurgery. J Digit Imaging 2022; 35:704-713. PMID: 35230562; PMCID: PMC9156583; DOI: 10.1007/s10278-022-00609-8.
Abstract
Brain tumor surgery requires a delicate tradeoff between complete removal of neoplastic tissue and minimizing the loss of brain function. Functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) have emerged as valuable tools for non-invasive assessment of human brain function and are now used to determine brain regions that should be spared to prevent functional impairment after surgery. However, image analysis requires different software packages, mainly developed for research purposes and often difficult to use in a clinical setting, preventing large-scale diffusion of presurgical mapping. We developed specialized software able to implement an automatic analysis of multimodal MRI presurgical mapping in a single application and to transfer the results to the neuronavigator. Moreover, the imaging results are integrated into a commercially available wearable device using an optimized mixed-reality approach, automatically anchoring 3-dimensional holograms obtained from MRI to the physical head of the patient. This will allow the surgeon to virtually explore deeper tissue layers, highlighting critical brain structures that need to be preserved, while retaining natural oculo-manual coordination. The enhanced ergonomics of this procedure will significantly improve the accuracy and safety of the surgery, with large expected benefits for health care systems and related industrial investors.