1
Patel V, Chavda V. Intraoperative glioblastoma surgery-current challenges and clinical trials: An update. Cancer Pathogenesis and Therapy 2024; 2:256-267. [PMID: 39371095] [PMCID: PMC11447313] [DOI: 10.1016/j.cpt.2023.11.006]
Abstract
Surgical excision is an important component of the multimodal treatment strategy for patients with glioblastoma, a highly aggressive and invasive brain tumor. Although major advances in surgical methods and technology have been made, numerous hurdles remain in glioblastoma surgery. The purpose of this literature review is to offer a thorough overview of the current challenges in glioblastoma surgery. We reviewed the difficulties associated with tumor identification and visualization, extent of resection, preservation of neurological function, tumor margin evaluation, and integration of advanced imaging and navigation technology. Understanding and resolving these challenges is critical to improving surgical results and, ultimately, patient survival.
Affiliation(s)
- Vimal Patel
- Department of Pharmaceutics, Anand Pharmacy College, Anand, Gujarat 388001, India
- Vishal Chavda
- Department of Pathology, Stanford School of Medicine, Stanford University Medical Center, Stanford, CA 94305, USA
- Department of Medicine, Multispecialty, Trauma and ICCU Center, Sardar Hospital, Ahmedabad, Gujarat 382350, India
2
Prasad K, Fassler C, Miller A, Aweeda M, Pruthi S, Fusco JC, Daniel B, Miga M, Wu JY, Topf MC. More than meets the eye: Augmented reality in surgical oncology. J Surg Oncol 2024; 130:405-418. [PMID: 39155686] [DOI: 10.1002/jso.27790]
Abstract
BACKGROUND AND OBJECTIVES In the field of surgical oncology, there has been a desire for innovative techniques to improve tumor visualization, resection, and patient outcomes. Augmented reality (AR) technology superimposes digital content onto the real-world environment, enhancing the user's experience by blending digital and physical elements. A thorough examination of AR technology in surgical oncology has yet to be performed. METHODS A scoping review of intraoperative AR in surgical oncology was conducted according to the guidelines and recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) framework. All original articles examining the use of intraoperative AR during the surgical management of cancer were included. Exclusion criteria were virtual reality-only applications, preoperative use only, fluorescence, AR not specific to surgical oncology, and study design (reviews, commentaries, abstracts). RESULTS A total of 2735 articles were identified, of which 83 were included. Most studies (52) were performed on animal or phantom models, while the remainder included patients. A total of 1112 intraoperative AR surgical cases were performed across the studies. The most common anatomic site was the brain (20 articles), followed by the liver (16), kidney (9), and head and neck (8). AR was most often used for intraoperative navigation or anatomic visualization of tumors or critical structures, but was also used to identify osteotomy or craniotomy planes. CONCLUSIONS AR technology has been applied across the field of surgical oncology to aid in the localization and resection of tumors.
Affiliation(s)
- Kavita Prasad
- Department of Otolaryngology-Head & Neck Surgery, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- Carly Fassler
- Department of Otolaryngology-Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Alexis Miller
- Department of Otolaryngology-Head & Neck Surgery, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Marina Aweeda
- Department of Otolaryngology-Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Sumit Pruthi
- Department of Radiology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Joseph C Fusco
- Department of Pediatric Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Bruce Daniel
- Department of Radiology, Stanford Health Care, Palo Alto, California, USA
- Michael Miga
- Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Jie Ying Wu
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Michael C Topf
- Department of Otolaryngology-Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
3
Aweeda M, Adegboye F, Yang SF, Topf MC. Enhancing Surgical Vision: Augmented Reality in Otolaryngology-Head and Neck Surgery. Journal of Medical Extended Reality 2024; 1:124-136. [PMID: 39091667] [PMCID: PMC11290041] [DOI: 10.1089/jmxr.2024.0010]
Abstract
Augmented reality (AR) technology has become widely established in otolaryngology-head and neck surgery. Over the past 20 years, numerous AR systems have been investigated and validated across the subspecialties, in both cadaveric and live surgical studies. AR displays, most commonly projected through head-mounted devices, microscopes, and endoscopes, have demonstrated utility in preoperative planning, intraoperative guidance, and surgical decision-making. Specifically, they have proven feasible for guiding tumor margin resections, identifying critical structures intraoperatively, and displaying patient-specific virtual models derived from preoperative imaging with millimetric accuracy. This review summarizes both established and emerging AR technologies, detailing how their systems work, what features they offer, and their clinical impact across otolaryngology subspecialties. As AR technology continues to advance, its integration holds promise for enhancing surgical precision and simulation training and, ultimately, improving patient outcomes.
Affiliation(s)
- Marina Aweeda
- Department of Otolaryngology—Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Feyisayo Adegboye
- Department of Otolaryngology—Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Shiayin F. Yang
- Department of Otolaryngology—Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Michael C. Topf
- Department of Otolaryngology—Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- School of Engineering, Vanderbilt University, Nashville, Tennessee, USA
4
Kos TM, Colombo E, Bartels LW, Robe PA, van Doormaal TPC. Evaluation Metrics for Augmented Reality in Neurosurgical Preoperative Planning, Surgical Navigation, and Surgical Treatment Guidance: A Systematic Review. Oper Neurosurg (Hagerstown) 2023; 26:01787389-990000000-01007. [PMID: 38146941] [PMCID: PMC11008635] [DOI: 10.1227/ons.0000000000001009]
Abstract
BACKGROUND AND OBJECTIVE Recent years have seen advances in the development of augmented reality (AR) technologies for preoperative visualization, surgical navigation, and intraoperative guidance in neurosurgery. However, proving added value for AR in clinical practice is challenging, partly because of a lack of standardized evaluation metrics. We performed a systematic review to provide an overview of the reported evaluation metrics for AR technologies in neurosurgical practice and to establish a foundation for the assessment and comparison of such technologies. METHODS PubMed, Embase, and Cochrane were systematically searched on September 22, 2022, for publications on the assessment of AR for cranial neurosurgery. The findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. RESULTS The systematic search yielded 830 publications; 114 were screened full text, and 80 were included for analysis. Among the included studies, 5% dealt with preoperative visualization using AR, with user perception as the most frequently reported metric. The majority (75%) researched AR technology for surgical navigation, with registration accuracy, clinical outcome, and time measurements as the most frequently reported metrics. In addition, 20% studied the use of AR for intraoperative guidance, with registration accuracy, task outcome, and user perception as the most frequently reported metrics. CONCLUSION For quality benchmarking of AR technologies in neurosurgery, evaluation metrics should be specific to the risk profile and clinical objectives of the technology. A key focus should be on using validated questionnaires to assess user perception; ensuring clear and unambiguous reporting of registration accuracy, precision, robustness, and system stability; and accurately measuring task performance in clinical studies. We provide an overview suggesting which evaluation metrics to use per AR application and innovation phase, aiming to improve the assessment of the added value of AR for neurosurgical practice and to facilitate its integration into the clinical workflow.
Affiliation(s)
- Tessa M. Kos
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Elisa Colombo
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- L. Wilbert Bartels
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Pierre A. Robe
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Tristan P. C. van Doormaal
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
5
Domínguez-Velasco CF, Tello-Mata IE, Guinto-Nishimura G, Martínez-Hernández A, Alcocer-Barradas V, Pérez-Lomelí JS, Padilla-Castañeda MA. Augmented reality simulation as training model of ventricular puncture: Evidence in the improvement of the quality of punctures. Int J Med Robot 2023; 19:e2529. [PMID: 37272193] [DOI: 10.1002/rcs.2529]
Abstract
BACKGROUND Ventricular puncture is a common procedure in neurosurgery and the first that residents must learn. Ongoing education is critical to improving patient outcomes; however, training that exposes patients to potential risk warrants new and safer training methods for residents. METHODS An augmented reality (AR) simulator for practicing ventricular punctures was designed. It consists of a navigation system with a virtual 3D projection of the anatomy over a 3D-printed patient model. Forty-eight participants from neurosurgery staff performed two free-hand ventricular punctures before and after a training session. RESULTS Participants achieved better accuracy in reaching the target at the foramen of Monro after practicing with the system. Additional metrics revealed significantly better trajectories after the training. CONCLUSION The study confirms the feasibility of AR as a training tool and motivates future work toward standardizing new educational methodologies in neurosurgery.
Affiliation(s)
- César F Domínguez-Velasco
- Applied Sciences and Technology Institute ICAT, National Autonomous University of Mexico UNAM, Ciudad Universitaria, Mexico City, Mexico
- Research & Technology Development, ICAT UNAM-General Hospital of Mexico "Dr. Eduardo Liceaga" (HGMEL), Mexico City, Mexico
- Isaac E Tello-Mata
- Neurology & Neurosurgery National Institute "Dr. Manuel Velasco", Mexico City, Mexico
- Adriana Martínez-Hernández
- Applied Sciences and Technology Institute ICAT, National Autonomous University of Mexico UNAM, Ciudad Universitaria, Mexico City, Mexico
- Research & Technology Development, ICAT UNAM-General Hospital of Mexico "Dr. Eduardo Liceaga" (HGMEL), Mexico City, Mexico
- Juan S Pérez-Lomelí
- Applied Sciences and Technology Institute ICAT, National Autonomous University of Mexico UNAM, Ciudad Universitaria, Mexico City, Mexico
- Research & Technology Development, ICAT UNAM-General Hospital of Mexico "Dr. Eduardo Liceaga" (HGMEL), Mexico City, Mexico
- Miguel A Padilla-Castañeda
- Applied Sciences and Technology Institute ICAT, National Autonomous University of Mexico UNAM, Ciudad Universitaria, Mexico City, Mexico
- Research & Technology Development, ICAT UNAM-General Hospital of Mexico "Dr. Eduardo Liceaga" (HGMEL), Mexico City, Mexico
6
Kuber PM, Rashedi E. Alterations in Physical Demands During Virtual/Augmented Reality-Based Tasks: A Systematic Review. Ann Biomed Eng 2023; 51:1910-1932. [PMID: 37486385] [DOI: 10.1007/s10439-023-03292-0]
Abstract
The digital world has recently experienced a swift rise in worldwide popularity due to virtual reality (VR) and augmented reality (AR) devices. However, concrete evidence about the effects of VR/AR devices on the physical workload imposed on the human body is lacking. We reviewed 27 articles that evaluated the physical impact of VR/AR-based tasks on users using biomechanical sensing equipment and subjective tools. Findings revealed that movement and muscle demands (neck and shoulder) varied in seven and five studies, respectively, during VR use, and in four and three studies during AR use, compared to traditional methods. User discomfort was also reported in seven VR and three AR studies. Outcomes indicate that interface and interaction design, particularly target locations (gestures, viewing), the design of virtual elements, and device type (e.g., the location of the center of gravity in head-mounted displays) influence these alterations in the neck and shoulder regions. Recommendations based on the review include developing comfortable reach envelopes for gestures, improving wearability, and studying the temporal effects of repetitive movements (such as effects on fatigue and stability). Finally, a guideline is provided to assist researchers in conducting effective evaluations. The findings of this review could benefit designers and evaluators working toward more effective VR/AR products.
Affiliation(s)
- Pranav Madhav Kuber
- Biomechanics and Ergonomics Lab, Industrial and Systems Engineering Department, Rochester Institute of Technology, 1 Lomb Memorial Dr, Rochester, NY, 14623, USA
- Ehsan Rashedi
- Biomechanics and Ergonomics Lab, Industrial and Systems Engineering Department, Rochester Institute of Technology, 1 Lomb Memorial Dr, Rochester, NY, 14623, USA
7
Van Gestel F, Frantz T, Buyck F, Geens W, Neuville Q, Bruneau M, Jansen B, Scheerlinck T, Vandemeulebroucke J, Duerinck J. Neuro-oncological augmented reality planning for intracranial tumor resection. Front Neurol 2023; 14:1104571. [PMID: 36998774] [PMCID: PMC10043492] [DOI: 10.3389/fneur.2023.1104571]
Abstract
Background: Before starting surgery for the resection of an intracranial tumor, its outlines are typically marked on the skin of the patient. This allows for the planning of the optimal skin incision, craniotomy, and angle of approach. Conventionally, the surgeon determines tumor borders using neuronavigation with a tracked pointer. However, interpretation errors can lead to important deviations, especially for deep-seated tumors, potentially resulting in a suboptimal approach with incomplete exposure. Augmented reality (AR) allows the tumor and critical structures to be displayed directly on the patient, which can simplify and improve surgical preparation. Methods: We developed an AR-based workflow for intracranial tumor resection planning deployed on the Microsoft HoloLens II, which exploits the built-in infrared camera for tracking the patient. We initially performed a phantom study to assess the accuracy of registration and tracking. Following this, we evaluated the AR-based planning step in a prospective clinical study of patients undergoing resection of a brain tumor. This planning step was performed by 12 surgeons and trainees with varying degrees of experience. After patient registration, tumor outlines were marked on the patient's skin by different investigators, consecutively using a conventional neuronavigation system and an AR-based system. Their performance in both registration and delineation was measured in terms of accuracy and duration and compared. Results: During phantom testing, registration errors remained below 2.0 mm and 2.0° for both AR-based navigation and conventional neuronavigation, with no significant difference between the systems. In the prospective clinical trial, 20 patients underwent tumor resection planning. Registration accuracy was independent of user experience for both AR-based navigation and the commercial neuronavigation system. AR-guided tumor delineation was deemed superior in 65% of cases, equally good in 30% of cases, and inferior in 5% of cases when compared to the conventional navigation system. The overall planning time (AR = 119 ± 44 s, conventional = 187 ± 56 s) was significantly reduced by the adoption of the AR workflow (p < 0.001), with an average time reduction of 39%. Conclusion: By providing a more intuitive visualization of relevant data to the surgeon, AR navigation offers an accurate method for tumor resection planning that is quicker and more intuitive than conventional neuronavigation. Further research should focus on intraoperative implementations.
Affiliation(s)
- Frederick Van Gestel
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Research Group Center for Neurosciences (C4N-NEUR), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- *Correspondence: Frederick Van Gestel
- Taylor Frantz
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- IMEC, Leuven, Belgium
- Felix Buyck
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Wietse Geens
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Quentin Neuville
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Research Group Center for Neurosciences (C4N-NEUR), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Michaël Bruneau
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Bart Jansen
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- IMEC, Leuven, Belgium
- Thierry Scheerlinck
- Department of Orthopedic Surgery and Traumatology, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Research Group Beeldvorming en Fysische Wetenschappen (BEFY-ORTHO), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Jef Vandemeulebroucke
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- IMEC, Leuven, Belgium
- Department of Radiology, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Johnny Duerinck
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Research Group Center for Neurosciences (C4N-NEUR), Vrije Universiteit Brussel (VUB), Brussels, Belgium
8
Bogomolova K, Vorstenbosch MATM, El Messaoudi I, Holla M, Hovius SER, van der Hage JA, Hierck BP. Effect of binocular disparity on learning anatomy with stereoscopic augmented reality visualization: A double center randomized controlled trial. Anatomical Sciences Education 2023; 16:87-98. [PMID: 34894205] [PMCID: PMC10078652] [DOI: 10.1002/ase.2164]
Abstract
Binocular disparity provides one of the important depth cues within stereoscopic three-dimensional (3D) visualization technology. However, there is limited research on its effect on learning within a 3D augmented reality (AR) environment. This study evaluated the effect of binocular disparity on the acquisition of anatomical knowledge and perceived cognitive load in relation to visual-spatial abilities. In a double-center randomized controlled trial, first-year (bio)medical undergraduates studied lower extremity anatomy in an interactive 3D AR environment either with a stereoscopic 3D view (n = 32) or monoscopic 3D view (n = 34). Visual-spatial abilities were tested with a mental rotation test. Anatomical knowledge was assessed by a validated 30-item written test and 30-item specimen test. Cognitive load was measured by the NASA-TLX questionnaire. Students in the stereoscopic 3D and monoscopic 3D groups performed equally well in terms of percentage correct answers (written test: 47.9 ± 15.8 vs. 49.1 ± 18.3; P = 0.635; specimen test: 43.0 ± 17.9 vs. 46.3 ± 15.1; P = 0.429), and perceived cognitive load scores (6.2 ± 1.0 vs. 6.2 ± 1.3; P = 0.992). Regardless of intervention, visual-spatial abilities were positively associated with the specimen test scores (η2 = 0.13, P = 0.003), perceived representativeness of the anatomy test questions (P = 0.010) and subjective improvement in anatomy knowledge (P < 0.001). In conclusion, binocular disparity does not improve learning anatomy. Motion parallax should be considered as another important depth cue that contributes to depth perception during learning in a stereoscopic 3D AR environment.
Affiliation(s)
- Katerina Bogomolova
- Department of Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Center for Innovation of Medical Education, Leiden University Medical Center, Leiden, the Netherlands
- Inssaf El Messaoudi
- Department of Orthopedics, Faculty of Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Micha Holla
- Department of Orthopedics, Faculty of Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Steven E. R. Hovius
- Department of Plastic and Reconstructive Surgery, Radboud University Medical Center, Nijmegen, the Netherlands
- Jos A. van der Hage
- Department of Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Center for Innovation of Medical Education, Leiden University Medical Center, Leiden, the Netherlands
- Beerend P. Hierck
- Department of Anatomy and Physiology, Clinical Sciences, Veterinary Medicine Faculty, Utrecht, the Netherlands
9
Boaro A, Moscolo F, Feletti A, Polizzi G, Nunes S, Siddi F, Broekman M, Sala F. Visualization, navigation, augmentation. The ever-changing perspective of the neurosurgeon. Brain & Spine 2022; 2:100926. [PMID: 36248169] [PMCID: PMC9560703] [DOI: 10.1016/j.bas.2022.100926]
Abstract
Introduction: The evolution of neurosurgery coincides with the evolution of visualization and navigation. Augmented reality (AR) technologies, with their ability to bring digital information into the real environment, have the potential to provide a new, revolutionary perspective to the neurosurgeon. Research question: To provide an overview of the historical and technical aspects of visualization and navigation in neurosurgery, and to provide a systematic review of AR applications in neurosurgery. Material and methods: We provide an overview of the main historical milestones and technical features of visualization and navigation tools in neurosurgery. We systematically searched the PubMed and Scopus databases for AR applications in neurosurgery and specifically discussed their relationship with current visualization and navigation systems, as well as their main limitations. Results: The evolution of visualization in neurosurgery is embodied by four magnification systems: surgical loupes, the endoscope, the surgical microscope, and, more recently, the exoscope, each with independent features in terms of magnification capability, eye-hand coordination, and the possibility to implement additional functions. With regard to navigation, two independent systems have been developed: frame-based and frame-less systems. The most frequent application setting for AR is brain surgery (71.6%), specifically neuro-oncology (36.2%), and microscope-based (29.2%), even though in the majority of cases AR applications presented their own visualization supports (66%). Discussion and conclusions: The evolution of visualization and navigation in neurosurgery has allowed for the development of more precise instruments; the development and clinical validation of AR applications have the potential to be the next breakthrough, making surgery safer while improving the surgical experience and reducing costs.
Affiliation(s)
- A. Boaro
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- F. Moscolo
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- A. Feletti
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- G.M.V. Polizzi
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- S. Nunes
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- F. Siddi
- Department of Neurosurgery, Haaglanden Medical Center, The Hague, Zuid-Holland, the Netherlands
- M.L.D. Broekman
- Department of Neurosurgery, Haaglanden Medical Center, The Hague, Zuid-Holland, the Netherlands
- Department of Neurosurgery, Leiden University Medical Center, Leiden, Zuid-Holland, the Netherlands
- F. Sala
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
10
Iop A, El-Hajj VG, Gharios M, de Giorgio A, Monetti FM, Edström E, Elmi-Terander A, Romero M. Extended Reality in Neurosurgical Education: A Systematic Review. Sensors (Basel) 2022; 22:6067. [PMID: 36015828] [PMCID: PMC9414210] [DOI: 10.3390/s22166067]
Abstract
Surgical simulation practices have witnessed rapid expansion as an invaluable approach to resident training in recent years. One emerging way of implementing simulation is the adoption of extended reality (XR) technologies, which enable trainees to hone their skills by interacting with virtual 3D objects placed in either real-world imagery or virtual environments. The goal of the present systematic review is to survey and broach the topic of XR in neurosurgery, with a focus on education. Five databases were investigated, leading to the inclusion of 31 studies after a thorough reviewing process. Focusing on user performance (UP) and user experience (UX), the body of evidence provided by these 31 studies showed that this technology has the potential to enhance neurosurgical education through a wide array of both objective and subjective metrics. Recent research on the topic has so far produced solid results, particularly showing improvements in young residents compared to other groups and over time. In conclusion, this review not only aids a better understanding of the use of XR in neurosurgical education but also highlights the areas where further research is needed, while providing valuable insight into future applications.
Affiliation(s)
- Alessandro Iop
- Department of Neurosurgery, Karolinska University Hospital, 141 86 Stockholm, Sweden
- Department of Clinical Neuroscience, Karolinska Institutet, 171 77 Stockholm, Sweden
- KTH Royal Institute of Technology, 114 28 Stockholm, Sweden
- Victor Gabriel El-Hajj
- Department of Neurosurgery, Karolinska University Hospital, 141 86 Stockholm, Sweden
- Department of Clinical Neuroscience, Karolinska Institutet, 171 77 Stockholm, Sweden
- Maria Gharios
- Department of Neurosurgery, Karolinska University Hospital, 141 86 Stockholm, Sweden
- Department of Clinical Neuroscience, Karolinska Institutet, 171 77 Stockholm, Sweden
- Andrea de Giorgio
- SnT—Interdisciplinary Center for Security, Reliability and Trust, University of Luxembourg, 4365 Esch-sur-Alzette, Luxembourg
- Erik Edström
- Department of Neurosurgery, Karolinska University Hospital, 141 86 Stockholm, Sweden
- Department of Clinical Neuroscience, Karolinska Institutet, 171 77 Stockholm, Sweden
- Adrian Elmi-Terander
- Department of Neurosurgery, Karolinska University Hospital, 141 86 Stockholm, Sweden
- Department of Clinical Neuroscience, Karolinska Institutet, 171 77 Stockholm, Sweden
- Mario Romero
- KTH Royal Institute of Technology, 114 28 Stockholm, Sweden
11
Exploitation of Emerging Technologies and Advanced Networks for a Smart Healthcare System. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12125859]
Abstract
Current medical methods still confront numerous limitations and barriers in detecting and fighting illnesses and disorders. The introduction of emerging technologies into the healthcare industry is anticipated to enable novel medical techniques for an efficient and effective smart healthcare system. The Internet of Things (IoT), Wireless Sensor Networks (WSN), Big Data Analytics (BDA), and Cloud Computing (CC) can play a vital role in the instant detection of illnesses, diseases, viruses, and disorders. Complex techniques such as Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) could accelerate drug and antibiotic discovery. Moreover, the integration of visualization techniques such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) with the Tactile Internet (TI) can be applied by medical staff to provide the most accurate diagnosis and treatment for patients. A novel system architecture, which combines several future technologies, is proposed in this paper. The objective is to describe the integration of a mixture of emerging technologies, in combination with advanced networks, to provide a smart healthcare system that may be established in hospitals or medical centers. Such a system will be able to deliver immediate and accurate data to medical staff in order to aid them in providing precise patient diagnosis and treatment.
Collapse
|
12
|
Liu J, Qian K, Qin Z, Alshehri MD, Li Q, Tai Y. Cloud computing-enabled IIOT system for neurosurgical simulation using augmented reality data access. Exp Eye Res 2022; 220:109085. [DOI: 10.1016/j.exer.2022.109085] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 03/15/2022] [Accepted: 04/13/2022] [Indexed: 12/18/2022]
|
13
|
Liu X, Sanchez Perdomo YP, Zheng B, Duan X, Zhang Z, Zhang D. When medical trainees encountering a performance difficulty: evidence from pupillary responses. BMC MEDICAL EDUCATION 2022; 22:191. [PMID: 35305623 PMCID: PMC8934497 DOI: 10.1186/s12909-022-03256-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Accepted: 03/13/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND Medical trainees are required to learn many procedures by following instructions to improve their skills. This study aims to investigate the pupillary response of trainees when they encounter a moment of performance difficulty (MPD) during skill learning. Detecting moments of performance difficulty is essential for educators to assist trainees when they need it. METHODS Eye motions were recorded while trainees practiced the thoracostomy procedure on a simulation model. To make pupillary data comparable among trainees, we proposed the adjusted pupil size (APS), which normalizes pupil dilation for each trainee over the entire procedure. APS variables, including the APS, maxAPS, minAPS, meanAPS, medianAPS, and max interval indices, were compared between easy and difficult subtasks; the APSs were compared among three performance situations: the moment of normal performance (MNP), MPD, and the moment of seeking help (MSH). RESULTS The mixed ANOVA revealed that the adjusted pupil size variables, such as the maxAPS, minAPS, meanAPS, and medianAPS, differed significantly between performance situations. Compared to MPD and MNP, pupil size was reduced during MSH. Trainees displayed a smaller cumulative frequency of APS during difficult subtasks than during easy subtasks. CONCLUSIONS Results from this project suggest that pupil responses can be a good behavioral indicator. This study is part of our research aiming to create an artificially intelligent system for medical trainees that automatically detects performance difficulty and delivers instructional messages using augmented reality technology.
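The per-trainee normalization described in this abstract can be illustrated with a short sketch. The exact APS formula is not given in the abstract, so a min-max normalization over the whole recording is assumed here; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def adjusted_pupil_size(pupil_trace):
    """Normalize a trainee's raw pupil-diameter trace over the whole
    procedure so values are comparable across trainees.

    Assumption: min-max scaling to [0, 1]; the paper's exact APS
    definition may differ.
    """
    trace = np.asarray(pupil_trace, dtype=float)
    lo, hi = trace.min(), trace.max()
    if hi == lo:                      # flat trace: no dilation change
        return np.zeros_like(trace)
    return (trace - lo) / (hi - lo)   # APS in [0, 1]

def aps_summary(aps):
    """Summary indices of the kind compared between easy and
    difficult subtasks in the study."""
    return {"maxAPS": float(np.max(aps)),
            "minAPS": float(np.min(aps)),
            "meanAPS": float(np.mean(aps)),
            "medianAPS": float(np.median(aps))}
```

Because each trainee is scaled against their own extremes, a given APS value carries the same relative meaning across trainees, which is what makes the between-subtask comparisons possible.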
Collapse
Affiliation(s)
- Xin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada
- Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing, 100083, China
| | - Yerly Paola Sanchez Perdomo
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada
| | - Bin Zheng
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada.
- Department of Surgery, Faculty of Medicine and Dentistry, 162 Heritage Medical Research Centre, University of Alberta, 8440 112 St. NW. Edmonton, Alberta, T6G 2E1, Canada.
| | - Xiaoqin Duan
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada
- Department of Rehabilitation Medicine, Second Hospital of Jilin University, Changchun, Jilin, 130041, China
| | - Zhongshi Zhang
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada
- Department of Biological Sciences, University of Alberta, Edmonton, AB, T6G 2E9, Canada
| | - Dezheng Zhang
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing, 100083, China
| |
Collapse
|
14
|
In Situ Visualization for 3D Ultrasound-Guided Interventions with Augmented Reality Headset. Bioengineering (Basel) 2021; 8:bioengineering8100131. [PMID: 34677204 PMCID: PMC8533537 DOI: 10.3390/bioengineering8100131] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 09/16/2021] [Accepted: 09/21/2021] [Indexed: 12/03/2022] Open
Abstract
Augmented Reality (AR) headsets have become the most ergonomic and efficient visualization devices to support complex manual tasks performed under direct vision. Their ability to provide hands-free interaction with the augmented scene makes them perfect for manual procedures such as surgery. This study demonstrates the reliability of an AR head-mounted display (HMD), conceived for surgical guidance, in navigating in-depth high-precision manual tasks guided by a 3D ultrasound imaging system. The integration between the AR visualization system and the ultrasound imaging system provides the surgeon with real-time intra-operative information on unexposed soft tissues that are spatially registered with the surrounding anatomic structures. The efficacy of the AR guiding system was quantitatively assessed with an in vitro study simulating a biopsy intervention aimed at determining the level of accuracy achievable. In the experiments, 10 subjects were asked to perform the biopsy on four spherical lesions of decreasing sizes (10, 7, 5, and 3 mm). The experimental results showed that 80% of the subjects were able to successfully perform the biopsy on the 5 mm lesion, with a 2.5 mm system accuracy. The results confirmed that the proposed integrated system can be used for navigation during in-depth high-precision manual tasks.
Collapse
|
15
|
Montemurro N, Condino S, Cattari N, D’Amato R, Ferrari V, Cutolo F. Augmented Reality-Assisted Craniotomy for Parasagittal and Convexity En Plaque Meningiomas and Custom-Made Cranio-Plasty: A Preliminary Laboratory Report. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:ijerph18199955. [PMID: 34639256 PMCID: PMC8507881 DOI: 10.3390/ijerph18199955] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Revised: 09/10/2021] [Accepted: 09/17/2021] [Indexed: 12/23/2022]
Abstract
BACKGROUND This report discusses the utility of a wearable augmented reality platform in neurosurgery for parasagittal and convexity en plaque meningiomas with bone flap removal and custom-made cranioplasty. METHODS A real patient with en plaque cranial vault meningioma with diffuse and extensive dural involvement, extracranial extension into the calvarium, and homogeneous contrast enhancement on gadolinium-enhanced T1-weighted MRI, was selected for this case study. A patient-specific manikin was designed starting with the segmentation of the patient's preoperative MRI images to simulate a craniotomy procedure. Surgical planning was performed according to the segmented anatomy, and customized bone flaps were designed accordingly. During the surgical simulation stage, the VOSTARS head-mounted display was used to accurately display the planned craniotomy trajectory over the manikin skull. The precision of the craniotomy was assessed based on the evaluation of previously prepared custom-made bone flaps. RESULTS A bone flap with a radius 0.5 mm smaller than the radius of an ideal craniotomy fitted perfectly over the performed craniotomy, demonstrating an error of less than ±1 mm in the task execution. The results of this laboratory-based experiment suggest that the proposed augmented reality platform helps in simulating convexity en plaque meningioma resection and custom-made cranioplasty, as carefully planned in the preoperative phase. CONCLUSIONS Augmented reality head-mounted displays have the potential to be a useful adjunct in tumor surgical resection, cranial vault lesion craniotomy and also skull base surgery, but more study with large series is needed.
Collapse
Affiliation(s)
- Nicola Montemurro
- Department of Neurosurgery, Azienda Ospedaliera Universitaria Pisana (AOUP), University of Pisa, 56100 Pisa, Italy
- Correspondence:
| | - Sara Condino
- Department of Information Engineering, University of Pisa, 56100 Pisa, Italy; (S.C.); (R.D.); (V.F.); (F.C.)
- EndoCAS Center for Computer-Assisted Surgery, 56100 Pisa, Italy;
| | - Nadia Cattari
- EndoCAS Center for Computer-Assisted Surgery, 56100 Pisa, Italy;
- Department of Translational Research, University of Pisa, 56100 Pisa, Italy
| | - Renzo D’Amato
- Department of Information Engineering, University of Pisa, 56100 Pisa, Italy; (S.C.); (R.D.); (V.F.); (F.C.)
- EndoCAS Center for Computer-Assisted Surgery, 56100 Pisa, Italy;
| | - Vincenzo Ferrari
- Department of Information Engineering, University of Pisa, 56100 Pisa, Italy; (S.C.); (R.D.); (V.F.); (F.C.)
- EndoCAS Center for Computer-Assisted Surgery, 56100 Pisa, Italy;
| | - Fabrizio Cutolo
- Department of Information Engineering, University of Pisa, 56100 Pisa, Italy; (S.C.); (R.D.); (V.F.); (F.C.)
- EndoCAS Center for Computer-Assisted Surgery, 56100 Pisa, Italy;
| |
Collapse
|
16
|
Teodoro-Vite S, Pérez-Lomelí JS, Domínguez-Velasco CF, Hernández-Valencia AF, Capurso-García MA, Padilla-Castañeda MA. A High-Fidelity Hybrid Virtual Reality Simulator of Aneurysm Clipping Repair With Brain Sylvian Fissure Exploration for Vascular Neurosurgery Training. Simul Healthc 2021; 16:285-294. [PMID: 32701862 DOI: 10.1097/sih.0000000000000489] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
INTRODUCTION Microsurgery clipping is one of the most challenging surgical interventions in neurosurgery. The opportunities to train residents are scarce, but the need for accumulating practice is mandatory. New simulating tools are needed for skill learning. METHODS The design, implementation, and assessment of a new hybrid aneurysm clipping simulator are presented. It consists of an ergonomic workstation with a patient head mannequin and a physics-based virtual reality simulation with bimanual haptic feedback. The simulator recreates scenarios of microsurgery from the patient fixation and the exploration of the brain lobes through Sylvian fissure and vascular structures to the aneurysm clipping. Skill metrics were introduced, including monitoring of gestures movements, exerted forces, tissue displacements, and precision in clipping. RESULTS Two experimental conditions were tested: (1) simple clipping without brain tissue exploration and (2) clipping the aneurysm with brain Sylvian fissure exploration. Differences in the bimanual gestures were observed between both conditions. The quantitative measurements of tissue displacement of the brain lobes exhibited more tissue retrieval for the surgical gestures of neurosurgeons. Appraisal with questionnaires showed positive scores by neurosurgeons in all items evaluating the usability and realism of the simulator. CONCLUSIONS The simulator was well accepted and feasible for training purposes. The analysis of the interactions with virtual tissues offers information to establish differential and common patterns between tested groups and thus useful metrics for skill evaluation of practitioners. Future work can lead to other tasks during the intervention and the inclusion of more clinical cases.
Collapse
Affiliation(s)
- Sergio Teodoro-Vite
- From the Applied Sciences and Technology Institute (ST-V, JSP, CFD, MAP-C), National Autonomous University of Mexico, Ciudad Universitaria; Neurology and Neurosurgery Service Unit (AFH-V), General Hospital of Mexico "Dr. Eduardo Liceaga"; Directorate of Education and Training in Health, General Hospital of Mexico "Dr. Eduardo Liceaga" (MAC-G), Mexico City, Mexico
| | | | | | | | | | | |
Collapse
|
17
|
Qi Z, Li Y, Xu X, Zhang J, Li F, Gan Z, Xiong R, Wang Q, Zhang S, Chen X. Holographic mixed-reality neuronavigation with a head-mounted device: technical feasibility and clinical application. Neurosurg Focus 2021; 51:E22. [PMID: 34333462 DOI: 10.3171/2021.5.focus21175] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Accepted: 05/13/2021] [Indexed: 11/06/2022]
Abstract
OBJECTIVE The authors aimed to evaluate the technical feasibility of a mixed-reality neuronavigation (MRN) system with a wearable head-mounted device (HMD) and to determine its clinical application and accuracy. METHODS A semiautomatic registration MRN system on HoloLens smart glasses was developed and tested for accuracy and feasibility. Thirty-seven patients with intracranial lesions were prospectively identified. For each patient, multimodal imaging-based holograms of lesions, markers, and surrounding eloquent structures were created and then imported to the MRN HMD. After a point-based registration, the holograms were projected onto the patient's head and observed through the HMD. The contour of the holograms was compared with standard neuronavigation (SN). The projection of the lesion boundaries perceived by the neurosurgeon on the patient's scalp was then marked with MRN and SN. The distance between the two contours generated by MRN and SN was measured so that the accuracy of MRN could be assessed. RESULTS MRN localization was achieved in all patients. The mean additional time required for MRN was 36.3 ± 6.3 minutes, in which the mean registration time was 2.6 ± 0.9 minutes. A trend toward a shorter time required for preparation was observed with the increase of neurosurgeon experience with the MRN system. The overall median deviation was 4.1 mm (IQR 3.0 mm-4.7 mm), and 81.1% of the lesions localized by MRN were found to be highly consistent with SN (deviation < 5.0 mm). There was a significant difference between the supine position and the prone position (3.7 ± 1.1 mm vs 5.4 ± 0.9 mm, p = 0.001). The magnitudes of deviation vectors did not correlate with lesion volume (p = 0.126) or depth (p = 0.128). There was no significant difference in additional operating time between different operators (37.4 ± 4.8 minutes vs 34.6 ± 4.8 minutes, p = 0.237) or in localization deviation (3.7 ± 1.0 mm vs 4.6 ± 1.5 mm, p = 0.070). 
CONCLUSIONS This study provided a complete set of a clinically applicable workflow on an easy-to-use MRN system using a wearable HMD, and has shown its technical feasibility and accuracy. Further development is required to improve the accuracy and clinical efficacy of this system.
Collapse
Affiliation(s)
- Ziyu Qi
- Department of Neurosurgery, Chinese PLA General Hospital
- School of Medicine, Nankai University, Tianjin, China
| | - Ye Li
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing
| | - Xinghua Xu
- Department of Neurosurgery, Chinese PLA General Hospital
| | - Jiashu Zhang
- Department of Neurosurgery, Chinese PLA General Hospital
| | - Fangye Li
- Department of Neurosurgery, Chinese PLA General Hospital
| | - Zhichao Gan
- Department of Neurosurgery, Chinese PLA General Hospital
- School of Medicine, Nankai University, Tianjin, China
| | - Ruochu Xiong
- Department of Neurosurgery, Chinese PLA General Hospital
| | - Qun Wang
- Department of Neurosurgery, Chinese PLA General Hospital
| | - Shiyu Zhang
- Department of Neurosurgery, Chinese PLA General Hospital
| | - Xiaolei Chen
- Department of Neurosurgery, Chinese PLA General Hospital
| |
Collapse
|
18
|
Liu K, Gao Y, Abdelrehem A, Zhang L, Chen X, Xie L, Wang X. Augmented reality navigation method for recontouring surgery of craniofacial fibrous dysplasia. Sci Rep 2021; 11:10043. [PMID: 33976233 PMCID: PMC8113548 DOI: 10.1038/s41598-021-88860-x] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2020] [Accepted: 04/13/2021] [Indexed: 11/09/2022] Open
Abstract
The objective of this study is to introduce the application of an augmented reality (AR) navigation system, developed by the authors, in recontouring surgery for craniofacial fibrous dysplasia. Five consecutive patients with craniofacial fibrous dysplasia were enrolled. Through three-dimensional (3D) simulation, a virtual plan was designed to reconstruct the normal anatomical contour of the deformed region. Surgical recontouring was achieved with the assistance of the AR navigation system. The accuracy of the surgical procedure was assessed by superimposing the post-operative 3D craniomaxillofacial model onto the virtual plan. The pre-operative preparation time and operation time were also recorded. In all patients, AR navigation was performed successfully, with a mean ± SD error of 1.442 ± 0.234 mm. The operative time ranged from 60 to 80 min, and the pre-operative preparation time was 20 min for each patient. All patients showed uneventful healing without any complications and were satisfied with the post-operative aesthetics. Using our AR navigation system in recontouring surgery can provide surgeons with a comprehensive and intuitive view of the recontouring border, as well as its depth, in real time. This method could improve the efficiency and safety of craniofacial fibrous dysplasia recontouring procedures.
Collapse
Affiliation(s)
- Kai Liu
- Department of Oral and Craniomaxillofacial Surgery, Shanghai 9th People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China
- Shanghai Key Laboratory of Stomatology, Shanghai, China
| | - Yuan Gao
- Institute of Forming Technology and Equipment, Shanghai Jiao Tong University, Shanghai, China
| | - Ahmed Abdelrehem
- Department of Craniomaxillofacial and Plastic Surgery, Faculty of Dentistry, Alexandria University, Alexandria, Egypt
| | - Lei Zhang
- Department of Oral and Craniomaxillofacial Surgery, Shanghai 9th People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China
- Shanghai Key Laboratory of Stomatology, Shanghai, China
| | - Xi Chen
- Department of Oral and Craniomaxillofacial Surgery, Shanghai 9th People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China
| | - Le Xie
- Institute of Forming Technology and Equipment, Shanghai Jiao Tong University, Shanghai, China
- Institute of Medical Robot, Shanghai Jiao Tong University, Shanghai, China
- Quanzhou Normal University, Fujian, China
| | - Xudong Wang
- Department of Oral and Craniomaxillofacial Surgery, Shanghai 9th People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China
- Shanghai Key Laboratory of Stomatology, Shanghai, China
| |
Collapse
|
19
|
Use of augmented reality navigation to optimise the surgical management of craniofacial fibrous dysplasia. Br J Oral Maxillofac Surg 2021; 60:162-167. [PMID: 34930644 DOI: 10.1016/j.bjoms.2021.03.011] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Accepted: 03/25/2021] [Indexed: 11/20/2022]
Abstract
The aim of this study was to apply an augmented reality (AR) navigation technique based on a head-mounted display in the treatment of craniofacial fibrous dysplasia and to explore the feasibility and value of AR in craniofacial surgery. With preoperative planning and three-dimensional simulation, the normal anatomical contours of the deformed area were recreated by superimposing the unaffected side onto the affected side. We completed the recontouring procedures in real time with the aid of an AR navigation system. The surgical outcome was assessed by superimposing the postoperative computed tomographic images onto the preoperative virtual plan. The preparation and operation times were recorded. With intraoperative AR guidance, facial bone recontouring was performed uneventfully in all cases. The mean (SD) discrepancy between the actual surgical reduction and preoperative planning was 1.036 (0.081) mm (range: 0.913 (0.496) to 1.165 (0.498) mm). The operation time ranged from 50 to 80 minutes, with an average of 66.4 minutes. The preoperative preparation time ranged from 26 to 36 minutes, with a mean of 29.6 minutes. AR navigation-assisted facial bone recontouring is a valuable treatment modality in managing craniomaxillofacial fibrous dysplasia and shows benefits in improving the efficiency and safety of this complicated procedure.
Collapse
|
20
|
Cercenelli L, Carbone M, Condino S, Cutolo F, Marcelli E, Tarsitano A, Marchetti C, Ferrari V, Badiali G. The Wearable VOSTARS System for Augmented Reality-Guided Surgery: Preclinical Phantom Evaluation for High-Precision Maxillofacial Tasks. J Clin Med 2020; 9:jcm9113562. [PMID: 33167432 PMCID: PMC7694536 DOI: 10.3390/jcm9113562] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Revised: 10/29/2020] [Accepted: 11/03/2020] [Indexed: 12/19/2022] Open
Abstract
BACKGROUND In the context of guided surgery, augmented reality (AR) represents a groundbreaking improvement. The Video and Optical See-Through Augmented Reality Surgical System (VOSTARS) is a new AR wearable head-mounted display (HMD), recently developed as an advanced navigation tool for maxillofacial and plastic surgery and other non-endoscopic surgeries. In this study, we report results of phantom tests with VOSTARS aimed to evaluate its feasibility and accuracy in performing maxillofacial surgical tasks. METHODS An early prototype of VOSTARS was used. Le Fort 1 osteotomy was selected as the experimental task to be performed under VOSTARS guidance. A dedicated set-up was prepared, including the design of a maxillofacial phantom, an ad hoc tracker anchored to the occlusal splint, and cutting templates for accuracy assessment. Both qualitative and quantitative assessments were carried out. RESULTS VOSTARS, used in combination with the designed maxilla tracker, showed excellent tracking robustness under operating room lighting. Accuracy tests showed that 100% of Le Fort 1 trajectories were traced with an accuracy of ±1.0 mm, and on average, 88% of the trajectory's length was within ±0.5 mm accuracy. CONCLUSIONS Our preliminary results suggest that the VOSTARS system can be a feasible and accurate solution for guiding maxillofacial surgical tasks, paving the way to its validation in clinical trials and for a wide spectrum of maxillofacial applications.
Collapse
Affiliation(s)
- Laura Cercenelli
- eDIMES Lab—Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy;
- Correspondence: ; Tel.: +39-0516364603
| | - Marina Carbone
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy; (M.C.); (S.C.); (F.C.); (V.F.)
| | - Sara Condino
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy; (M.C.); (S.C.); (F.C.); (V.F.)
| | - Fabrizio Cutolo
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy; (M.C.); (S.C.); (F.C.); (V.F.)
| | - Emanuela Marcelli
- eDIMES Lab—Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy;
| | - Achille Tarsitano
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, University of Bologna, 40138 Bologna, Italy; (A.T.); (C.M.); (G.B.)
| | - Claudio Marchetti
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, University of Bologna, 40138 Bologna, Italy; (A.T.); (C.M.); (G.B.)
| | - Vincenzo Ferrari
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy; (M.C.); (S.C.); (F.C.); (V.F.)
| | - Giovanni Badiali
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, University of Bologna, 40138 Bologna, Italy; (A.T.); (C.M.); (G.B.)
| |
Collapse
|
21
|
Švaco M, Stiperski I, Dlaka D, Šuligoj F, Jerbić B, Chudy D, Raguž M. Stereotactic Neuro-Navigation Phantom Designs: A Systematic Review. Front Neurorobot 2020; 14:549603. [PMID: 33192433 PMCID: PMC7644893 DOI: 10.3389/fnbot.2020.549603] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2020] [Accepted: 09/16/2020] [Indexed: 11/28/2022] Open
Abstract
Diverse stereotactic neuro-navigation systems are used daily in neurosurgery and novel systems are continuously being developed. Prior to clinical implementation of new surgical tools, methods or instruments, in vitro experiments on phantoms should be conducted. A stereotactic neuro-navigation phantom denotes a rigid or deformable structure resembling the cranium with the intracranial area. The use of phantoms is essential for the testing of complete procedures and their workflows, as well as for the final validation of the application accuracy. The aim of this study is to provide a systematic review of stereotactic neuro-navigation phantom designs, to identify their most relevant features, and to identify methodologies for measuring the target point error, the entry point error, and the angular error (α). The literature on phantom designs used for evaluating the accuracy of stereotactic neuro-navigation systems, i.e., robotic navigation systems, stereotactic frames, frameless navigation systems, and aiming devices, was searched. Eligible articles among the articles written in English in the period 2000-2020 were identified through the electronic databases PubMed, IEEE, Web of Science, and Scopus. The majority of phantom designs presented in those articles provide a suitable methodology for measuring the target point error, while there is a lack of objective measurements of the entry point error and angular error. We identified the need for a universal phantom design, which would be compatible with most common imaging techniques (e.g., computed tomography and magnetic resonance imaging) and suitable for simultaneous measurement of the target point, entry point, and angular errors.
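The review above centers on three accuracy metrics for stereotactic phantoms: target point error, entry point error, and angular error. A minimal geometric sketch of how such metrics are commonly computed from planned and achieved trajectory points follows; the function names are illustrative, and the exact definitions vary across the reviewed studies.

```python
import numpy as np

def target_point_error(planned_target, actual_target):
    """Euclidean distance (e.g., in mm) between the planned and
    achieved target points."""
    return float(np.linalg.norm(np.subtract(actual_target, planned_target)))

def entry_point_error(planned_entry, actual_entry):
    """Euclidean distance between the planned and achieved entry
    points on the phantom surface."""
    return float(np.linalg.norm(np.subtract(actual_entry, planned_entry)))

def angular_error(planned_entry, planned_target, actual_entry, actual_target):
    """Angle in degrees between the planned and achieved trajectory
    axes (entry-to-target vectors)."""
    v_plan = np.subtract(planned_target, planned_entry)
    v_act = np.subtract(actual_target, actual_entry)
    cos_a = np.dot(v_plan, v_act) / (np.linalg.norm(v_plan) * np.linalg.norm(v_act))
    # Clip guards against floating-point values just outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
```

A phantom design that exposes both the entry and target points of each trajectory allows all three quantities to be measured simultaneously, which is the gap the review identifies in many existing designs.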
Collapse
Affiliation(s)
- Marko Švaco
- Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Zagreb, Croatia
- Department of Neurosurgery, University Hospital Dubrava, Zagreb, Croatia
| | - Ivan Stiperski
- Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Zagreb, Croatia
| | - Domagoj Dlaka
- Department of Neurosurgery, University Hospital Dubrava, Zagreb, Croatia
| | - Filip Šuligoj
- Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Zagreb, Croatia
- Department of Neurosurgery, University Hospital Dubrava, Zagreb, Croatia
| | - Bojan Jerbić
- Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Zagreb, Croatia
- Department of Neurosurgery, University Hospital Dubrava, Zagreb, Croatia
| | - Darko Chudy
- Department of Neurosurgery, University Hospital Dubrava, Zagreb, Croatia
- Croatian Institute for Brain Research, School of Medicine University of Zagreb, Zagreb, Croatia
- Department of Surgery, School of Medicine University of Zagreb, Zagreb, Croatia
| | - Marina Raguž
- Department of Neurosurgery, University Hospital Dubrava, Zagreb, Croatia
- Croatian Institute for Brain Research, School of Medicine University of Zagreb, Zagreb, Croatia
- Department of Anatomy and Clinical Anatomy, School of Medicine University of Zagreb, Zagreb, Croatia
| |
Collapse
|
22
|
Liu T, Tai Y, Zhao C, Wei L, Zhang J, Pan J, Shi J. Augmented reality in neurosurgical navigation: a survey. Int J Med Robot 2020; 16:e2160. [PMID: 32890440 DOI: 10.1002/rcs.2160] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2020] [Revised: 08/19/2020] [Accepted: 08/29/2020] [Indexed: 11/12/2022]
Abstract
BACKGROUND Neurosurgery has exceptionally high requirements for minimal invasiveness and safety. This survey analyzes the practical application of AR in neurosurgical navigation and describes future trends in augmented reality neurosurgical navigation systems. METHODS In this survey, we searched the related keywords "augmented reality", "virtual reality", "neurosurgery", "surgical simulation", "brain tumor surgery", "neurovascular surgery", "temporal bone surgery", and "spinal surgery" through Google Scholar, World Neurosurgery, PubMed, and Science Direct. We collected 85 articles published over the past five years in areas related to this survey. RESULTS A detailed study of the application of AR in neurosurgery found that AR is steadily improving the overall efficiency of doctor training and treatment and can help neurosurgeons learn and practice surgical procedures with zero risk. CONCLUSIONS Neurosurgical navigation is essential in neurosurgery. Despite certain technical limitations, it remains a necessary tool for the pursuit of maximum safety and minimal invasiveness.
Collapse
Affiliation(s)
- Tao Liu
- Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
| | - Yonghang Tai
- Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
| | - Chengming Zhao
- Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
| | - Lei Wei
- Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, VIC, Australia
| | - Jun Zhang
- Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
| | - Junjun Pan
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
| | - Junsheng Shi
- Yunnan Key Lab of Opto-electronic Information Technology, Yunnan Normal University, Kunming, China
| |
Collapse
|
23
|
Sukegawa S, Kanno T, Matsuo A, Furuki Y. Surgical Strategy of Endoscopically assisted Periradicular Surgery Using Novel Head-mounted Display System. Ann Maxillofac Surg 2020; 10:186-189. [PMID: 32855938 PMCID: PMC7433984 DOI: 10.4103/ams.ams_83_19] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2019] [Revised: 12/28/2019] [Accepted: 03/28/2020] [Indexed: 11/06/2022] Open
Abstract
A novel head-mounted display offers high-quality endoscopic imagery in front of the eyes, irrespective of the head position. We present an application of the head-mounted display system as a personal integrated multi-image monitoring system in endoscopically assisted periradicular surgery. Our head-mounted display system synchronously displayed multiple forms of information as integrated, sharp, high-definition views of the endoscope, the biological monitor, and X-ray images (such as panoramic and computed tomography images) using picture-in-picture. In addition, this system can accommodate both the endoscopic field of view and the direct field of view. While monitoring the patient's general condition with the head-mounted display, the surgeon performed the operation with endoscopic animation. We could also switch smoothly between the direct surgical field and the endoscopic field of view without moving the head and without surgical interference. The availability of a head-mounted display system during endoscopically assisted periradicular surgery provided a comfortable and appropriate surgical environment for the surgeon.
Affiliation(s)
- Shintaro Sukegawa: Department of Oral and Maxillofacial Surgery, Kagawa Prefectural Central Hospital, Takamatsu, Kagawa, Japan
- Takahiro Kanno: Department of Oral and Maxillofacial Surgery, Shimane University Faculty of Medicine, Shimane, Japan
- Akira Matsuo: Department of Oral and Maxillofacial Surgery, Tokyo Medical University, Tokyo, Japan
- Yoshihiko Furuki: Department of Oral and Maxillofacial Surgery, Kagawa Prefectural Central Hospital, Takamatsu, Kagawa, Japan
24
V. V, Voggu AR, Arumalla K, Doshi R, Ramkumar A, Mahadevan A, Rao M. Mythri 1.0—Progress of an Indian Surgical Robot. Indian Journal of Neurosurgery 2020. [DOI: 10.1055/s-0040-1710108] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 10/23/2022]
Abstract
Neurosurgical procedures are performed using operating microscopes, whose core technology has changed little over the past 60 years. The National Institute of Mental Health and Neurosciences and the International Institute of Information Technology, Bengaluru, have embarked on a joint collaboration to develop a robot for neurosurgical applications. As a working prototype, the robotic microscope Mythri 1.0 has been developed. This article presents an overview of the development process, working, and features of the device.
Affiliation(s)
- Vikas V.: Department of Neurosurgery, National Institute of Mental Health and Neurosciences, Bengaluru, Karnataka, India
- Aravind Reddy Voggu: Department of Electronics and Communication Engineering, International Institute of Information Technology, Bangalore, Karnataka, India
- Kirit Arumalla: Department of Neurosurgery, National Institute of Mental Health and Neurosciences, Bengaluru, Karnataka, India
- Ronak Doshi: Department of Electronics and Communication Engineering, International Institute of Information Technology, Bangalore, Karnataka, India
- Aravind Ramkumar: Department of Electronics and Communication Engineering, International Institute of Information Technology, Bangalore, Karnataka, India
- Anita Mahadevan: Department of Neuropathology, National Institute of Mental Health and Neurosciences, Bengaluru, Karnataka, India
- Madhav Rao: Department of Electronics and Communication Engineering, International Institute of Information Technology, Bangalore, Karnataka, India
25
Enhancing Reality: A Systematic Review of Augmented Reality in Neuronavigation and Education. World Neurosurg 2020; 139:186-195. [DOI: 10.1016/j.wneu.2020.04.043] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Received: 02/24/2020] [Accepted: 04/06/2020] [Indexed: 12/11/2022]
26
Viehöfer AF, Wirth SH, Zimmermann SM, Jaberg L, Dennler C, Fürnstahl P, Farshad M. Augmented reality guided osteotomy in hallux valgus correction. BMC Musculoskelet Disord 2020; 21:438. [PMID: 32631342 PMCID: PMC7336637 DOI: 10.1186/s12891-020-03373-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 02/21/2020] [Accepted: 05/27/2020] [Indexed: 11/10/2022]
Abstract
BACKGROUND An optimal osteotomy angle avoids shortening of the first metatarsal bone after hallux valgus surgery and therefore reduces the risk of transfer metatarsalgia. The purpose of the present ex vivo study was to investigate whether augmented reality (AR) improves the accuracy of the distal osteotomy during hallux valgus surgery. METHODS Distal osteotomies of the first metatarsal were performed on a foot model by two surgeons with different levels of surgical experience, either with (AR, n = 15 × 2) or without (controls, n = 15 × 2) overlay of a hologram depicting an osteotomy angle perpendicular to the second metatarsal. Subsequently, the deviation of the osteotomy angle in the transverse plane was analyzed. RESULTS Overall, the AR-guided osteotomies tended to be more accurate (4.9 ± 4.2°) than the freehand cuts (6.7 ± 6.1°), although the difference was not significant (p = 0.2). The inexperienced surgeon performed more accurate osteotomies with AR (mean angle 6.4 ± 3.5° vs. 10.5 ± 5.5° freehand, p = 0.02), whereas no significant difference was observed for the experienced surgeon, with an osteotomy angle of around 3° in both settings. CONCLUSION This pilot study suggests that AR-guided osteotomies can potentially improve accuracy during hallux valgus correction, particularly for less experienced surgeons.
Affiliation(s)
- Arnd Fredrik Viehöfer: Department of Orthopaedics, Balgrist University Hospital, Forchstrasse 340, 8008 Zürich, Switzerland
- Stephan Hermann Wirth: Department of Orthopaedics, Balgrist University Hospital, Forchstrasse 340, 8008 Zürich, Switzerland; Computer-Assisted Research and Development Group, Balgrist University Hospital, Zurich, Switzerland
- Stefan Michael Zimmermann: Department of Orthopaedics, Balgrist University Hospital, Forchstrasse 340, 8008 Zürich, Switzerland
- Laurenz Jaberg: Department of Orthopaedics, Balgrist University Hospital, Forchstrasse 340, 8008 Zürich, Switzerland
- Cyrill Dennler: Department of Orthopaedics, Balgrist University Hospital, Forchstrasse 340, 8008 Zürich, Switzerland
- Philipp Fürnstahl: Computer-Assisted Research and Development Group, Balgrist University Hospital, Zurich, Switzerland
- Mazda Farshad: Department of Orthopaedics, Balgrist University Hospital, Forchstrasse 340, 8008 Zürich, Switzerland
27
Rychen J, Goldberg J, Raabe A, Bervini D. Augmented Reality in Superficial Temporal Artery to Middle Cerebral Artery Bypass Surgery: Technical Note. Oper Neurosurg (Hagerstown) 2020; 18:444-450. [PMID: 31232435 DOI: 10.1093/ons/opz176] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Received: 11/28/2018] [Accepted: 04/06/2019] [Indexed: 11/12/2022]
Abstract
BACKGROUND Augmented reality (AR) applied to surgery refers to the virtual superimposition of computer-generated anatomical information on the surgical field. AR assistance in extracranial-intracranial (EC-IC) bypass revascularization surgery has been reported to be a helpful technical adjunct. OBJECTIVE To describe our experience of using AR in superficial temporal artery to middle cerebral artery (STA-MCA) bypass surgery with the additional implementation of new technical processes to improve the safety and efficacy of the procedure. METHODS Data sets from preoperative imaging were loaded and fused in a single 3-dimensional matrix using the neuronavigation system. Anatomical structures of interest (the STA, a selected M4 branch of the MCA, the middle meningeal artery [MMA], and the primary motor cortex [PMC]) were segmented. After the registration of the patient and the operating microscope, the structures of interest were projected into the eyepiece of the microscope and superimposed onto the patient's head, creating the AR surgical field. RESULTS AR was shown to be useful in patients undergoing EC-IC bypass revascularization, mostly during the following 4 surgical steps: (1) microsurgical dissection of the donor vessel (STA); (2) tailoring the craniotomy above the recipient vessel (M4 branch of the MCA); (3) tailoring the craniotomy to spare the MMA; and (4) tailoring the craniotomy and the anastomosis to spare the PMC. CONCLUSION AR assistance in EC-IC bypass revascularization is a versatile technical adjunct for helping surgeons to ensure the safety and efficacy of the procedure.
Affiliation(s)
- Jonathan Rychen: Department of Neurosurgery, Inselspital, Bern University Hospital and University of Bern, Bern, Switzerland
- Johannes Goldberg: Department of Neurosurgery, Inselspital, Bern University Hospital and University of Bern, Bern, Switzerland
- Andreas Raabe: Department of Neurosurgery, Inselspital, Bern University Hospital and University of Bern, Bern, Switzerland
- David Bervini: Department of Neurosurgery, Inselspital, Bern University Hospital and University of Bern, Bern, Switzerland
28
Catapano JS, Fredrickson VL. Commentary: Augmented Reality in Superficial Temporal Artery to Middle Cerebral Artery Bypass Surgery: Technical Note. Oper Neurosurg (Hagerstown) 2020; 18:E108-E109. [PMID: 31529066 DOI: 10.1093/ons/opz263] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/30/2019] [Accepted: 06/08/2019] [Indexed: 11/14/2022]
Affiliation(s)
- Joshua S Catapano: Department of Neurosurgery, Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, Arizona
- Vance L Fredrickson: Department of Neurosurgery, Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, Arizona
29
Condino S, Fida B, Carbone M, Cercenelli L, Badiali G, Ferrari V, Cutolo F. Wearable Augmented Reality Platform for Aiding Complex 3D Trajectory Tracing. Sensors (Basel) 2020; 20:E1612. [PMID: 32183212 PMCID: PMC7146390 DOI: 10.3390/s20061612] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Received: 02/17/2020] [Revised: 03/05/2020] [Accepted: 03/11/2020] [Indexed: 01/28/2023]
Abstract
Augmented reality (AR) head-mounted displays (HMDs) are emerging as the most efficient output medium to support manual tasks performed under direct vision. Nevertheless, technological and human-factor limitations still hinder their routine use for aiding high-precision manual tasks in the peripersonal space. To overcome such limitations, in this work we show the results of a user study aimed at qualitatively and quantitatively validating a recently developed AR platform specifically conceived for guiding complex 3D trajectory tracing tasks. The AR platform comprises a new-concept AR video see-through (VST) HMD and a dedicated software framework for the effective deployment of the AR application. In the experiments, the subjects were asked to perform 3D trajectory tracing tasks on 3D-printed replicas of planar structures or more elaborate bony anatomies. The accuracy of the trajectories traced by the subjects was evaluated using templates designed ad hoc to match the surface of the phantoms. The quantitative results suggest that the AR platform can be used to guide high-precision tasks: on average, more than 94% of the traced trajectories stayed within an error margin lower than 1 mm. The results confirm that the proposed AR platform can promote the adoption of AR HMDs for guiding high-precision manual tasks in the peripersonal space.
Affiliation(s)
- Sara Condino: Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Benish Fida: Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Marina Carbone: Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Laura Cercenelli: Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, Alma Mater Studiorum University of Bologna, 40138 Bologna, Italy
- Giovanni Badiali: Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, Alma Mater Studiorum University of Bologna, 40138 Bologna, Italy
- Vincenzo Ferrari: Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Fabrizio Cutolo: Information Engineering Department, University of Pisa, 56126 Pisa, Italy
30
Ambiguity-Free Optical-Inertial Tracking for Augmented Reality Headsets. Sensors 2020; 20:s20051444. [PMID: 32155808 PMCID: PMC7085738 DOI: 10.3390/s20051444] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Received: 02/13/2020] [Revised: 03/04/2020] [Accepted: 03/04/2020] [Indexed: 01/19/2023]
Abstract
The increasing capability of computing power and mobile graphics has made possible the release of self-contained augmented reality (AR) headsets featuring efficient head-anchored tracking solutions. Ego motion estimation based on well-established infrared tracking of markers ensures sufficient accuracy and robustness. Unfortunately, wearable visible-light stereo cameras with short baseline and operating under uncontrolled lighting conditions suffer from tracking failures and ambiguities in pose estimation. To improve the accuracy of optical self-tracking and its resiliency to marker occlusions, degraded camera calibrations, and inconsistent lighting, in this work we propose a sensor fusion approach based on Kalman filtering that integrates optical tracking data with inertial tracking data when computing motion correlation. In order to measure improvements in AR overlay accuracy, experiments are performed with a custom-made AR headset designed for supporting complex manual tasks performed under direct vision. Experimental results show that the proposed solution improves the head-mounted display (HMD) tracking accuracy by one third and improves the robustness by also capturing the orientation of the target scene when some of the markers are occluded and when the optical tracking yields unstable and/or ambiguous results due to the limitations of using head-anchored stereo tracking cameras under uncontrollable lighting conditions.
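The abstract above describes a now-standard fusion pattern: predict pose from inertial data at every step, and correct with an optical (marker-based) measurement whenever one is available. As a minimal illustration, here is a one-dimensional Kalman filter sketch in Python; the state model, noise values, and occlusion schedule are invented for the example and are not taken from the paper.

```python
# Minimal 1-D Kalman filter illustrating optical-inertial fusion:
# the gyro rate drives the prediction step, and the optical
# (marker-based) angle measurement, when the marker is visible,
# drives the correction step.

def fuse(gyro_rates, optical_angles, dt=0.01, q=1e-4, r=1e-2):
    """gyro_rates: angular rates (rad/s); optical_angles: angle
    measurements (rad), or None when the marker is occluded."""
    x, p = 0.0, 1.0  # state estimate (angle, rad) and its variance
    estimates = []
    for rate, z in zip(gyro_rates, optical_angles):
        # Predict: integrate the inertial rate; uncertainty grows.
        x += rate * dt
        p += q
        # Correct: only when the optical tracker returns a measurement.
        if z is not None:
            k = p / (p + r)   # Kalman gain
            x += k * (z - x)  # blend prediction and measurement
            p *= 1.0 - k
        estimates.append(x)
    return estimates

if __name__ == "__main__":
    n, dt, rate = 200, 0.01, 1.0
    truth = [rate * dt * (i + 1) for i in range(n)]
    # Marker occluded for steps 50-99: the filter coasts on the gyro.
    optical = [t if not (50 <= i < 100) else None
               for i, t in enumerate(truth)]
    est = fuse([rate] * n, optical, dt)
    print(abs(est[-1] - truth[-1]))  # final tracking error (near zero)
```

When the marker is occluded, the update step is simply skipped and the filter coasts on integrated gyro rates, which is the failure mode that the paper's optical-inertial integration is designed to soften.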
31
Lee S, Shim S, Ha HG, Lee H, Hong J. Simultaneous Optimization of Patient-Image Registration and Hand-Eye Calibration for Accurate Augmented Reality in Surgery. IEEE Trans Biomed Eng 2020; 67:2669-2682. [PMID: 31976878 DOI: 10.1109/tbme.2020.2967802] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Indexed: 11/08/2022]
Abstract
OBJECTIVE Augmented reality (AR) navigation using a position sensor in endoscopic surgery relies on the quality of patient-image registration and hand-eye calibration. Conventional methods collect the necessary data and compute the two output transformation matrices separately. However, the AR display setting during surgery generally differs from that during preoperative processes. Although conventional methods can identify optimal solutions under the initial conditions, AR display errors are unavoidable during surgery owing to the inherent computational complexity of AR processes, such as error accumulation over successive matrix multiplications and tracking errors of the position sensor. METHODS We propose the simultaneous optimization of patient-image registration and hand-eye calibration in an AR environment before surgery. The relationship between the endoscope and a virtual object to be overlaid is first calculated using an endoscopic image, which also serves as a reference during optimization. After incorporating the tracking information from the position sensor, patient-image registration and hand-eye calibration are optimized in the least-squares sense. RESULTS Experiments with synthetic data verify that the proposed method is less sensitive to computation and tracking errors. A phantom experiment with a position sensor was also conducted; the accuracy of the proposed method was significantly higher than that of the conventional method. CONCLUSION The AR accuracy of the proposed method was compared with that of conventional methods, and the superiority of the proposed method was verified. SIGNIFICANCE This study demonstrates that the proposed method has substantial potential for improving AR navigation accuracy.
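Hand-eye calibration of the kind being optimized here is classically posed as solving AX = XB, where A and B are paired relative motions of the camera and the tracked sensor and X is the unknown fixed transform between them. The sketch below is a generic illustration rather than the paper's joint-optimization method: it recovers the rotation with the standard axis-angle least-squares construction and the translation with stacked linear least squares. The helper names and the synthetic motions are invented for the example.

```python
import numpy as np

def rot(axis, angle):
    """Rodrigues' formula: rotation matrix from an axis and an angle."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def log_so3(R):
    """Axis-angle vector of a rotation matrix (angle assumed in (0, pi))."""
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle / (2 * np.sin(angle)) * v

def hand_eye(As, Bs):
    """Solve A_i X = X B_i for X = (Rx, tx), given >= 3 motion pairs
    (R, t) whose rotation axes span 3D space."""
    # Rotation axes satisfy alpha_i = Rx beta_i, so with
    # M = sum_i beta_i alpha_i^T the solution is Rx = (M^T M)^(-1/2) M^T.
    M = sum(np.outer(log_so3(Rb), log_so3(Ra))
            for (Ra, _), (Rb, _) in zip(As, Bs))
    w, V = np.linalg.eigh(M.T @ M)
    Rx = V @ np.diag(w ** -0.5) @ V.T @ M.T
    # Translations satisfy (R_Ai - I) tx = Rx t_Bi - t_Ai; stack and solve.
    lhs = np.vstack([Ra - np.eye(3) for Ra, _ in As])
    rhs = np.concatenate([Rx @ tb - ta for (_, ta), (_, tb) in zip(As, Bs)])
    tx = np.linalg.lstsq(lhs, rhs, rcond=None)[0]
    return Rx, tx

def make_pair(Rx, tx, Ra, ta):
    """Synthesize the sensor motion B = X^-1 A X for a ground-truth X."""
    return Rx.T @ Ra @ Rx, Rx.T @ (Ra @ tx + ta - tx)

if __name__ == "__main__":
    Rx_true, tx_true = rot([1, 2, 3], 0.7), np.array([0.1, -0.2, 0.05])
    As = [(rot([1, 0, 0], 0.5), np.array([0.3, 0.0, 0.1])),
          (rot([0, 1, 0], 0.8), np.array([0.0, 0.2, 0.0])),
          (rot([0, 0, 1], 0.6), np.array([0.1, 0.1, 0.2]))]
    Bs = [make_pair(Rx_true, tx_true, Ra, ta) for Ra, ta in As]
    Rx_est, tx_est = hand_eye(As, Bs)
    print(np.allclose(Rx_est, Rx_true), np.allclose(tx_est, tx_true))
```

In the paper's setting the corresponding residuals would be minimized jointly with the patient-image registration; the closed-form sketch above only shows the hand-eye half of that problem.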
32
Smartphone Augmented Reality CT-Based Platform for Needle Insertion Guidance: A Phantom Study. Cardiovasc Intervent Radiol 2020; 43:756-764. [DOI: 10.1007/s00270-019-02403-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Received: 08/21/2019] [Accepted: 12/21/2019] [Indexed: 01/06/2023]
33
Combination of CAD/CAM and Augmented Reality in Free Fibula Bone Harvest. Plast Reconstr Surg Glob Open 2019; 7:e2510. [PMID: 31942302 PMCID: PMC6908345 DOI: 10.1097/gox.0000000000002510] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Received: 06/21/2019] [Accepted: 08/30/2019] [Indexed: 12/03/2022]
Abstract
Supplemental Digital Content is available in the text. CAD/CAM technology for mandibular reconstruction has improved the outcomes of restoring complex mandibular defects. Augmented reality (AR) represents an evolution of navigation-assisted surgery. This technology merges the images of the virtual planning with the anatomy of the patient, presenting an enhanced scene to the surgeon's eye, and can also display additional information for the surgeon in a single scene. Unlike classical navigation, this scenario can be obtained with a marker-less registration method, without using reference points or fiducial markers. Given this technological evolution, together with our extensive experience with the CAD/CAM protocol for mandibular reconstruction, we developed this feasibility study to evaluate the possibility of using a marker-less image registration system. Moreover, we evaluated the overlay of the virtual planning and its reproducibility using AR. We performed a case series of 3 consecutive patients who underwent mandibular reconstruction using AR-assisted fibular free flap harvesting applying our digital workflow. Once the mobile app installed on our tablet is launched, registration is performed by a shape-recognition system applied to the patient's leg, rendering in real time a superimposition of the patient's bony, vascular, and skin anatomy together with the surgical planning of the reconstruction. AR-assisted fibular free flap harvesting was performed in all cases. We believe that AR can be a promising technology for complex mandibular reconstruction.
34
Bárdosi Z, Plattner C, Özbek Y, Hofmann T, Milosavljevic S, Schartinger V, Freysinger W. CIGuide: in situ augmented reality laser guidance. Int J Comput Assist Radiol Surg 2019; 15:49-57. [PMID: 31506882 PMCID: PMC6949325 DOI: 10.1007/s11548-019-02066-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Received: 03/18/2019] [Accepted: 09/02/2019] [Indexed: 11/28/2022]
Abstract
PURPOSE: A robotic intraoperative laser guidance system with hybrid optic-magnetic tracking for skull base surgery is presented. It provides in situ augmented reality guidance for microscopic interventions at the lateral skull base with minimal mental and workload overhead on surgeons, who work without a monitor or dedicated pointing tools. METHODS: Three components were developed: a registration tool (Rhinospider), a hybrid magneto-optic-tracked robotic feedback control scheme, and a modified robotic end-effector. Rhinospider optimizes registration of the patient and preoperative CT data by excluding user errors in fiducial localization with magnetic tracking. The hybrid controller uses an integrated microscope HD camera for robotic control, with a guidance beam shining on a dual-plate setup to avoid magnetic field distortions. A robotic needle insertion platform (iSYS Medizintechnik GmbH, Austria) was modified to position a laser beam with high precision in a surgical scene compatible with microscopic surgery. RESULTS: System accuracy was evaluated quantitatively at various target positions on a phantom. The accuracy found is 1.2 mm ± 0.5 mm; errors are primarily due to magnetic tracking. This application accuracy seems suitable for most surgical procedures in the lateral skull base. The system was evaluated during a mastoidectomy of an anatomic head specimen and was judged useful by the surgeon. CONCLUSION: A hybrid robotic laser guidance system with direct visual feedback is proposed for navigated drilling and intraoperative structure localization. The system provides visual cues directly on or in the patient anatomy, reducing standard limitations of AR visualizations such as depth perception. The custom-built end-effector for the iSYS robot is transparent to the use of surgical microscopes and compatible with magnetic tracking. The cadaver experiment showed that guidance was accurate and that the end-effector is unobtrusive. This laser guidance has the potential to aid the surgeon in finding the optimal mastoidectomy trajectory in more difficult interventions.
Affiliation(s)
- Yusuf Özbek: Medical University Innsbruck, Innsbruck, Austria
35
Meola A, Chang SD. Letter: Navigation-Linked Heads-Up Display in Intracranial Surgery: Early Experience. Oper Neurosurg (Hagerstown) 2019; 14:E71-E72. [PMID: 29590481 DOI: 10.1093/ons/opy048] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Indexed: 11/14/2022]
Affiliation(s)
- Antonio Meola: Department of Neurosurgery, Stanford University, Stanford, California
- Steven D Chang: Department of Neurosurgery, Stanford University, Stanford, California
36
Augmented Reality in Transsphenoidal Surgery. World Neurosurg 2019; 125:e873-e883. [DOI: 10.1016/j.wneu.2019.01.202] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Received: 11/10/2018] [Revised: 01/27/2019] [Accepted: 01/30/2019] [Indexed: 11/23/2022]
37
Landry EC, Yong M, Pauwels J, Chadha NK. The use of video glasses improved learning of tonsillectomy and adenoidectomy surgery: A randomized controlled trial. Int J Pediatr Otorhinolaryngol 2019; 117:12-16. [PMID: 30579065 DOI: 10.1016/j.ijporl.2018.10.039] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Received: 07/19/2018] [Revised: 10/24/2018] [Accepted: 10/24/2018] [Indexed: 10/28/2022]
Abstract
OBJECTIVE One of the most common challenges in surgical education is giving trainees practical experience through observing procedures in the operating room. For some procedures, a narrow surgical view severely limits the learning experience. Video glasses are new devices that offer the potential to project the primary surgeon's exact view to learners in real time, allowing an enhanced operative learning experience. STUDY DESIGN Single-center randomized prospective trial. SETTING Tertiary care pediatric hospital. PARTICIPANTS Using block randomization, medical students and surgical residents observed either a tonsillectomy or an adenoidectomy, either directly at the table-side or via a real-time video feed from the surgeon's video glasses projected to a screen in the operating room, in random order. Participants then completed a survey comparing aspects of their learning experience when viewing the procedure through the video feed versus direct observation. MAIN OUTCOME MEASURES The hypothesis that video glasses provide an improved overall learning experience and a realistic simulation of the open surgical procedures tested. RESULTS 23 trainees participated in the study. Survey results demonstrated that the overall learning experience with video glasses was significantly better than with direct visualization (mean Visual Analog Scale (VAS) score 82/100 vs. 64/100, p = 0.021). Video glasses were superior for the view of the surgical field (83/100 vs. 54/100 on VAS, p < 0.001) and for the ability to identify anatomical structures (79/100 vs. 56/100 on VAS, p = 0.001). Following the surgical steps was also easier with video glasses than by direct visualization (81/100 vs. 69/100 on VAS, p = 0.039). All participants stated that video glasses closely simulated the learning environment of the real-life open procedure. CONCLUSION This study showed that video glasses are beneficial for surgical education and a realistic tool for learners at varying levels of training. Video glasses may significantly improve the learning experience for procedures with a narrow field of view.
Affiliation(s)
- Evie C Landry: Division of Pediatric Otolaryngology-Head and Neck Surgery, BC Children's Hospital, University of British Columbia, Vancouver, Canada
- Michael Yong: Division of Pediatric Otolaryngology-Head and Neck Surgery, BC Children's Hospital, University of British Columbia, Vancouver, Canada
- Julie Pauwels: Division of Pediatric Otolaryngology-Head and Neck Surgery, BC Children's Hospital, University of British Columbia, Vancouver, Canada
- Neil K Chadha: Division of Pediatric Otolaryngology-Head and Neck Surgery, BC Children's Hospital, University of British Columbia, Vancouver, Canada
38
Song T, Yang C, Dianat O, Azimi E. Endodontic guided treatment using augmented reality on a head-mounted display system. Healthc Technol Lett 2018. [DOI: 10.1049/htl.2018.5062] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Indexed: 01/27/2023]
Affiliation(s)
- Tianyu Song: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
- Chenglin Yang: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
- Omid Dianat: Division of Endodontics, School of Dentistry, University of Maryland, Baltimore, USA
- Ehsan Azimi: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
39
Frantz T, Jansen B, Duerinck J, Vandemeulebroucke J. Augmenting Microsoft's HoloLens with vuforia tracking for neuronavigation. Healthc Technol Lett 2018; 5:221-225. [PMID: 30464854 PMCID: PMC6222243 DOI: 10.1049/htl.2018.5079] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Received: 08/16/2018] [Accepted: 09/03/2018] [Indexed: 11/20/2022]
Abstract
Major hurdles for Microsoft's HoloLens as a tool in medicine have been accessing tracking data and the relatively high localisation error of the displayed information, cumulatively resulting in its limited use and minimal quantification. The following work investigates augmenting the HoloLens with the proprietary image-processing SDK Vuforia, integrating data from its front-facing RGB camera to provide more spatially stable holograms for neuronavigational use. Continuous camera tracking was able to maintain hologram registration with a mean perceived drift of 1.41 mm, as well as a mean sub-2-mm surface point localisation accuracy of 53%, all while allowing the researcher to walk about a test area. This represents a 68% improvement for the latter and a 34% improvement for the former compared with a typical HoloLens deployment used as a control. Both represent a significant improvement in hologram stability given the current state of the art, and to the best of the authors' knowledge these are the first quantified measurements of augmenting hologram stability using data from the RGB sensor.
Affiliation(s)
- Taylor Frantz: Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), Pleinlaan 2, B-1050 Brussels, Belgium; imec, Kapeldreef 75, B-3001 Leuven, Belgium
- Bart Jansen: Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), Pleinlaan 2, B-1050 Brussels, Belgium; imec, Kapeldreef 75, B-3001 Leuven, Belgium
- Johnny Duerinck: Vrije Universiteit Brussel (VUB), Department of Neurosurgery, Laarbeeklaan 101, 1090 Brussels, Belgium
- Jef Vandemeulebroucke: Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), Pleinlaan 2, B-1050 Brussels, Belgium; imec, Kapeldreef 75, B-3001 Leuven, Belgium
40
Xu X, Zheng Y, Yao S, Sun G, Xu B, Chen X. A low-cost multimodal head-mounted display system for neuroendoscopic surgery. Brain Behav 2018; 8:e00891. [PMID: 29568688 PMCID: PMC5853619 DOI: 10.1002/brb3.891] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Received: 08/04/2017] [Revised: 09/26/2017] [Accepted: 11/15/2017] [Indexed: 11/15/2022]
Abstract
BACKGROUND With rapid advances in technology, wearable devices such as head-mounted displays (HMDs) have been adopted for various uses in medical science, ranging from simply aiding fitness to assisting surgery. We aimed to investigate the feasibility and practicability of a low-cost multimodal HMD system in neuroendoscopic surgery. METHODS A multimodal HMD system, consisting mainly of an HMD with two built-in displays, an action camera, and a laptop computer displaying reconstructed medical images, was developed to assist neuroendoscopic surgery. With this tightly integrated system, the neurosurgeon could freely switch between endoscopic images, three-dimensional (3D) reconstructed virtual endoscopy images, and images of the surrounding environment. Using a Leap Motion controller, the neurosurgeon could adjust or rotate the 3D virtual endoscopic images at a distance to better understand the positional relationship between lesions and normal tissues. RESULTS A total of 21 consecutive patients with ventricular system diseases underwent neuroendoscopic surgery with the aid of this system. All operations were accomplished successfully, and no system-related complications occurred. The HMD was comfortable to wear and easy to operate, and its screen resolution was high enough for the neurosurgeon to operate carefully. With the system, the neurosurgeon could gain a better comprehension of lesions by freely switching among images of different modalities. The system was quick to learn, with skill increasing rapidly over successive uses, and it was relatively low-cost compared with commercially available surgical assistant instruments. CONCLUSIONS The multimodal HMD system is feasible, practical, helpful, and relatively cost-efficient in neuroendoscopic surgery.
Affiliation(s)
- Xinghua Xu: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Yi Zheng: Department of Dermatology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Shujing Yao: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Guochen Sun: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Bainan Xu: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Xiaolei Chen: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
41
Benyoucef Y, Lesport P, Chassagneux A. The Emergent Role of Virtual Reality in the Treatment of Neuropsychiatric Disease. Front Neurosci 2017; 11:491. [PMID: 28928630 PMCID: PMC5591848 DOI: 10.3389/fnins.2017.00491] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Received: 06/14/2017] [Accepted: 08/21/2017] [Indexed: 12/28/2022]