1. Han Z, Dou Q. A review on organ deformation modeling approaches for reliable surgical navigation using augmented reality. Comput Assist Surg (Abingdon) 2024; 29:2357164. PMID: 39253945. DOI: 10.1080/24699322.2024.2357164.
Abstract
Augmented Reality (AR) holds the potential to revolutionize surgical procedures by allowing surgeons to visualize critical structures within the patient's body. This is achieved by superimposing preoperative organ models onto the actual anatomy. Challenges arise from dynamic deformations of organs during surgery, which make preoperative models inadequate for faithfully representing intraoperative anatomy. To enable reliable navigation in augmented surgery, modeling intraoperative deformation to obtain an accurate alignment of the preoperative organ model with the intraoperative anatomy is indispensable. Although various methods have been proposed to model intraoperative organ deformation, few literature reviews systematically categorize and summarize these approaches. This review aims to fill that gap by providing a comprehensive, technically oriented overview of modeling methods for intraoperative organ deformation in augmented reality in surgery. Through a systematic search and screening process, 112 closely relevant papers were included in this review. By presenting the current status of organ deformation modeling methods and their clinical applications, this review seeks to enhance the understanding of organ deformation modeling in AR-guided surgery and to discuss potential topics for future advancements.
Affiliation(s)
- Zheng Han
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
2. Rasheed H, Dorent R, Fehrentz M, Morozov D, Kapur T, Wells WM, Golby A, Frisken S, Schnabel JA, Haouchine N. Learning to Match 2D Keypoints Across Preoperative MR and Intraoperative Ultrasound. Simplifying Medical Ultrasound: 5th International Workshop, ASMUS 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 6, 2024, Proceedings 2024; 15186:78-87. PMID: 39736888. PMCID: PMC11682695. DOI: 10.1007/978-3-031-73647-6_8.
Abstract
In this paper, we propose a texture-invariant 2D keypoint descriptor designed specifically for matching preoperative Magnetic Resonance (MR) images with intraoperative Ultrasound (US) images. We introduce a matching-by-synthesis strategy, in which intraoperative US images are synthesized from MR images while accounting for multiple MR modalities and intraoperative US variability. We build our training set by enforcing keypoint localization over all images, then train a patient-specific descriptor network that learns texture-invariant discriminative features in a supervised contrastive manner, leading to robust keypoint descriptors. Our experiments on real cases with ground truth show the effectiveness of the proposed approach, which outperforms state-of-the-art methods and achieves 80.35% matching precision on average.
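The matching and precision evaluation described in this abstract can be illustrated with a minimal sketch: nearest-neighbour matching in descriptor space, scored against ground-truth correspondences. This is not the authors' network; the cosine-similarity matcher, descriptor dimensions, and toy data below are assumptions made for illustration only.

```python
import numpy as np

def match_keypoints(desc_mr: np.ndarray, desc_us: np.ndarray) -> np.ndarray:
    """For each MR keypoint descriptor, return the index of the most
    similar US keypoint descriptor under cosine similarity."""
    a = desc_mr / np.linalg.norm(desc_mr, axis=1, keepdims=True)
    b = desc_us / np.linalg.norm(desc_us, axis=1, keepdims=True)
    return np.argmax(a @ b.T, axis=1)

def matching_precision(predicted: np.ndarray, ground_truth: np.ndarray) -> float:
    """Fraction of keypoints matched to their ground-truth correspondence."""
    return float(np.mean(np.asarray(predicted) == np.asarray(ground_truth)))

# Toy data: 5 keypoints with 32-dimensional descriptors; the US descriptors
# are noisy copies of the MR descriptors, so the true match is the identity.
rng = np.random.default_rng(0)
desc_mr = rng.standard_normal((5, 32))
desc_us = desc_mr + 0.01 * rng.standard_normal((5, 32))
matches = match_keypoints(desc_mr, desc_us)
precision = matching_precision(matches, np.arange(5))
```

A texture-invariant descriptor aims to make the learned analogue of `desc_mr` and `desc_us` close for true correspondences despite the very different MR and US appearance, so that simple nearest-neighbour matching like this succeeds.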
Affiliation(s)
- Hassan Rasheed
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Technical University of Munich, Munich, Germany
- Helmholtz Center Munich, Munich, Germany
- Reuben Dorent
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Maximilian Fehrentz
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Technical University of Munich, Munich, Germany
- Daniil Morozov
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Technical University of Munich, Munich, Germany
- Tina Kapur
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- William M Wells
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Alexandra Golby
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Sarah Frisken
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Julia A Schnabel
- Technical University of Munich, Munich, Germany
- Helmholtz Center Munich, Munich, Germany
- Nazim Haouchine
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
3. Oya T, Kadomatsu Y, Chen-Yoshikawa TF, Nakao M. 2D/3D deformable registration for endoscopic camera images using self-supervised offline learning of intraoperative pneumothorax deformation. Comput Med Imaging Graph 2024; 116:102418. PMID: 39079410. DOI: 10.1016/j.compmedimag.2024.102418.
Abstract
Shape registration of patient-specific organ shapes to endoscopic camera images is expected to be a key to realizing image-guided surgery, and a variety of machine learning methods have been considered for it. Because the number of training data available from clinical cases is limited, the use of synthetic images generated from a statistical deformation model has been attempted; however, the difference between synthetic images and real scenes degrades estimation. In this study, we propose a self-supervised offline learning framework for model-based registration using image features that can be obtained from both synthetic images and real camera images. Because of the limited number of endoscopic images available for training, we use synthetic images generated from a nonlinear deformation model that represents possible intraoperative pneumothorax deformations. To address the difficulty of estimating deformed shapes and viewpoints from these common image features, we reduce the registration error by adding shading and distance information, which is available as prior knowledge in the synthetic images. Shape registration with real camera images is performed by learning to predict the differential model parameters between two synthetic images. The developed framework achieved a mean absolute error of less than 10 mm and a mean distance of less than 5 mm in thoracoscopic pulmonary cancer resection, confirming improved prediction accuracy compared with conventional methods.
Affiliation(s)
- Tomoki Oya
- Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo, Kyoto, 606-8501, Japan
- Yuka Kadomatsu
- Nagoya University Hospital, 65 Tsurumai-cho, Showa-ku, Nagoya, 466-8550, Japan
- Megumi Nakao
- Graduate School of Medicine, Kyoto University, 53 Shogoin Kawahara-cho, Sakyo, Kyoto, 606-8507, Japan
4. van Doormaal JAM, van Doormaal TPC. Augmented Reality in Neurosurgery. Advances in Experimental Medicine and Biology 2024; 1462:351-374. PMID: 39523276. DOI: 10.1007/978-3-031-64892-2_21.
Abstract
Augmented Reality (AR) involves superimposing digital content onto the real environment. AR has evolved into a viable tool in neurosurgery, enhancing intraoperative navigation, medical education, and surgical training by integrating anatomical data with the real world. Neurosurgical AR relies on several key techniques: image segmentation, model rendering, AR projection, and image-to-patient registration. For each of these technical components, different solutions exist, each with its own advantages and limitations. Intraoperative AR applications cover diverse neurosurgical disciplines including vascular, oncological, spinal, and functional surgery. Preliminary studies indicate that AR may improve the understanding of complex anatomical structures and offer sufficient accuracy for use as a navigational tool. Additionally, AR shows promise in enhancing surgical training and patient education through interactive 3D models, aiding in the comprehension of intricate anatomical details. Despite its potential, the widespread adoption of AR in clinical settings depends on overcoming technical limitations and validating its clinical efficacy.
Affiliation(s)
- Jesse A M van Doormaal
- Department of Neurosurgery, University Medical Centre Utrecht, Utrecht, The Netherlands.
5. Haouchine N, Dorent R, Juvekar P, Torio E, Wells WM, Kapur T, Golby AJ, Frisken S. Learning Expected Appearances for Intraoperative Registration during Neurosurgery. Med Image Comput Comput Assist Interv 2023; 14228:227-237. PMID: 38371724. PMCID: PMC10870253. DOI: 10.1007/978-3-031-43996-4_22.
Abstract
We present a novel method for intraoperative patient-to-image registration by learning Expected Appearances. Our method uses preoperative imaging to synthesize patient-specific expected views through a surgical microscope for a predicted range of transformations. It then estimates the camera pose by minimizing the dissimilarity between the intraoperative 2D view through the optical microscope and the synthesized expected texture. In contrast to conventional methods, our approach transfers the processing tasks to the preoperative stage, thereby reducing the impact of the low-resolution, distorted, and noisy intraoperative images that often degrade registration accuracy. We applied our method in the context of neuronavigation during brain surgery. We evaluated our approach on synthetic data and on retrospective data from 6 clinical cases. Our method outperformed state-of-the-art methods and achieved accuracies that met current clinical standards.
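The core idea of this abstract, precomputing expected appearances and then selecting the pose whose synthesized view best matches the intraoperative image, can be sketched as a discrete search. This is a simplified stand-in for the paper's method: normalized cross-correlation as the similarity measure, the pose labels, and the toy image bank are all assumptions for illustration.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation; close to 1.0 means the images are
    identical up to brightness and contrast changes."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def estimate_pose(intraop_view, expected_appearances):
    """Return the (pose, synthesized view) pair whose appearance is most
    similar to the intraoperative view."""
    return max(expected_appearances, key=lambda pair: ncc(intraop_view, pair[1]))

# Toy bank of two hypothetical poses; "pose_B" was synthesized from
# (a noisy copy of) the same view, so it should be selected.
rng = np.random.default_rng(1)
view = rng.random((32, 32))
bank = [
    ("pose_A", rng.random((32, 32))),
    ("pose_B", view + 0.01 * rng.standard_normal((32, 32))),
]
best_pose, best_view = estimate_pose(view, bank)
```

In the actual method the pose estimate is continuous rather than a lookup over a fixed bank, but the expensive synthesis work happens preoperatively in both cases, which is the point the abstract emphasizes.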
Affiliation(s)
- Nazim Haouchine
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Reuben Dorent
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Parikshit Juvekar
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Erickson Torio
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- William M Wells
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Tina Kapur
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Alexandra J Golby
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Sarah Frisken
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
6. Ragnhildstveit A, Li C, Zimmerman MH, Mamalakis M, Curry VN, Holle W, Baig N, Uğuralp AK, Alkhani L, Oğuz-Uğuralp Z, Romero-Garcia R, Suckling J. Intra-operative applications of augmented reality in glioma surgery: a systematic review. Front Surg 2023; 10:1245851. PMID: 37671031. PMCID: PMC10476869. DOI: 10.3389/fsurg.2023.1245851.
Abstract
Background: Augmented reality (AR) is increasingly being explored in neurosurgical practice. By visualizing patient-specific, three-dimensional (3D) models in real time, surgeons can improve their spatial understanding of complex anatomy and pathology, thereby optimizing intra-operative navigation, localization, and resection. Here, we aimed to capture applications of AR in glioma surgery, their current status, and future potential.
Methods: A systematic review of the literature was conducted, adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. PubMed, Embase, and Scopus electronic databases were queried from inception to October 10, 2022. Leveraging the Population, Intervention, Comparison, Outcomes, and Study design (PICOS) framework, study eligibility was evaluated in the qualitative synthesis. Data regarding AR workflow, surgical application, and associated outcomes were then extracted. The quality of evidence was additionally examined, using hierarchical classes of evidence in neurosurgery.
Results: The search returned 77 articles. Forty were subject to title and abstract screening, while 25 proceeded to full-text screening. Of these, 22 articles met eligibility criteria and were included in the final review. During abstraction, studies were classified as "development" or "intervention" based on primary aims. Overall, AR was qualitatively advantageous, due to enhanced visualization of gliomas and critical structures, frequently aiding maximal safe resection. Non-rigid applications were also useful in disclosing and compensating for intra-operative brain shift. Nevertheless, there was high variance in registration methods and measurements, which considerably impacted projection accuracy. Most studies were of low-level evidence, yielding heterogeneous results.
Conclusions: AR has increasing potential for glioma surgery, with capacity to positively influence the onco-functional balance. However, technical and design limitations are readily apparent. The field must consider the importance of consistency and replicability, as well as the level of evidence, to effectively converge on standard approaches that maximize patient benefit.
Affiliation(s)
- Anya Ragnhildstveit
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Psychiatry, University of Cambridge, Cambridge, England
- Chao Li
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, England
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, England
- Michail Mamalakis
- Department of Psychiatry, University of Cambridge, Cambridge, England
- Victoria N. Curry
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States
- Willis Holle
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Physics and Astronomy, The University of Utah, Salt Lake City, UT, United States
- Noor Baig
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, United States
- Layth Alkhani
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Biology, Stanford University, Stanford, CA, United States
- Rafael Romero-Garcia
- Department of Psychiatry, University of Cambridge, Cambridge, England
- Instituto de Biomedicina de Sevilla (IBiS) HUVR/CSIC/Universidad de Sevilla/CIBERSAM, ISCIII, Dpto. de Fisiología Médica y Biofísica
- John Suckling
- Department of Psychiatry, University of Cambridge, Cambridge, England
7. Kögl FV, Léger É, Haouchine N, Torio E, Juvekar P, Navab N, Kapur T, Pieper S, Golby A, Frisken S. A Tool-free Neuronavigation Method based on Single-view Hand Tracking. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022; 11:1307-1315. PMID: 37457380. PMCID: PMC10348700. DOI: 10.1080/21681163.2022.2163428.
Abstract
This work presents a novel tool-free neuronavigation method that can be used with a single RGB commodity camera. Compared with freehand craniotomy placement methods, the proposed system is more intuitive and less error-prone. The proposed method also has several advantages over standard neuronavigation platforms. First, it has a much lower cost, since it does not require an optical tracking camera or electromagnetic field generator, which are typically the most expensive parts of a neuronavigation system; this makes it much more accessible. Second, it requires minimal setup, meaning that it can be performed at the bedside and in circumstances where using a standard neuronavigation system is impractical. Our system relies on machine-learning-based hand pose estimation that acts as a proxy for optical tool tracking, enabling a 3D-3D preoperative-to-intraoperative registration. Qualitative assessment from clinical users showed that the concept is clinically relevant. Quantitative assessment showed that on average a target registration error (TRE) of 1.3 cm can be achieved. Furthermore, the system is framework-agnostic, meaning that future improvements to hand-tracking frameworks would directly translate to higher accuracy.
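Target registration error (TRE), the accuracy metric quoted above, is conventionally the Euclidean distance between target points after registration and their ground-truth positions, often averaged over several targets. A minimal sketch follows; the point coordinates are hypothetical and only illustrate the computation:

```python
import numpy as np

def target_registration_error(registered, reference) -> float:
    """Mean Euclidean distance (in the inputs' units) between registered
    target points and their ground-truth positions."""
    diffs = np.asarray(registered, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

# Hypothetical targets in cm: two points, offset by 1.0 cm and 1.2 cm.
registered = np.array([[1.0, 0.5, 0.0], [0.0, 1.2, 0.9]])
reference  = np.array([[0.0, 0.5, 0.0], [0.0, 0.0, 0.9]])
tre = target_registration_error(registered, reference)  # (1.0 + 1.2) / 2 = 1.1 cm
```

Targets are chosen away from the landmarks used to drive the registration, so TRE measures accuracy where it matters clinically rather than where the fit was enforced.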
Affiliation(s)
- Fryderyk Victor Kögl
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Étienne Léger
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Nazim Haouchine
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Erickson Torio
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Parikshit Juvekar
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Nassir Navab
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Whiting School of Engineering, Johns Hopkins University, Baltimore, USA
- Tina Kapur
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Steve Pieper
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Isomics, Inc., Cambridge, MA, USA
- Alexandra Golby
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Sarah Frisken
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
8. The intraoperative use of augmented and mixed reality technology to improve surgical outcomes: A systematic review. Int J Med Robot 2022; 18:e2450. DOI: 10.1002/rcs.2450.
9. Boaro A, Moscolo F, Feletti A, Polizzi G, Nunes S, Siddi F, Broekman M, Sala F. Visualization, navigation, augmentation. The ever-changing perspective of the neurosurgeon. Brain & Spine 2022; 2:100926. PMID: 36248169. PMCID: PMC9560703. DOI: 10.1016/j.bas.2022.100926.
Abstract
Introduction: The evolution of neurosurgery coincides with the evolution of visualization and navigation. Augmented reality technologies, with their ability to bring digital information into the real environment, have the potential to provide a new, revolutionary perspective to the neurosurgeon.
Research question: To provide an overview of the historical and technical aspects of visualization and navigation in neurosurgery, and to provide a systematic review of augmented reality (AR) applications in neurosurgery.
Material and methods: We provide an overview of the main historical milestones and technical features of visualization and navigation tools in neurosurgery. We systematically searched the PubMed and Scopus databases for AR applications in neurosurgery and specifically discuss their relationship with current visualization and navigation systems, as well as their main limitations.
Results: The evolution of visualization in neurosurgery is embodied by four magnification systems: surgical loupes, the endoscope, the surgical microscope, and more recently the exoscope, each with independent features in terms of magnification capability, eye-hand coordination, and the possibility of implementing additional functions. In regard to navigation, two independent systems have been developed: frame-based and frameless systems. The most frequent application setting for AR is brain surgery (71.6%), specifically neuro-oncology (36.2%) and microscope-based implementations (29.2%), even though in the majority of cases AR applications provided their own visualization support (66%).
Discussion and conclusions: The evolution of visualization and navigation in neurosurgery has allowed for the development of more precise instruments; the development and clinical validation of AR applications have the potential to be the next breakthrough, making surgeries safer while improving the surgical experience and reducing costs.
Affiliation(s)
- A. Boaro
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- F. Moscolo
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- A. Feletti
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- G.M.V. Polizzi
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- S. Nunes
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy
- F. Siddi
- Department of Neurosurgery, Haaglanden Medical Center, The Hague, Zuid-Holland, the Netherlands
- M.L.D. Broekman
- Department of Neurosurgery, Haaglanden Medical Center, The Hague, Zuid-Holland, the Netherlands
- Department of Neurosurgery, Leiden University Medical Center, Leiden, Zuid-Holland, the Netherlands
- F. Sala
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Italy