1
Chrisochoides N, Liu Y, Drakopoulos F, Kot A, Foteinos P, Tsolakis C, Billias E, Clatz O, Ayache N, Fedorov A, Golby A, Black P, Kikinis R. Comparison of physics-based deformable registration methods for image-guided neurosurgery. Front Digit Health 2023; 5:1283726. PMID: 38144260; PMCID: PMC10740151; DOI: 10.3389/fdgth.2023.1283726.
Abstract
This paper compares three finite element-based methods used in a physics-based non-rigid registration approach and reports on the progress made over the last 15 years. Large brain shifts caused by brain tumor removal affect registration accuracy by creating point and element outliers. A combination of approximation- and geometry-based point and element outlier rejection improves on the rigid registration error by 2.5 mm and meets the real-time constraint (4 min). In addition, the paper raises several questions and presents two open problems for the robust estimation and improvement of registration error in the presence of outliers due to sparse, noisy, and incomplete data. It concludes with preliminary results on leveraging Quantum Computing, a promising new technology for computationally intensive problems like Feature Detection and Block Matching, in addition to the finite element solver; together, these three account for 75% of the computing time in deformable registration.
Affiliation(s)
- Nikos Chrisochoides
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Yixun Liu
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Fotis Drakopoulos
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Andriy Kot
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Panos Foteinos
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Christos Tsolakis
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Emmanuel Billias
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Olivier Clatz
- Inria, French Research Institute for Digital Science, Sophia Antipolis, Valbonne, France
- Nicholas Ayache
- Inria, French Research Institute for Digital Science, Sophia Antipolis, Valbonne, France
- Andrey Fedorov
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA, United States
- Alex Golby
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA, United States
- Image-guided Neurosurgery, Department of Neurosurgery, Harvard Medical School, Boston, MA, United States
- Peter Black
- Image-guided Neurosurgery, Department of Neurosurgery, Harvard Medical School, Boston, MA, United States
- Ron Kikinis
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA, United States
2
Liaropoulos I, Liaropoulos A, Liaropoulos K. Critical Assessment of Cancer Characterization and Margin Evaluation Techniques in Brain Malignancies: From Fast Biopsy to Intraoperative Flow Cytometry. Cancers (Basel) 2023; 15:4843. PMID: 37835537; PMCID: PMC10571534; DOI: 10.3390/cancers15194843.
Abstract
Brain malignancies, given their intricate nature and location, present significant challenges in both diagnosis and treatment. This review critically assesses a range of diagnostic and surgical techniques that have emerged as transformative tools in brain malignancy management. Fast biopsy techniques, prioritizing rapid and minimally invasive tissue sampling, have revolutionized initial diagnostic stages. Intraoperative flow cytometry (iFC) offers real-time cellular analysis during surgeries, ensuring optimal tumor resection. The advent of intraoperative MRI (iMRI) has seamlessly integrated imaging into surgical procedures, providing dynamic feedback and preserving critical brain structures. Additionally, 5-aminolevulinic acid (5-ALA) has enhanced surgical precision by inducing fluorescence in tumor cells, aiding in their complete resection. Several other techniques have been developed in recent years, including intraoperative mass spectrometry methodologies. While each technique boasts unique strengths, they also present potential limitations. As technology and research continue to evolve, these methods are set to undergo further refinement. Collaborative global efforts will be pivotal in driving these advancements, promising a future of improved patient outcomes in brain malignancy management.
3
Bierbrier J, Eskandari M, Di Giovanni DA, Collins DL. Toward Estimating MRI-Ultrasound Registration Error in Image-Guided Neurosurgery. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:999-1015. PMID: 37022005; DOI: 10.1109/tuffc.2023.3239320.
Abstract
Image-guided neurosurgery allows surgeons to view their tools in relation to preoperatively acquired patient images and models. To continue using neuronavigation systems throughout an operation, image registration between preoperative images [typically magnetic resonance imaging (MRI)] and intraoperative images (e.g., ultrasound) is commonly performed to account for brain shift (deformation of the brain during surgery). We implemented a method to estimate MRI-ultrasound registration errors, with the goal of enabling surgeons to quantitatively assess the performance of linear or nonlinear registrations. To the best of our knowledge, this is the first dense error-estimating algorithm applied to multimodal image registrations. The algorithm is based on a previously proposed sliding-window convolutional neural network that operates on a voxelwise basis. To create training data where the true registration error is known, simulated ultrasound images were created from preoperative MRI images and artificially deformed. The model was evaluated on artificially deformed simulated ultrasound data and on real ultrasound data with manually annotated landmark points. The model achieved a mean absolute error (MAE) of 0.977 ± 0.988 mm and a correlation of 0.8 ± 0.062 on the simulated ultrasound data, and an MAE of 2.24 ± 1.89 mm and a correlation of 0.246 on the real ultrasound data. We discuss concrete areas for improving the results on real ultrasound data. Our progress lays the foundation for future developments and, ultimately, implementation in clinical neuronavigation systems.
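As an aside, the two summary statistics this abstract reports at landmark points, MAE and Pearson correlation between predicted and reference registration errors, can be computed as in the generic sketch below (this is not the authors' code; the sample values are illustrative only):

```python
import math

def mae(pred, true):
    """Mean absolute error (e.g., in mm) between predicted and reference errors."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def pearson(pred, true):
    """Pearson correlation coefficient between predicted and reference errors."""
    n = len(pred)
    mp, mt = sum(pred) / n, sum(true) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in true))
    return cov / (sp * st)
```

A low MAE with a high correlation indicates that the estimator not only is close on average but also tracks which registrations are worse than others.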
4
Jeung D, Choi H, Ha HG, Oh SH, Hong J. Intraoperative zoom lens calibration for high magnification surgical microscope. Comput Methods Programs Biomed 2023; 238:107618. PMID: 37247472; DOI: 10.1016/j.cmpb.2023.107618.
Abstract
BACKGROUND AND OBJECTIVES: An augmented reality (AR)-based surgical guidance system is often used with high-magnification zoom lens systems such as a surgical microscope, particularly in neurology or otolaryngology. To superimpose the internal structures of relevant organs on the microscopy image, an accurate calibration process to obtain the camera intrinsic and hand-eye parameters of the microscope is essential. However, conventional calibration methods are unsuitable for surgical microscopes because of their narrow depth of focus at high magnifications. To realize AR-based surgical guidance with a high-magnification surgical microscope, we propose a new calibration method that is applicable to the highest magnification levels as well as low magnifications.
METHODS: The key idea of the proposed method is to find the relationship between the focal length and the hand-eye parameters, which remains constant regardless of the magnification level. Based on this, even if the magnification changes arbitrarily during surgery, the intrinsic and hand-eye parameters can be recalculated quickly and accurately from one or two pictures of the calibration pattern. We also developed a dedicated calibration tool with a prism to capture focused pattern images without interfering with the surgery.
RESULTS: The proposed calibration method ensured an AR error of less than 1 mm at all magnification levels. In addition, the variation of the focal length was within 1% regardless of the magnification level, whereas the corresponding variation with the conventional calibration method exceeded 20% at high magnification levels.
CONCLUSIONS: The comparative study showed that the proposed method has outstanding accuracy and reproducibility for a high-magnification surgical microscope. The method is applicable to various endoscope or microscope systems with zoom lenses.
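For readers unfamiliar with the intrinsic parameters being recalibrated here, the sketch below shows how the focal length enters the standard pinhole projection; changing the zoom magnification effectively changes this focal length. The point, focal length, and principal-point values are illustrative assumptions, not values from the paper:

```python
def project(point_cam, f, cx, cy):
    """Pinhole projection of a camera-frame point (x, y, z) to pixel coordinates (u, v).
    f is the focal length in pixels; (cx, cy) is the principal point."""
    x, y, z = point_cam
    return (f * x / z + cx, f * y / z + cy)

# Doubling f (roughly what increasing zoom magnification does) doubles the
# offset of the projection from the principal point.
u1, v1 = project((0.1, 0.2, 1.0), 800.0, 640.0, 512.0)
u2, v2 = project((0.1, 0.2, 1.0), 1600.0, 640.0, 512.0)
```

Because f varies with magnification, an AR overlay computed with a stale f at high zoom lands millimeters off target, which is why the paper recalibrates intrinsics when the magnification changes.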
Affiliation(s)
- Deokgi Jeung
- Department of Robotics and Mechatronics Engineering, DGIST, 333 Techno Jungang-Daero, Daegu 42988, Republic of Korea
- Ho-Gun Ha
- Division of Intelligent Robot, DGIST, Daegu, Republic of Korea
- Seung-Ha Oh
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University College of Medicine, Seoul, Republic of Korea
- Sensory Organ Research Institute, Seoul National University Medical Research Center, Seoul, Republic of Korea
- Jaesung Hong
- Department of Robotics and Mechatronics Engineering, DGIST, 333 Techno Jungang-Daero, Daegu 42988, Republic of Korea
5
Kwon KH, Kim MY. Robust H-K Curvature Map Matching for Patient-to-CT Registration in Neurosurgical Navigation Systems. Sensors (Basel) 2023; 23:4903. PMID: 37430817; DOI: 10.3390/s23104903.
Abstract
Image-to-patient registration is a coordinate-system matching process between real patients and medical images that enables active use of medical images such as computed tomography (CT) during surgery. This paper deals mainly with a markerless method utilizing scan data of patients and 3D data from CT images. The 3D surface data of the patient are registered to the CT data using computer-based optimization methods such as the iterative closest point (ICP) algorithm. However, if a proper initial location is not set up, the conventional ICP algorithm takes a long time to converge and suffers from the local minimum problem. We propose an automatic and robust 3D data registration method that can accurately find a proper initial location for the ICP algorithm using curvature matching. The proposed method finds and extracts the matching area for 3D registration by converting the 3D CT data and the 3D scan data to 2D curvature images and performing curvature matching between them. Curvature features are robust to translation, rotation, and even some deformation. The proposed image-to-patient registration is implemented as a precise 3D registration of the extracted partial 3D CT data and the patient's scan data using the ICP algorithm.
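The claim that curvature features are invariant to translation and rotation can be illustrated with the discrete (Menger) curvature of three neighboring surface points: a rigid motion of the points leaves the curvature unchanged. This is a generic 2-D sketch, not the paper's H-K curvature map implementation:

```python
import math

def menger_curvature(p, q, r):
    """Curvature (1/radius) of the circle through three 2-D points."""
    # Twice the unsigned area of triangle pqr.
    area2 = abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))
    a, b, c = math.dist(p, q), math.dist(q, r), math.dist(p, r)
    if a * b * c == 0.0:
        return 0.0  # degenerate: coincident points
    return 2.0 * area2 / (a * b * c)

def rigid(pt, theta, tx, ty):
    """Apply a 2-D rotation by theta followed by a translation (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * pt[0] - s * pt[1] + tx, s * pt[0] + c * pt[1] + ty)

# Three points on a unit circle have curvature 1, before and after a rigid move.
pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
k0 = menger_curvature(*pts)
k1 = menger_curvature(*(rigid(p, 0.7, 3.0, -2.0) for p in pts))
```

This invariance is what lets curvature images of the CT surface and the patient scan be matched before any alignment is known, supplying the initial pose that ICP needs to avoid local minima.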
Affiliation(s)
- Ki Hoon Kwon
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Min Young Kim
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Research Center for Neurosurgical Robotic System, Kyungpook National University, Daegu 41566, Republic of Korea
6
Mishra R, Narayanan MK, Umana GE, Montemurro N, Chaurasia B, Deora H. Virtual Reality in Neurosurgery: Beyond Neurosurgical Planning. Int J Environ Res Public Health 2022; 19:1719. PMID: 35162742; PMCID: PMC8835688; DOI: 10.3390/ijerph19031719.
Abstract
Background: While several publications have focused on the intuitive role of augmented reality (AR) and virtual reality (VR) in neurosurgical planning, the aim of this review was to explore other avenues where these technologies have significant utility and applicability. Methods: This review was conducted by searching PubMed, PubMed Central, Google Scholar, the Scopus database, the Web of Science Core Collection database, and the SciELO citation index, from 1989–2021. An example of a search strategy used in PubMed Central is: "Virtual reality" [All Fields] AND ("neurosurgical procedures" [MeSH Terms] OR ("neurosurgical" [All Fields] AND "procedures" [All Fields]) OR "neurosurgical procedures" [All Fields] OR "neurosurgery" [All Fields] OR "neurosurgery" [MeSH Terms]). Using this search strategy, we identified 487 (PubMed), 1097 (PubMed Central), and 275 (Web of Science Core Collection) citations. Results: The reviewed articles showed numerous applications of VR/AR in neurosurgery. These applications included their utility as a supplement and augment for neuronavigation in diagnosis for complex vascular interventions, spine deformity correction, resident training, procedural practice, pain management, and rehabilitation of neurosurgical patients. These technologies have also shown promise in other areas of neurosurgery, such as consent taking, training of ancillary personnel, and improving patient comfort during procedures, as well as in training neurosurgeons for other advancements in the field, such as robotic neurosurgery. Conclusions: We present the first review of the immense possibilities of VR in neurosurgery beyond merely planning for surgical procedures. The importance of VR and AR, especially for "social distancing" in neurosurgery training, for economically disadvantaged sections, for prevention of medicolegal claims, and in pain management and rehabilitation, is promising and warrants further research.
Affiliation(s)
- Rakesh Mishra
- Department of Neurosurgery, Institute of Medical Sciences, Banaras Hindu University, Varanasi 221005, India
- Giuseppe E. Umana
- Trauma and Gamma-Knife Center, Department of Neurosurgery, Cannizzaro Hospital, 95100 Catania, Italy
- Nicola Montemurro
- Department of Neurosurgery, Azienda Ospedaliera Universitaria Pisana (AOUP), University of Pisa, 56100 Pisa, Italy
- Bipin Chaurasia
- Department of Neurosurgery, Bhawani Hospital, Birgunj 44300, Nepal
- Harsh Deora
- Department of Neurosurgery, National Institute of Mental Health and Neurosciences, Bengaluru 560029, India
7
Li Y, Konuthula N, Humphreys IM, Moe K, Hannaford B, Bly R. Real-time virtual intraoperative CT in endoscopic sinus surgery. Int J Comput Assist Radiol Surg 2021; 17:249-260. PMID: 34888754; DOI: 10.1007/s11548-021-02536-5.
Abstract
PURPOSE: Endoscopic sinus surgery (ESS) is typically guided by preoperative computed tomography (CT), which increasingly diverges from actual patient anatomy as the surgery progresses. Studies have reported that the revision surgery rate in ESS ranges between 28% and 47%. This paper presents a method that can update the preoperative CT in real time to improve surgical completeness in ESS.
APPROACH: The work presents and compares three novel methods that use instrument motion data and anatomical structures to predict surgical modifications in real time. The methods use learning techniques, such as nonparametric filtering and Gaussian process regression, to correlate surgical modifications with instrument tip positions, tip trajectories, and instrument shapes. Preoperative CT image sets are updated with the modification predictions to serve as a virtual intraoperative CT.
RESULTS: The three methods were compared in eight ESS cadaver cases, performed by five surgeons and covering the following representative ESS operations: maxillary antrostomy, uncinectomy, anterior and posterior ethmoidectomy, and sphenoidotomy. Experimental results showed clinically acceptable accuracy, with Dice similarity coefficients > 86%, F-scores > 92%, and precision > 89.91% in the surgical completeness evaluation. Among the three methods, the tip trajectory-based estimator had the highest precision, 96.87%.
CONCLUSIONS: This work demonstrated that virtually modified intraoperative CT scans improve the consistency between the actual surgical scene and the reference model, and could lead to improved surgical completeness in ESS. Compared to actual intraoperative CT scans, the proposed method has no impact on existing surgical protocols, does not require extra hardware, does not expose the patient to radiation, and does not lengthen time under anesthesia.
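The Dice similarity coefficient used in the completeness evaluation compares two voxel label sets (for example, the predicted modification region and the reference segmentation). A minimal sketch, not the paper's evaluation code:

```python
def dice(a, b):
    """Dice similarity coefficient between two sets of voxel indices.
    Returns 1.0 for identical sets and 0.0 for disjoint non-empty sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # both empty: treat as perfect agreement by convention
    return 2.0 * len(a & b) / (len(a) + len(b))
```

For two three-voxel segmentations sharing two voxels, this returns 2·2/6 ≈ 0.667; the > 86% values reported above indicate substantially tighter overlap.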
Affiliation(s)
- Yangming Li
- RoCALab, Rochester Institute of Technology, Rochester, 14623, USA
- Neeraja Konuthula
- Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, 98195, USA
- Ian M Humphreys
- Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, 98195, USA
- Kris Moe
- Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, 98195, USA
- Blake Hannaford
- BioRobotics Lab, University of Washington, Seattle, 98195, USA
- Randall Bly
- Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, 98195, USA
- Seattle Children's Hospital, Seattle, 98105, USA
8
Drakopoulos F, Tsolakis C, Angelopoulos A, Liu Y, Yao C, Kavazidi KR, Foroglou N, Fedorov A, Frisken S, Kikinis R, Golby A, Chrisochoides N. Adaptive Physics-Based Non-Rigid Registration for Immersive Image-Guided Neuronavigation Systems. Front Digit Health 2021; 2:613608. PMID: 34713074; PMCID: PMC8521897; DOI: 10.3389/fdgth.2020.613608.
Abstract
Objective: In image-guided neurosurgery, co-registered preoperative anatomical, functional, and diffusion tensor imaging can be used to facilitate a safe resection of brain tumors in eloquent areas of the brain. However, the brain deforms during surgery, particularly in the presence of tumor resection. Non-Rigid Registration (NRR) of the preoperative image data can be used to create a registered image that captures the deformation in the intraoperative image while maintaining the quality of the preoperative image. Using clinical data, this paper reports the results of a comparison of the accuracy and performance of several non-rigid registration methods for handling brain deformation. A new adaptive method that automatically removes mesh elements in the area of the resected tumor, thereby handling deformation in the presence of resection, is presented. To improve the user experience, we also present a new way of using mixed reality with ultrasound, MRI, and CT.
Materials and methods: This study focuses on 30 glioma surgeries performed at two different hospitals, many of which involved the resection of significant tumor volumes. An Adaptive Physics-Based Non-Rigid Registration method (A-PBNRR) registers preoperative and intraoperative MRI for each patient. The results are compared with three other readily available registration methods: a rigid registration implemented in 3D Slicer v4.4.0; a B-Spline non-rigid registration implemented in 3D Slicer v4.4.0; and PBNRR implemented in ITK v4.7.0, upon which A-PBNRR was based. Three measures were employed to facilitate a comprehensive evaluation of the registration accuracy: (i) visual assessment, (ii) a Hausdorff Distance-based metric, and (iii) a landmark-based approach using anatomical points identified by a neurosurgeon.
Results: A-PBNRR with multi-tissue mesh adaptation improved the accuracy of deformable registration more than fivefold compared to rigid and traditional physics-based non-rigid registration, and fourfold compared to the B-Spline interpolation methods that are part of ITK and 3D Slicer. Performance analysis showed that A-PBNRR could be applied, on average, in under 2 minutes, achieving desirable speed for use in a clinical setting.
Conclusions: The A-PBNRR method performed significantly better than other readily available registration methods at modeling deformation in the presence of resection. Both the registration accuracy and performance proved sufficient to be of clinical value in the operating room. A-PBNRR, coupled with the mixed reality system, presents a powerful and affordable solution compared to current neuronavigation systems.
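The Hausdorff Distance-based metric in (ii) measures the worst-case disagreement between two point sets. The brute-force sketch below illustrates the definition on small 2-D sets; the paper's metric operates on segmented 3-D surfaces and is not reproduced here:

```python
import math

def directed_hausdorff(a, b):
    """Worst-case distance from a point in set a to its nearest neighbor in set b."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

Unlike a mean surface distance, the Hausdorff distance is dominated by the single worst-matched point, which makes it a stringent check on registration quality near the resection margin.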
Affiliation(s)
- Fotis Drakopoulos
- Center for Real-Time Computing, Old Dominion University, Norfolk, VA, United States
- Christos Tsolakis
- Center for Real-Time Computing, Old Dominion University, Norfolk, VA, United States
- Department of Computer Science, Old Dominion University, Norfolk, VA, United States
- Angelos Angelopoulos
- Center for Real-Time Computing, Old Dominion University, Norfolk, VA, United States
- Department of Computer Science, Old Dominion University, Norfolk, VA, United States
- Yixun Liu
- Center for Real-Time Computing, Old Dominion University, Norfolk, VA, United States
- Chengjun Yao
- Department of Neurosurgery, Huashan Hospital, Shanghai, China
- Nikolaos Foroglou
- Department of Neurosurgery, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Andrey Fedorov
- Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, United States
- Sarah Frisken
- Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, United States
- Ron Kikinis
- Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, United States
- Alexandra Golby
- Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, United States
- Department of Neurosurgery, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, United States
- Nikos Chrisochoides
- Center for Real-Time Computing, Old Dominion University, Norfolk, VA, United States
- Department of Computer Science, Old Dominion University, Norfolk, VA, United States
9
Raghavapudi H, Singroul P, Kohila V. Brain Tumor Causes, Symptoms, Diagnosis and Radiotherapy Treatment. Curr Med Imaging 2021; 17:931-942. PMID: 33573575; DOI: 10.2174/1573405617666210126160206.
Abstract
The strategy used for the treatment of a given brain cancer is critical in determining the post-treatment effects and survival. Oncological diagnosis of a tumor evaluates a range of parameters, such as shape, size, volume, location and neurological complexity, that define the symptomatic severity. The evaluation determines a suitable treatment approach, chosen from a range of options such as surgery, chemotherapy, hormone therapy, radiation therapy and other targeted therapies. Often, a combination of such therapies is applied to achieve superior results. Radiotherapy serves as a better treatment strategy because of a higher survival rate; it offers the flexibility of synergy with other treatment strategies and fewer side effects on organs at risk. This review presents a radiobiological perspective on the treatment of brain tumors. The causes, symptoms, diagnosis, treatment, post-treatment effects and the framework involved in tumor elimination are summarized.
Affiliation(s)
- Haarika Raghavapudi
- Department of Biotechnology, National Institute of Technology Warangal, Warangal 506004, Telangana, India
- Pankaj Singroul
- Department of Biotechnology, National Institute of Technology Warangal, Warangal 506004, Telangana, India
- V Kohila
- Department of Biotechnology, National Institute of Technology Warangal, Warangal 506004, Telangana, India
10
Assessment of Position Repeatability Error in an Electromagnetic Tracking System for Surgical Navigation. Sensors (Basel) 2020; 20:961. PMID: 32053941; PMCID: PMC7070586; DOI: 10.3390/s20040961.
Abstract
In this paper, we present a study of the repeatability of an innovative electromagnetic tracking system (EMTS) for surgical navigation, developed to advance beyond current commercial systems by allowing the magnetic field generator to be placed far from the operating table. Previous studies led to the development of a preliminary EMTS prototype. Several hardware improvements are described, which result in noise reduction in both signal generation and the measurement process, as shown by experimental tests. The analysis of the experimental results highlighted the presence of drift in the voltage components, whose effect has been quantified and related to the variation of the sensor position. Repeatability of the sensor position measurement is evaluated by propagating the voltage repeatability error, and the results are compared with the performance of the Aurora system (which represents the state of the art in EMTS for surgical navigation), showing a repeatability error about ten times lower. The proposed improvements aim to overcome the limited operating distance between the field generator and electromagnetic (EM) sensors in commercial EM tracking systems for surgical applications, and appear to provide a non-negligible technological advantage.
11
Meulstee JW, Nijsink J, Schreurs R, Verhamme LM, Xi T, Delye HHK, Borstlap WA, Maal TJJ. Toward Holographic-Guided Surgery. Surg Innov 2018; 26:86-94. DOI: 10.1177/1553350618799552.
Abstract
The implementation of augmented reality (AR) in image-guided surgery (IGS) can improve surgical interventions by presenting the image data directly on the patient at the correct position and in the actual orientation. This approach can resolve the switching focus problem, which occurs in conventional IGS systems when the surgeon has to look away from the operation field to consult the image data on a 2-dimensional screen. The Microsoft HoloLens, a head-mounted AR display, was combined with an optical navigation system to create an AR-based IGS system. Experiments were performed on a phantom model to determine the accuracy of the complete system and to evaluate the effect of adding AR. The results demonstrated a mean Euclidean distance of 2.3 mm with a maximum error of 3.5 mm for the complete system. Adding AR visualization to a conventional system increased the mean error by 1.6 mm. The introduction of AR in IGS was promising. The presented system provided a solution for the switching focus problem and created a more intuitive guidance system. With a further reduction in the error and more research to optimize the visualization, many surgical applications could benefit from the advantages of AR guidance.
Affiliation(s)
- Johan Nijsink
- Radboud University Medical Center, Nijmegen, Netherlands
- Ruud Schreurs
- Radboud University Medical Center, Nijmegen, Netherlands
- Academic Medical Center, Amsterdam, Netherlands
- Tong Xi
- Radboud University Medical Center, Nijmegen, Netherlands
12
Kim S, Kazanzides P. Fiducial-based registration with a touchable region model. Int J Comput Assist Radiol Surg 2016; 12:277-289. PMID: 27581335; DOI: 10.1007/s11548-016-1477-1.
Abstract
PURPOSE: Image-guided surgery requires registration between an image coordinate system and an intraoperative coordinate system that is typically referenced to a tracking device. In fiducial-based registration methods, this is achieved by localizing points (fiducials) in each coordinate system. Often, both localizations are performed manually, first by picking a fiducial point in the image and then by using a hand-held tracked pointer to physically touch the corresponding fiducial on the patient. These manual procedures introduce localization error that is user-dependent and can significantly decrease registration accuracy. Thus, there is a need for a registration method that is tolerant of imprecise fiducial localization in the preoperative and intraoperative phases.
METHODS: We propose the iterative closest touchable point (ICTP) registration framework, which uses model-based localization and a touchable region model. This method consists of three stages: (1) fiducial marker localization in image space, using a fiducial marker model; (2) initial registration with paired-point registration; and (3) fine registration based on the iterative closest point method.
RESULTS: We performed phantom experiments with a fiducial marker design that is commonly used in neurosurgery. The results demonstrate that ICTP can improve accuracy compared to the standard paired-point registration method that is widely used in surgical navigation and surgical robot systems, especially in cases where the surgeon introduces large localization errors.
CONCLUSIONS: The results demonstrate that the proposed method can reduce the effect of the surgeon's localization performance on the accuracy of registration, thereby producing more consistent and less user-dependent registration outcomes.
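Stage (2), paired-point registration, has a closed-form least-squares solution. The 2-D version below illustrates the idea; the actual method works in 3-D, typically via Horn's quaternion method or an SVD, and this sketch is not the authors' code:

```python
import math

def paired_point_2d(src, dst):
    """Least-squares rigid transform (theta, tx, ty) mapping src fiducials onto dst.
    src and dst are equal-length lists of corresponding (x, y) points."""
    n = len(src)
    # Centroids of each point set.
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0  # cross-covariance of the centered point pairs
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        sxx += xs * xd + ys * yd  # "dot" accumulator
        sxy += xs * yd - ys * xd  # "cross" accumulator
    theta = math.atan2(sxy, sxx)  # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    # Translation carries the rotated source centroid onto the target centroid.
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, tx, ty
```

Because this closed-form fit minimizes the sum of squared fiducial distances, any manual localization error propagates directly into the transform, which is the weakness the ICTP fine-registration stage is designed to mitigate.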
Affiliation(s)
- Sungmin Kim
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, 21218, USA
- Peter Kazanzides
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, 21218, USA
|
13
|
Optical coordinate tracking system using afocal optics for image-guided surgery. Int J Comput Assist Radiol Surg 2014; 10:231-41. [PMID: 24898406 DOI: 10.1007/s11548-014-1082-0]
Abstract
PURPOSE Image-guided surgery using medical robots supports surgeons by providing critical real-time feedback information, such as surgical instrument tracking, patient-specific models, and the use of surgery robots. An image-guided surgery system based on afocal optics was developed to overcome the problems associated with conventional optical tracking systems. METHOD An optical tracking system was developed that utilizes afocal optics. Instead of using geometrically specified marker spheres as tracking targets, the proposed system uses a marker with a lens and a micro-engraved data-coded pattern. A position and orientation-tracking algorithm was developed to utilize the observed afocal images of the marker patterns. The marker used in this tracking system can be manufactured in a smaller size than traditional optical tracker markers, and the accuracy of the proposed tracking system has significant potential for improvement due to its focused and highly magnified image. The system was tested in vitro on an optical bench with position and orientation measurement experiments using a commercial optical tracker, Polaris Vicra (NDI Corp.), for comparison. RESULTS The afocal optical system provided accuracy in position and orientation that was equal to or better than that of a commercial optical tracker system, and provided a high degree of consistency during in vitro testing. The position error was 21 μm, and the orientation error was 0.093°. CONCLUSION An afocal optical tracker is feasible and potentially advantageous for surgical navigation, as it is expected to have fewer occlusions and provide greater efficiency for coordinate matching and tracking of patient-specific models, surgical instruments, and surgery robots. This promising new system requires in vivo testing.
14
Linte CA, Davenport KP, Cleary K, Peters C, Vosburgh KG, Navab N, Edwards PE, Jannin P, Peters TM, Holmes DR, Robb RA. On mixed reality environments for minimally invasive therapy guidance: systems architecture, successes and challenges in their implementation from laboratory to clinic. Comput Med Imaging Graph 2013; 37:83-97. [PMID: 23632059 PMCID: PMC3796657 DOI: 10.1016/j.compmedimag.2012.12.002]
Abstract
Mixed reality environments for medical applications have been explored and developed over the past three decades in an effort to enhance the clinician's view of anatomy and facilitate the performance of minimally invasive procedures. These environments must faithfully represent the real surgical field and require seamless integration of pre- and intra-operative imaging, surgical instrument tracking, and display technology into a common framework centered around and registered to the patient. However, in spite of their reported benefits, few mixed reality environments have been successfully translated into clinical use. Several challenges that contribute to the difficulty in integrating such environments into clinical practice are presented here and discussed in terms of both technical and clinical limitations. This article should raise awareness among both developers and end-users toward facilitating a greater application of such environments in the surgical practice of the future.
15
Shaikhouni A, Elder JB. Computers and neurosurgery. World Neurosurg 2012; 78:392-8. [PMID: 22985531 DOI: 10.1016/j.wneu.2012.08.020]
Abstract
At the turn of the twentieth century, the only computational device used in neurosurgical procedures was the brain of the surgeon. Today, most neurosurgical procedures rely at least in part on the use of a computer to help perform surgeries accurately and safely. The techniques that revolutionized neurosurgery were mostly developed after the 1950s. Just before that era, the transistor was invented in the late 1940s, and the integrated circuit was invented in the late 1950s. During this time, the first automated, programmable computational machines were introduced. The rapid progress in the field of neurosurgery not only occurred hand in hand with the development of modern computers, but one also can state that modern neurosurgery would not exist without computers. The focus of this article is the impact modern computers have had on the practice of neurosurgery. Neuroimaging, neuronavigation, and neuromodulation are examples of tools in the armamentarium of the modern neurosurgeon that owe each step in their evolution to progress made in computer technology. Advances in computer technology central to innovations in these fields are highlighted, with particular attention to neuroimaging. Developments over the last 10 years in areas of sensors and robotics that promise to transform the practice of neurosurgery further are discussed. Potential impacts of advances in computers related to neurosurgery in developing countries and underserved regions are also discussed. As this article illustrates, the computer, with its underlying and related technologies, is central to advances in neurosurgery over the last half century.
Affiliation(s)
- Ammar Shaikhouni
- Department of Neurological Surgery, Wexner Medical Center, Ohio State University, Columbus, OH, USA

16
Nett BE, Aagaard-Kienitz B, Serarslan Y, Başkaya MK, Chen GH. A simple technique for interventional tool placement combining fluoroscopy with interventional computed tomography on a C-arm system. Neurosurgery 2010; 67:ons49-56; discussion ons56-7. [PMID: 20679948 DOI: 10.1227/01.neu.0000382976.18891.50]
Abstract
BACKGROUND Flat-panel cone-beam computed tomography (FP-CBCT) has recently been introduced as a clinical feature in neuroangiography radiographic C-arm systems. OBJECTIVE To introduce a method of positioning a surgical tool such as a needle or ablation probe within a target specified by intraoperative FP-CBCT scanning. METHODS Two human cadaver and 2 porcine cadaver heads were injected with a mixture of silicone and contrast agent to simulate a contrast-enhanced tumor. Preoperative imaging was performed using a standard 1.5-T magnetic resonance imaging scanner. Intraoperative imaging was used to define the needle trajectory on a GE Innova 4100 flat panel-based neuroangiography C-arm system. RESULTS Using a combination of FP-CBCT and fluoroscopy, a needle was successfully positioned within each of the simulated contrast-enhanced tumors, as verified by subsequent FP-CBCT scans. CONCLUSIONS This proof-of-concept study demonstrates the potential utility of combining FP-CBCT scanning with fluoroscopy to position surgical tools when stereotactic devices and image-guided surgery systems are not available. However, further work is required to fully characterize the precision and accuracy of the method in a variety of realistic surgical sites.
Affiliation(s)
- Brian E Nett
- Department of Medical Physics, University of Wisconsin-Madison, School of Medicine and Public Health, Madison, Wisconsin 53705, USA

17
Nabavi A, Mamisch CT, Gering DT, Kacher DF, Pergolizzi RS, Wells WM, Kikinis R, McL Black P, Jolesz FA. Image-guided therapy and intraoperative MRI in neurosurgery. Minim Invasive Ther Allied Technol 2000; 9:277-86. [DOI: 10.1080/13645700009169658]
18
Merck D, Tracton G, Saboo R, Levy J, Chaney E, Pizer S, Joshi S. Training models of anatomic shape variability. Med Phys 2008; 35:3584-96. [PMID: 18777919 DOI: 10.1118/1.2940188]
Abstract
Learning probability distributions of the shape of anatomic structures requires fitting shape representations to human expert segmentations from training sets of medical images. The quality of statistical segmentation and registration methods is directly related to the quality of this initial shape fitting, yet the subject is largely overlooked or described in an ad hoc way. This article presents a set of general principles to guide such training. Our novel method is to jointly estimate both the best geometric model for any given image and the shape distribution for the entire population of training images by iteratively relaxing purely geometric constraints in favor of the converging shape probabilities as the fitted objects converge to their target segmentations. The geometric constraints are carefully crafted both to obtain legal, nonself-interpenetrating shapes and to impose the model-to-model correspondences required for useful statistical analysis. The paper closes with example applications of the method to synthetic and real patient CT image sets, including same patient male pelvis and head and neck images, and cross patient kidney and brain images. Finally, we outline how this shape training serves as the basis for our approach to IGRT/ART.
Affiliation(s)
- Derek Merck
- Medical Image Display & Analysis Group, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA.

19

20
Cao A, Thompson RC, Dumpuri P, Dawant BM, Galloway RL, Ding S, Miga MI. Laser range scanning for image-guided neurosurgery: investigation of image-to-physical space registrations. Med Phys 2008; 35:1593-605. [PMID: 18491553 DOI: 10.1118/1.2870216]
Abstract
In this article a comprehensive set of registration methods is utilized to provide image-to-physical space registration for image-guided neurosurgery in a clinical study. Central to all methods is the use of textured point clouds as provided by laser range scanning technology. The objective is to perform a systematic comparison of registration methods that include both extracranial methods (skin marker point-based registration (PBR) and face-based surface registration) and intracranial methods (feature PBR, cortical vessel-contour registration, a combined geometry/intensity surface registration method, and a constrained form of that method to improve robustness). The platform facilitates the selection of discrete soft-tissue landmarks that appear on the patient's intraoperative cortical surface and the preoperative gadolinium-enhanced magnetic resonance (MR) image volume, i.e., true corresponding novel targets. In an 11 patient study, data were taken to allow statistical comparison among registration methods within the context of registration error. The results indicate that intraoperative face-based surface registration is statistically equivalent to traditional skin marker registration. The four intracranial registration methods were investigated and the results demonstrated a target registration error of 1.6 +/- 0.5 mm, 1.7 +/- 0.5 mm, 3.9 +/- 3.4 mm, and 2.0 +/- 0.9 mm, for feature PBR, cortical vessel-contour registration, unconstrained geometric/intensity registration, and constrained geometric/intensity registration, respectively. When analyzing the results on a per case basis, the constrained geometric/intensity registration performed best, followed by feature PBR, and finally cortical vessel-contour registration. Interestingly, the best target registration errors are similar to targeting errors reported using bone-implanted markers within the context of rigid targets. The experience in this study as with others is that brain shift can compromise extracranial registration methods from the earliest stages. Based on the results reported here, organ-based approaches to registration would improve this, especially for shallow lesions.
Affiliation(s)
- Aize Cao
- Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee 37235, USA

21
Phillips JM, Liu R, Tomasi C. Outlier Robust ICP for Minimizing Fractional RMSD. Proceedings of the Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM) 2007. [DOI: 10.1109/3dim.2007.39]
22
Vogt S, Khamene A, Sauer F. Reality Augmentation for Medical Procedures: System Architecture, Single Camera Marker Tracking, and System Evaluation. Int J Comput Vis 2006. [DOI: 10.1007/s11263-006-7938-1]
23
Sinha TK, Miga MI, Cash DM, Weil RJ. Intraoperative cortical surface characterization using laser range scanning: preliminary results. Neurosurgery 2006; 59:ONS368-76; discussion ONS376-7. [PMID: 17041506 PMCID: PMC3819165 DOI: 10.1227/01.neu.0000222665.40301.d2]
Abstract
OBJECTIVE To present a novel methodology that uses a laser range scanner (LRS) capable of generating textured (intensity-encoded) surface descriptions of the brain surface for use with image-to-patient registration and improved cortical feature recognition during intraoperative neurosurgical navigation. METHODS An LRS device was used to acquire cortical surface descriptions of eight patients undergoing neurosurgery for a variety of clinical presentations. Textured surface descriptions were generated from these intraoperative acquisitions for each patient. Corresponding textured surfaces were also generated from each patient's preoperative magnetic resonance tomograms. Each textured surface pair (LRS and magnetic resonance tomogram) was registered using only cortical surface information. Novel visualization of the combined surfaces allowed for registration assessment based on quantitative cortical feature alignment. RESULTS Successful textured LRS surface acquisition and generation was performed on all eight patients. The data acquired by the LRS accurately presented the intraoperative surface of the cortex and the associated features within the surgical field-of-view. Registration results are presented as overlays of the intraoperative data with respect to the preoperative data and quantified by comparing mean distances between cortical features on the magnetic resonance tomogram and LRS surfaces after registration. The overlays demonstrated that accurate registration can be provided between the preoperative and intraoperative data and emphasized a potential enhancement to cortical feature recognition within the operating room environment. Using the best registration result from each clinical case, the mean feature alignment error is 1.7 +/- 0.8 mm over all cases. CONCLUSION This study demonstrates clinical deployment of an LRS capable of generating textured surfaces of the surgical field of view. Data from the LRS were registered accurately to the corresponding preoperative data. Visual inspection of the registration results was provided by overlays that put the intraoperative data within the perspective of the whole brain's surface. These visuals can be used to more readily assess the fidelity of image-to-patient registration, as well as to enhance recognition of cortical features for assistance in comparing the neurotopography between magnetic resonance image volume and physical patient. In addition, the feature-rich data presented here provide considerable motivation for using LRS scanning to measure deformation during surgery.
Affiliation(s)
- Tuhin K Sinha
- Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee 37235, USA

24
Das M, Sauer F, Schoepf UJ, Khamene A, Vogt SK, Schaller S, Kikinis R, vanSonnenberg E, Silverman SG. Augmented Reality Visualization for CT-guided Interventions: System Description, Feasibility, and Initial Evaluation in an Abdominal Phantom. Radiology 2006; 240:230-5. [PMID: 16720866 DOI: 10.1148/radiol.2401040018]
Abstract
The purpose of this study was to evaluate the feasibility and performance of an augmented reality (AR) visualization prototype for virtual computed tomography (CT)-guided interventional procedures in a multimodality abdominal phantom. With the aid of AR guidance, three radiologists performed 30 attempts at targeting simulated liver lesions of different sizes (range, 5-15 mm) with a biopsy needle. The position of the needle tip relative to the lesion was verified by using ultrasonography and CT. With AR guidance, lesions were successfully targeted with the first needle pass in all cases. On the basis of these results, AR visualization for CT-guided intervention appears feasible and allows intuitive and accurate lesion targeting in a phantom. SUPPLEMENTAL MATERIAL radiology.rsnajnls.org/cgi/content/full/2401040018/DC1
Affiliation(s)
- Marco Das
- Dept of Radiology, Brigham and Women's Hosp, Harvard Medical School, Boston, Mass, USA

25
Clatz O, Delingette H, Talos IF, Golby AJ, Kikinis R, Jolesz FA, Ayache N, Warfield SK. Robust nonrigid registration to capture brain shift from intraoperative MRI. IEEE Trans Med Imaging 2005; 24:1417-27. [PMID: 16279079 PMCID: PMC2042023 DOI: 10.1109/tmi.2005.856734]
Abstract
We present a new algorithm to register 3-D preoperative magnetic resonance (MR) images to intraoperative MR images of the brain which have undergone brain shift. This algorithm relies on a robust estimation of the deformation from a sparse noisy set of measured displacements. We propose a new framework to compute the displacement field in an iterative process, allowing the solution to gradually move from an approximation formulation (minimizing the sum of a regularization term and a data error term) to an interpolation formulation (least square minimization of the data error term). An outlier rejection step is introduced in this gradual registration process using a weighted least trimmed squares approach, aiming at improving the robustness of the algorithm. We use a patient-specific model discretized with the finite element method in order to ensure a realistic mechanical behavior of the brain tissue. To meet the clinical time constraint, we parallelized the slowest step of the algorithm so that we can perform a full 3-D image registration in 35 s (including the image update time) on a heterogeneous cluster of 15 personal computers. The algorithm has been tested on six cases of brain tumor resection, presenting a brain shift of up to 14 mm. The results show a good ability to recover large displacements, and a limited decrease of accuracy near the tumor resection cavity.
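The outlier-rejection idea in this entry (a least-trimmed-squares step inside an iterative fit) can be illustrated in isolation. The sketch below is a toy stand-in, not the published algorithm: an affine displacement model replaces the patient-specific finite element model, the trimming is unweighted, and `trimmed_affine_fit` is a hypothetical name:

```python
import numpy as np

def trimmed_affine_fit(X, U, keep=0.75, iters=10):
    """Least-trimmed-squares fit of an affine displacement field u(x) = A x + b.

    X: (N, 3) block-match source positions; U: (N, 3) measured displacements.
    At each iteration, only the `keep` fraction of matches with the smallest
    residuals contributes to the fit, so gross outlier matches are rejected.
    """
    N = X.shape[0]
    Xh = np.hstack([X, np.ones((N, 1))])          # homogeneous coordinates
    inliers = np.arange(N)                        # start from all matches
    for _ in range(iters):
        # ordinary least squares on the current inlier set
        W, *_ = np.linalg.lstsq(Xh[inliers], U[inliers], rcond=None)
        r = np.linalg.norm(Xh @ W - U, axis=1)    # residual per match
        inliers = np.argsort(r)[: int(keep * N)]  # keep the best matches
    return W, inliers
```

The published method additionally weights the retained matches and gradually relaxes a mechanical regularization term, moving from approximation to interpolation as the trimmed set stabilizes.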
Affiliation(s)
- Olivier Clatz
- Epidaure Research Project, INRIA Sophia Antipolis, 06902 Sophia Antipolis Cedex, France.

26
Fennessy FM, Tempany CM. MRI-guided focused ultrasound surgery of uterine leiomyomas. Acad Radiol 2005; 12:1158-66. [PMID: 16099686 DOI: 10.1016/j.acra.2005.05.018]
Abstract
Uterine fibroids are the most common pelvic tumors in women and are a significant cause of morbidity for women of reproductive age. Today, there are a variety of less invasive alternatives available to hysterectomy, such as myomectomy, hormonal therapy, uterine artery embolization, and more recently magnetic resonance-guided focused ultrasound surgery (MRgFUS). With this technique, ultrasound waves are focused through intact skin of the anterior abdominal wall resulting in localized thermal tissue ablation, monitored by online MR temperature control. By using an effective combination of image guidance and energy delivery, MRgFUS therefore allows for preservation of uterine function while obviating the need for an invasive procedure or surgery.
Affiliation(s)
- Fiona M Fennessy
- Section of Abdominal Imaging and Intervention, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA.

27
Wacker FK, Vogt S, Khamene A, Sauer F, Wendt M, Duerk JL, Lewin JS, Wolf KJ. MR image-guided needle biopsies with a combination of augmented reality and MRI: A pilot study in phantoms and animals. Int Congr Ser 2005. [DOI: 10.1016/j.ics.2005.03.300]
28
Abstract
Computer-assisted surgery (CAS) utilizing robotic or image-guided technologies has been introduced into various orthopedic fields. Navigation and robotic systems are the most advanced parts of CAS, and their range of functions and applications is increasing. Surgical navigation is a visualization system that gives positional information about surgical tools or implants relative to a target organ (bone) on a computer display. There are three types of surgical planning that involve navigation systems. One makes use of volumetric images, such as computed tomography, magnetic resonance imaging, or ultrasound echograms. Another makes use of intraoperative fluoroscopic images. The last type makes use of kinetic information about joints or morphometric information about the target bones obtained intraoperatively. Systems that involve these planning methods are called volumetric image-based navigation, fluoroscopic navigation, and imageless navigation, respectively. To overcome the inaccuracy of hand-controlled positioning of surgical tools, three robotic systems have been developed. One type directs a cutting guide block or a drilling guide sleeve, with surgeons sliding a bone saw or a drill bit through the guide instrument to execute a surgical action. Another type constrains the range of movement of a surgical tool held by a robot arm such as ACROBOT. The last type is an active system, such as ROBODOC or CASPAR, which directs a milling device automatically according to preoperative planning. These CAS systems, their potential, and their limitations are reviewed here. Future technologies and future directions of CAS that will help provide improved patient outcomes in a cost-effective manner are also discussed.
Affiliation(s)
- Nobuhiko Sugano
- Department of Orthopaedic Surgery, Osaka Graduate School of Medicine, 2-2 Yamadaoka, Suita 565-0871, Japan

29
Miga MI, Sinha TK, Cash DM, Galloway RL, Weil RJ. Cortical surface registration for image-guided neurosurgery using laser-range scanning. IEEE Trans Med Imaging 2003; 22:973-85. [PMID: 12906252 PMCID: PMC3819811 DOI: 10.1109/tmi.2003.815868]
Abstract
In this paper, a method of acquiring intraoperative data using a laser range scanner (LRS) is presented within the context of model-updated image-guided surgery. Registering textured point clouds generated by the LRS to tomographic data is explored using established point-based and surface techniques as well as a novel method that incorporates geometry and intensity information via mutual information (SurfaceMI). Phantom registration studies were performed to examine accuracy and robustness for each framework. In addition, an in vivo registration is performed to demonstrate feasibility of the data acquisition system in the operating room. Results indicate that SurfaceMI performed better in many cases than point-based (PBR) and iterative closest point (ICP) methods for registration of textured point clouds. Mean target registration error (TRE) for simulated deep tissue targets in a phantom were 1.0 +/- 0.2, 2.0 +/- 0.3, and 1.2 +/- 0.3 mm for PBR, ICP, and SurfaceMI, respectively. With regard to in vivo registration, the mean TRE of vessel contour points for each framework was 1.9 +/- 1.0, 0.9 +/- 0.6, and 1.3 +/- 0.5 for PBR, ICP, and SurfaceMI, respectively. The methods discussed in this paper in conjunction with the quantitative data provide impetus for using LRS technology within the model-updated image-guided surgery framework.
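Several entries in this list benchmark iterative closest point (ICP) surface registration. For orientation, a minimal point-to-point ICP loop is sketched below; this is illustrative only, not any of the compared implementations, and the `icp` function name is hypothetical:

```python
import numpy as np

def icp(src, dst, iters=20):
    """Point-to-point ICP: rigidly align src (N, 3) onto dst (M, 3).

    Each iteration pairs every moved source point with its nearest
    destination point, solves the resulting paired-point problem in
    closed form (SVD), and composes the incremental transform.
    """
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences (fine for small N, M)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        Q = dst[np.argmin(d2, axis=1)]
        p_mean, q_mean = cur.mean(axis=0), Q.mean(axis=0)
        H = (cur - p_mean).T @ (Q - q_mean)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T                    # incremental rotation
        dt = q_mean - dR @ p_mean
        cur = cur @ dR.T + dt                  # move the source cloud
        R, t = dR @ R, dR @ t + dt             # compose running transform
    return R, t, cur
```

Variants such as SurfaceMI augment the geometric distance with intensity information, and outlier-robust ICP trims or downweights poor correspondences before the SVD step.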
Affiliation(s)
- Michael I Miga
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA.

30
An Augmented Reality Navigation System with a Single-Camera Tracker: System Design and Needle Biopsy Phantom Trial. MICCAI 2002, Lecture Notes in Computer Science. [DOI: 10.1007/3-540-45787-9_15]
31
Abstract
The human body has been depicted in ancient cave-paintings, in primitively sculpted figures, and through all the ages in various forms of artistic expression. The earliest medical texts were descriptive but not illustrated. Later, as it became clear that knowledge of the human body and all its systems was essential to the practice of healing, texts were accompanied by illustrations which became an integral part of the teaching process. The illustrators included artists, whose interest was primarily artistic, but who were sometimes employed by surgeons or physicians to illustrate their texts. Occasionally, the physicians or scientists accompanied their texts with their own illustrations, and in the last century, medical illustration, in its infinite variety of techniques, has been developed as a profession in its own right. As knowledge was extended, permitted by social and cultural change, as well as by technological advances, the types of illustrations have ranged from gross anatomy through dissections showing the various organ systems, histological preparations, and radiological images, right up to the computerized digital imagery that is available today, which allows both static and dynamic two- and three-dimensional representations to be transmitted electronically across the world in a matter of seconds. The techniques used to represent medical knowledge pictorially have been as varied as the illustrators themselves, involving drawing, engraving, printing, photography, cinematography and digital processing. Each new technique has built on previous experience to broaden medical knowledge and make it accessible to an ever-widening audience. This vast accumulation of pictorial material has posed considerable problems of storage, cataloguing, retrieval, display and dissemination of the information, as well as questions of ethics, validity, manipulation and reliability. This paper traces these developments, illustrating them with representative examples drawn from the inexhaustible store of documents accumulated over more than two millennia.
Affiliation(s)
- J Tsafrir
- Medical Library, Chaim Sheba Medical Center, Tel Hashomer, Israel.

32
Boag AH, Kennedy LA, Miller MJ. Three-dimensional microscopic image reconstruction of prostatic adenocarcinoma. Arch Pathol Lab Med 2001; 125:562-6. [PMID: 11260639 DOI: 10.5858/2001-125-0562-tdmiro]
Abstract
CONTEXT Routine microscopy provides only a 2-dimensional view of the complex 3-dimensional structure that makes up human tissue. Three-dimensional microscopic image reconstruction has not been described previously for prostate cancer. OBJECTIVES To develop a simple method of computerized 3-dimensional image reconstruction and to demonstrate its applicability to the study of prostatic adenocarcinoma. METHODS Serial sections were cut from archival paraffin-embedded prostate specimens, immunostained using antikeratin CAM5.2, and digitally imaged. Computer image-rendering software was used to produce 3-dimensional image reconstructions of prostate cancer of varying Gleason grades, normal prostate, and prostatic intraepithelial neoplasia. RESULTS The rendering system proved easy to use and provided good-quality 3-dimensional images of most specimens. Normal prostate glands formed irregular fusiform structures branching off central tubular ducts. Prostatic intraepithelial neoplasia showed external contours similar to those of normal glands, but with a markedly complex internal arrangement of branching lumens. Gleason grade 3 carcinoma was found to consist of a complex array of interconnecting tubules rather than the apparently separate glands seen in 2 dimensions on routine light microscopy. Gleason grade 4 carcinoma demonstrated a characteristic form of glandular fusion that was readily visualized by optically sectioning and rotating the reconstructed images. CONCLUSIONS Computerized 3-dimensional microscopic imaging holds great promise as an investigational tool. By revealing the structural relationships of the various Gleason grades of prostate cancer, this method could be used to refine diagnostic and grading criteria for this common tumor.
Affiliation(s)
- A H Boag
- Department of Pathology, Queen's University at Kingston, Kingston, Ontario, Canada K7L 3N6.

33
Sauer F, Khamene A, Bascle B, Rubino GJ. A Head-Mounted Display System for Augmented Reality Image Guidance: Towards Clinical Evaluation for iMRI-guided Neurosurgery. Medical Image Computing and Computer-Assisted Intervention (MICCAI 2001). [DOI: 10.1007/3-540-45468-3_85]
|
34
|
Miga MI, Paulsen KD, Hoopes PJ, Kennedy FE, Hartov A, Roberts DW. In vivo modeling of interstitial pressure in the brain under surgical load using finite elements. J Biomech Eng 2000; 122:354-63. [PMID: 11036558 DOI: 10.1115/1.1288207] [Citation(s) in RCA: 56] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Current brain deformation models have predominantly reflected solid constitutive relationships generated from empirical ex vivo data and have largely overlooked interstitial hydrodynamic effects. In the context of a technique to update images intraoperatively for image-guided neuronavigation, we have developed and quantified the deformation characteristics of a three-dimensional porous media finite element model of brain deformation in vivo. Results have demonstrated at least 75-85 percent predictive capability, but have also indicated that interstitial hydrodynamics are important. In this paper we investigate interstitial pressure transient behavior in brain tissue when subjected to an acute surgical load consistent with neurosurgical events. Data are presented from three in vivo porcine experiments where subsurface tissue deformation and interhemispheric pressure gradients were measured under conditions of an applied mechanical deformation and then compared to calculations with our three-dimensional brain model. Results demonstrate that porous-media consolidation captures the hydraulic behavior of brain tissue subjected to comparable surgical loads and that the experimental protocol causes minimal trauma to porcine brain tissue. Working values for hydraulic conductivity of white and gray matter are also reported and an assessment of transient pressure gradient effects with respect to deformation is provided.
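The porous-media (consolidation) view of brain tissue adopted above can be illustrated with the classic one-dimensional Terzaghi problem: a step surgical load raises excess interstitial pressure, which then drains through the loaded surface. The geometry, load, and hydraulic values in this sketch are hypothetical placeholders, not the paper's in vivo parameters:

```python
import numpy as np

def pressure_transient(c_v, depth, q, t_end, nx=50, nt=4000):
    """Explicit finite-difference solution of p_t = c_v * p_xx for excess
    interstitial pressure p(x): drained at x=0 (loaded surface),
    impermeable at x=depth."""
    dx = depth / (nx - 1)
    dt = t_end / nt
    r = c_v * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit violated"
    p = np.full(nx, q)  # undrained initial response: load carried by fluid
    for _ in range(nt):
        lap = np.zeros_like(p)
        lap[1:-1] = p[2:] - 2 * p[1:-1] + p[:-2]
        lap[-1] = 2 * (p[-2] - p[-1])  # ghost node: no-flow base
        p = p + r * lap
        p[0] = 0.0                     # free-draining loaded surface
    return p

# hypothetical values: 1 cm tissue column, 1 kPa load, 10 min of drainage
p = pressure_transient(c_v=1e-7, depth=0.01, q=1000.0, t_end=600.0)
```

The transient shows the qualitative behavior the experiments probe: pressure dissipates first near the drained surface while a gradient persists toward the impermeable boundary, with the decay rate set by the hydraulic conductivity of the tissue.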
Affiliation(s)
- M I Miga
- Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA.
|
35
|
Affiliation(s)
- B Allan
- Moorfields Eye Hospital, City Road, London EC1V 2PD
|
36
|
Abstract
Image-guided navigation for surgery and other therapeutic interventions has grown in importance in recent years. During image-guided navigation a target is detected, localized and characterized for diagnosis and therapy. Thus, images are used to select, plan, guide and evaluate therapy, thereby reducing invasiveness and improving outcomes. A shift from traditional open surgery to less-invasive image-guided surgery will continue to impact the surgical marketplace. Increases in the speed and capacity of computers and computer networks have enabled image-guided interventions. Key elements in image navigation systems are pre-operative 3D imaging (or real-time image acquisition), a graphical display and interactive input devices, such as surgical instruments with light emitting diodes (LEDs). CT and MRI, 3D imaging devices, are commonplace today and 3D images are useful in complex interventions such as radiation oncology and surgery. For example, integrated surgical imaging workstations can be used for frameless stereotaxy during neurosurgical interventions. In addition, imaging systems are being expanded to include decision aids in diagnosis and treatment. Electronic atlases, such as Voxel Man or others derived from the Visible Human Project, combine a set of image data with non-image knowledge such as anatomic labels. Robot assistants and magnetic guidance technology are being developed for minimally invasive surgery and other therapeutic interventions. Major progress is expected at the interface between the disciplines of radiology and surgery where imaging, intervention and informatics converge.
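The coordinate mapping that underlies LED-tracked navigation (registering tracker-space fiducials to pre-operative image space) is commonly solved as a point-based rigid fit. Below is a minimal SVD-based (Kabsch) sketch with synthetic fiducials, not any specific vendor's algorithm:

```python
import numpy as np

def rigid_register(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B
    (Kabsch/Procrustes): the core of point-based navigation registration."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# demo: tracker-space fiducials vs. their known image-space positions
rng = np.random.default_rng(0)
tracker_pts = rng.uniform(-50, 50, (6, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
image_pts = tracker_pts @ R_true.T + np.array([5.0, -2.0, 10.0])
R, t = rigid_register(tracker_pts, image_pts)
# fiducial registration error (mean residual after mapping)
fre = np.linalg.norm(tracker_pts @ R.T + t - image_pts, axis=1).mean()
```

Once (R, t) is known, any tracked instrument tip in tracker coordinates can be displayed at its corresponding position in the pre-operative image volume; the residual (FRE) is the usual first check on registration quality.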
Affiliation(s)
- M W Vannier
- Department of Radiology, University of Iowa, Iowa City 52252, USA.
|