1. Rahmani M, Moghaddasi H, Pour-Rashidi A, Ahmadian A, Najafzadeh E, Farnia P. D2BGAN: Dual Discriminator Bayesian Generative Adversarial Network for Deformable MR-Ultrasound Registration Applied to Brain Shift Compensation. Diagnostics (Basel) 2024; 14:1319. [PMID: 39001209] [PMCID: PMC11240784] [DOI: 10.3390/diagnostics14131319]
Abstract
During neurosurgical procedures, the accuracy of the neuro-navigation system is degraded by the brain shift phenomenon. One popular strategy to compensate for brain shift is registration of intraoperative ultrasound (iUS) with pre-operative magnetic resonance (MR) scans. This requires a reliable multimodal image registration method, which is challenging because of the low image quality of ultrasound and the unpredictable nature of brain deformation during surgery. In this paper, we propose an automatic, unsupervised, end-to-end MR-iUS registration approach named the Dual Discriminator Bayesian Generative Adversarial Network (D2BGAN). The network consists of a generator and two discriminators; the generator is optimized with a Bayesian loss function to improve its performance, and a mutual information loss is added to the discriminators as a similarity measure. Extensive validation was performed on the RESECT and BITE datasets, where the mean target registration error (mTRE) of MR-iUS registration using D2BGAN was 0.75 ± 0.3 mm. D2BGAN showed a clear advantage, achieving an 85% improvement in mTRE over the initial error. Moreover, the results confirmed that the proposed Bayesian loss function, rather than a conventional loss function, improved the accuracy of MR-iUS registration by 23%. This gain in registration accuracy was achieved while preserving the intensity and anatomical information of the input images.
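Most of the papers collected here, including this one, report accuracy as the mean target registration error (mTRE) over manually identified landmark pairs. The following is a minimal illustrative sketch of that metric in Python/NumPy, not code from any of the cited studies; the landmark arrays and the optional `transform` callable are hypothetical placeholders.

```python
import numpy as np

def mean_target_registration_error(fixed_pts, moving_pts, transform=None):
    """mTRE: mean Euclidean distance (mm) between homologous landmarks.

    fixed_pts, moving_pts : (N, 3) arrays of corresponding landmark
        coordinates in the fixed (e.g., iUS) and moving (e.g., MR) spaces.
    transform : optional callable mapping an (N, 3) array of moving-space
        points into the fixed space; identity if None (initial error).
    """
    warped = moving_pts if transform is None else transform(moving_pts)
    return float(np.mean(np.linalg.norm(np.asarray(fixed_pts) - warped, axis=1)))

# Toy example with synthetic landmarks (hypothetical numbers, not study data).
rng = np.random.default_rng(0)
fixed = rng.uniform(0.0, 100.0, size=(15, 3))             # landmarks in mm
moving = fixed + rng.normal(0.0, 3.0, size=fixed.shape)   # misaligned copies
print("initial mTRE (mm):", mean_target_registration_error(fixed, moving))
```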
Affiliation(s)
- Mahdiyeh Rahmani
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Hadis Moghaddasi
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Ahmad Pour-Rashidi
- Department of Neurosurgery, Sina Hospital, School of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 11367469111, Iran
- Alireza Ahmadian
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Ebrahim Najafzadeh
- Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran 1417466191, Iran
- Department of Molecular Imaging, Faculty of Advanced Technologies in Medicine, Iran University of Medical Sciences, Tehran 1449614535, Iran
- Parastoo Farnia
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
2. Bierbrier J, Eskandari M, Giovanni DAD, Collins DL. Toward Estimating MRI-Ultrasound Registration Error in Image-Guided Neurosurgery. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:999-1015. [PMID: 37022005] [DOI: 10.1109/tuffc.2023.3239320]
Abstract
Image-guided neurosurgery allows surgeons to view their tools in relation to preoperatively acquired patient images and models. To keep neuronavigation systems usable throughout an operation, registration between preoperative images [typically magnetic resonance imaging (MRI)] and intraoperative images (e.g., ultrasound) is commonly performed to account for brain shift (deformation of the brain during surgery). We implemented a method to estimate MRI-ultrasound registration errors, with the goal of enabling surgeons to quantitatively assess the performance of linear or nonlinear registrations. To the best of our knowledge, this is the first dense error-estimating algorithm applied to multimodal image registration. The algorithm is based on a previously proposed sliding-window convolutional neural network that operates on a voxelwise basis. To create training data where the true registration error is known, simulated ultrasound images were created from preoperative MRI images and artificially deformed. The model was evaluated on artificially deformed simulated ultrasound data and on real ultrasound data with manually annotated landmark points. The model achieved a mean absolute error (MAE) of 0.977 ± 0.988 mm and a correlation of 0.8 ± 0.062 on the simulated ultrasound data, and an MAE of 2.24 ± 1.89 mm and a correlation of 0.246 on the real ultrasound data. We discuss concrete areas in which to improve the results on real ultrasound data. Our progress lays the foundation for future developments and, ultimately, implementation in clinical neuronavigation systems.
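The evaluation above compares predicted and true registration errors with a mean absolute error (MAE) and a correlation coefficient. Below is a hedged sketch of how such an evaluation can be computed, assuming the predicted and ground-truth error values are available as NumPy arrays; the synthetic values are placeholders, not the paper's data.

```python
import numpy as np

def mae(pred, true):
    """Mean absolute error between predicted and true registration errors (mm)."""
    pred, true = np.asarray(pred, float).ravel(), np.asarray(true, float).ravel()
    return float(np.mean(np.abs(pred - true)))

def pearson_r(pred, true):
    """Pearson correlation between predicted and true errors."""
    pred, true = np.asarray(pred, float).ravel(), np.asarray(true, float).ravel()
    return float(np.corrcoef(pred, true)[0, 1])

# Synthetic voxel-wise error values standing in for a validation set.
rng = np.random.default_rng(1)
true_err = rng.uniform(0.0, 5.0, size=1000)            # ground-truth errors (mm)
pred_err = true_err + rng.normal(0.0, 1.0, size=1000)  # imperfect predictions
print(f"MAE = {mae(pred_err, true_err):.3f} mm, r = {pearson_r(pred_err, true_err):.3f}")
```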
3. Kim JT, Di L, Etame AB, Olson S, Vogelbaum MA, Tran ND. Use of virtual magnetic resonance imaging to compensate for brain shift during image-guided surgery: illustrative case. J Neurosurg Case Lessons 2022; 3:CASE21683. [PMID: 35733635] [PMCID: PMC9204912] [DOI: 10.3171/case21683]
Abstract
BACKGROUND Maximal safe resection is the paramount objective in the surgical management of malignant brain tumors. It is facilitated through the use of image-guided neuronavigation. Intraoperative image guidance systems use preoperative magnetic resonance imaging (MRI) as the navigational map. Neuronavigation is limited by intraoperative brain shift and can become less accurate over the course of the procedure. Intraoperative MRI can compensate for dynamic brain shift but requires significant space and capital investment, often unavailable at many centers. OBSERVATIONS The authors described a case in which an image fusion algorithm was used in conjunction with an intraoperative computed tomography (CT) system to compensate for brain shift during resection of a brainstem hemorrhagic melanoma metastasis. Following initial debulking of the hemorrhagic metastasis, intraoperative CT was performed to ascertain the extent of resection. An elastic image fusion (EIF) algorithm was used to create a virtual MRI from the intraoperative CT scan and the preoperative MRI, which facilitated complete resection of the tumor while preserving critical brainstem anatomy. LESSONS EIF algorithms can be used with multimodal images (preoperative MRI and intraoperative CT) to create an updated virtual MRI data set that compensates for brain shift in neurosurgery and aids maximal safe resection of malignant brain tumors.
Affiliation(s)
- John T. Kim
- Department of Neurosurgery, University of South Florida, Tampa, Florida; and
- Long Di
- Department of Neurosurgery, University of South Florida, Tampa, Florida; and
- Arnold B. Etame
- Department of Neurosurgery, University of South Florida, Tampa, Florida; and
- Department of Neuro-Oncology, Moffitt Cancer Center and Research Institute, Tampa, Florida
- Sarah Olson
- Department of Neuro-Oncology, Moffitt Cancer Center and Research Institute, Tampa, Florida
- Michael A. Vogelbaum
- Department of Neurosurgery, University of South Florida, Tampa, Florida; and
- Department of Neuro-Oncology, Moffitt Cancer Center and Research Institute, Tampa, Florida
- Nam D. Tran
- Department of Neurosurgery, University of South Florida, Tampa, Florida; and
- Department of Neuro-Oncology, Moffitt Cancer Center and Research Institute, Tampa, Florida
4. Riva M, Hiepe P, Frommert M, Divenuto I, Gay LG, Sciortino T, Nibali MC, Rossi M, Pessina F, Bello L. Intraoperative Computed Tomography and Finite Element Modelling for Multimodal Image Fusion in Brain Surgery. Oper Neurosurg (Hagerstown) 2021; 18:531-541. [PMID: 31342073] [DOI: 10.1093/ons/opz196]
Abstract
BACKGROUND Intraoperative computed tomography (iCT) and advanced image fusion algorithms could improve the management of brain shift and navigation accuracy. OBJECTIVE To evaluate the performance of an iCT-based fusion algorithm using clinical data. METHODS Ten patients with brain tumors were enrolled, and preoperative MRI was acquired. iCT was applied at the end of microsurgical resection. Elastic image fusion of the preoperative MRI to the iCT data was performed by deformable fusion employing a biomechanical simulation based on a finite element model. Fusion accuracy was evaluated: the target registration error (TRE, mm) was measured for rigid and elastic fusion (Rf and Ef), and anatomical landmark pairs were divided into test and control structures according to their distinct involvement by brain shift. Intraoperative points describing the stereotactic position of the brain were also acquired, and a qualitative evaluation of the adaptive morphing of the preoperative MRI was performed by 5 observers. RESULTS The mean TRE for control and test structures with Rf was 1.81 ± 1.52 and 5.53 ± 2.46 mm, respectively. No significant change was observed when applying Ef to control structures; the test structures showed reduced TRE values of 3.34 ± 2.10 mm after Ef (P < .001). A 32% average gain (range 9%-54%) in image registration accuracy was recorded. The morphed MRI showed robust matching with the iCT scans and the intraoperative stereotactic points. CONCLUSIONS The evaluated method increased the registration accuracy of preoperative MRI and iCT data. The iCT-based non-linear morphing of the preoperative MRI can potentially enhance the consistency of neuronavigation intraoperatively.
Affiliation(s)
- Marco Riva
- Department of Medical Biotechnology and Translational Medicine, Università degli Studi di Milano, Milan, Italy; Unit of Oncological Neurosurgery, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Ignazio Divenuto
- Unit of Neuroradiology, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Lorenzo G Gay
- Unit of Oncological Neurosurgery, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Tommaso Sciortino
- Unit of Oncological Neurosurgery, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Marco Conti Nibali
- Unit of Oncological Neurosurgery, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Marco Rossi
- Unit of Oncological Neurosurgery, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Federico Pessina
- Unit of Oncological Neurosurgery, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy; Department of Biomedical Sciences, Humanitas University, Rozzano, Italy
- Lorenzo Bello
- Unit of Oncological Neurosurgery, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy; Department of Oncology and Hemato-oncology, Università degli Studi di Milano, Milan, Italy
5.
Abstract
This article discusses intraoperative imaging techniques used during high-grade glioma surgery. Gliomas can be difficult to differentiate from surrounding tissue during surgery. Intraoperative imaging helps to alleviate problems encountered during glioma surgery, such as brain shift and residual tumor. A variety of modalities are available, all of which aim to give the surgeon more information, address brain shift, identify residual tumor, and increase the extent of surgical resection. The article begins with a brief introduction, followed by a review of the latest advances in intraoperative ultrasound, intraoperative MRI, and intraoperative computed tomography.
Affiliation(s)
- Thomas Noh
- Department of Neurosurgery, Brigham and Women's Hospital, 75 Francis Street, Boston, MA 02115, USA; Hawaii Pacific Health, John A Burns School of Medicine, Honolulu, Hawaii, USA
- Martina Mustroph
- Department of Neurosurgery, Brigham and Women's Hospital, 75 Francis Street, Boston, MA 02115, USA; Harvard Medical School, Boston, Massachusetts, USA
- Alexandra J Golby
- Department of Neurosurgery, Brigham and Women's Hospital, 75 Francis Street, Boston, MA 02115, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA.
6. Automatic and efficient MRI-US segmentations for improving intraoperative image fusion in image-guided neurosurgery. Neuroimage Clin 2019; 22:101766. [PMID: 30901714] [PMCID: PMC6425116] [DOI: 10.1016/j.nicl.2019.101766]
Abstract
Knowledge of the exact tumor location and of the structures at risk in its vicinity is crucial for neurosurgical interventions. Neuronavigation systems support navigation within the patient's brain based on preoperative MRI (preMRI). However, increasing tissue deformation during the course of tumor resection reduces the accuracy of navigation based on preMRI. Intraoperative ultrasound (iUS) is therefore used for real-time intraoperative imaging. Registration of preMRI and iUS remains a challenge due to different or varying contrasts in iUS and preMRI. Here, we present an automatic and efficient segmentation of B-mode US images to support the registration process. The falx cerebri and the tentorium cerebelli were identified as examples of central cerebral structures, and their segmentations can serve as a guiding frame for multi-modal image registration. Segmentations of the falx and tentorium were obtained with an average Dice coefficient of 0.74 and an average Hausdorff distance of 12.2 mm. The subsequent registration incorporates these segmentations and increases the accuracy, robustness, and speed of the overall registration process compared with purely intensity-based registration. For validation, an expert manually located corresponding landmarks. Our approach reduces the initial mean target registration error from 16.9 mm to 3.8 mm using our intensity-based registration and to 2.2 mm with our combined segmentation and registration approach. The intensity-based registration reduced the maximum initial TRE from 19.4 mm to 5.6 mm; with the approach incorporating segmentations, this is reduced to 3.0 mm. Mean volumetric intensity-based registration of preMRI and iUS took 40.5 s; with the segmentations included, it took 12.0 s. We demonstrate that our segmentation-based registration increases the accuracy, robustness, and speed of multi-modal registration of preoperative MRI and intraoperative ultrasound images for image-guided neurosurgery. To this end, we provide a fast and efficient segmentation of central anatomical structures of the perifalcine region on ultrasound images. We demonstrate the advantages of our method by comparing the results of our segmentation-based registration with the initial registration provided by the navigation system and with an intensity-based registration approach.
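The segmentation quality above is reported with the Dice coefficient and the Hausdorff distance. A minimal sketch of both measures follows, assuming binary masks (for Dice) and point sets drawn from the segmentations (for the symmetric Hausdorff distance); it is an illustration, not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets of shape (N, d)."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

# Toy 2-D masks; their voxel coordinate sets stand in for segmented surfaces.
a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), bool); b[14:44, 12:42] = True
print("Dice:", round(dice(a, b), 3))
print("Hausdorff (pixels):", round(hausdorff(np.argwhere(a), np.argwhere(b)), 2))
```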
7. Machado I, Toews M, Luo J, Unadkat P, Essayed W, George E, Teodoro P, Carvalho H, Martins J, Golland P, Pieper S, Frisken S, Golby A, Wells W. Non-rigid registration of 3D ultrasound for neurosurgery using automatic feature detection and matching. Int J Comput Assist Radiol Surg 2018; 13:1525-1538. [PMID: 29869321] [PMCID: PMC6151276] [DOI: 10.1007/s11548-018-1786-7]
Abstract
PURPOSE The brain undergoes significant structural change over the course of neurosurgery, including highly nonlinear deformation and resection. It can be informative to recover the spatial mapping between structures identified in preoperative surgical planning and the intraoperative state of the brain. We present a novel feature-based method for achieving robust, fully automatic deformable registration of intraoperative neurosurgical ultrasound images. METHODS A sparse set of local image feature correspondences is first estimated between ultrasound image pairs, after which rigid, affine and thin-plate spline models are used to estimate dense mappings throughout the image. Correspondences are derived from 3D features, distinctive generic image patterns that are automatically extracted from 3D ultrasound images and characterized in terms of their geometry (i.e., location, scale, and orientation) and a descriptor of local image appearance. Feature correspondences between ultrasound images are achieved based on a nearest-neighbor descriptor matching and probabilistic voting model similar to the Hough transform. RESULTS Experiments demonstrate our method on intraoperative ultrasound images acquired before and after opening of the dura mater, during resection and after resection in nine clinical cases. A total of 1620 automatically extracted 3D feature correspondences were manually validated by eleven experts and used to guide the registration. Then, using manually labeled corresponding landmarks in the pre- and post-resection ultrasound images, we show that our feature-based registration reduces the mean target registration error from an initial value of 3.3 to 1.5 mm. CONCLUSIONS This result demonstrates that the 3D features promise to offer a robust and accurate solution for 3D ultrasound registration and to correct for brain shift in image-guided neurosurgery.
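As a rough illustration of the pipeline described above (sparse descriptor correspondences followed by a thin-plate spline fit), the sketch below matches descriptor vectors by nearest neighbour and interpolates a dense 3-D mapping with SciPy's RBFInterpolator (SciPy ≥ 1.7) using a thin-plate-spline kernel. The feature locations and descriptors are random placeholders, not the paper's 3D features, and the Hough-style voting and outlier checks used by the authors are omitted.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

# Hypothetical feature locations (mm) and 64-D appearance descriptors
# extracted from two ultrasound volumes (stand-ins for real 3D features).
pts_pre = rng.uniform(0, 100, size=(40, 3))
desc_pre = rng.normal(size=(40, 64))
true_shift = np.array([2.0, -1.5, 0.8])            # simulated deformation
pts_post = pts_pre + true_shift
desc_post = desc_pre + rng.normal(scale=0.05, size=desc_pre.shape)

# Nearest-neighbour descriptor matching (brute force for clarity).
d2 = ((desc_pre[:, None, :] - desc_post[None, :, :]) ** 2).sum(-1)
nn = d2.argmin(axis=1)                             # best match index in the post image
src, dst = pts_pre, pts_post[nn]

# Dense mapping from sparse correspondences via a thin-plate-spline RBF.
tps = RBFInterpolator(src, dst, kernel='thin_plate_spline', smoothing=1e-6)
query = rng.uniform(0, 100, size=(5, 3))
print(np.round(tps(query) - query, 2))             # recovered displacement ≈ true_shift
```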
Affiliation(s)
- Inês Machado
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA.
- IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001, Lisbon, Portugal.
- Matthew Toews
- École de Technologie Superieure, 1100 Notre-Dame St W, Montreal, QC, H3C 1K3, Canada
- Jie Luo
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- Graduate School of Frontier Sciences, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba, Japan
- Prashin Unadkat
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- Walid Essayed
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- Elizabeth George
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- Pedro Teodoro
- IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001, Lisbon, Portugal
- Herculano Carvalho
- Department of Neurosurgery, CHLN, Hospital de Santa Maria, Avenida Professor Egas Moniz, 1649-035, Lisbon, Portugal
- Jorge Martins
- IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001, Lisbon, Portugal
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
- Steve Pieper
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- Isomics, Inc., 55 Kirkland St, Cambridge, MA, 02138, USA
- Sarah Frisken
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- Alexandra Golby
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- William Wells
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St., Boston, MA, 02115, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
8. Xiao Y, Eikenes L, Reinertsen I, Rivaz H. Nonlinear deformation of tractography in ultrasound-guided low-grade gliomas resection. Int J Comput Assist Radiol Surg 2018; 13:457-467. [DOI: 10.1007/s11548-017-1699-x]
9. Morin F, Courtecuisse H, Reinertsen I, Le Lann F, Palombi O, Payan Y, Chabanas M. Brain-shift compensation using intraoperative ultrasound and constraint-based biomechanical simulation. Med Image Anal 2017. [DOI: 10.1016/j.media.2017.06.003]
10. Drouin S, Kochanowska A, Kersten-Oertel M, Gerard IJ, Zelmann R, De Nigris D, Bériault S, Arbel T, Sirhan D, Sadikot AF, Hall JA, Sinclair DS, Petrecca K, DelMaestro RF, Collins DL. IBIS: an OR ready open-source platform for image-guided neurosurgery. Int J Comput Assist Radiol Surg 2016; 12:363-378. [DOI: 10.1007/s11548-016-1478-0]
11. Sastry R, Bi WL, Pieper S, Frisken S, Kapur T, Wells W, Golby AJ. Applications of Ultrasound in the Resection of Brain Tumors. J Neuroimaging 2016; 27:5-15. [PMID: 27541694] [DOI: 10.1111/jon.12382]
Abstract
Neurosurgery makes use of preoperative imaging to visualize pathology, inform surgical planning, and evaluate the safety of selected approaches. The utility of preoperative imaging for neuronavigation, however, is diminished by the well-characterized phenomenon of brain shift, in which the brain deforms intraoperatively as a result of craniotomy, swelling, gravity, tumor resection, cerebrospinal fluid (CSF) drainage, and many other factors. As such, there is a need for updated intraoperative information that accurately reflects intraoperative conditions. Since 1982, intraoperative ultrasound has allowed neurosurgeons to craft and update operative plans without ionizing radiation exposure or major workflow interruption. Continued evolution of ultrasound technology since its introduction has resulted in superior imaging quality, smaller probes, and more seamless integration with neuronavigation systems. Furthermore, the introduction of related imaging modalities, such as 3-dimensional ultrasound, contrast-enhanced ultrasound, high-frequency ultrasound, and ultrasound elastography, has dramatically expanded the options available to the neurosurgeon intraoperatively. In the context of these advances, we review the current state, potential, and challenges of intraoperative ultrasound for brain tumor resection. We begin by evaluating these ultrasound technologies and their relative advantages and disadvantages. We then review three specific applications of these ultrasound technologies to brain tumor resection: (1) intraoperative navigation, (2) assessment of extent of resection, and (3) brain shift monitoring and compensation. We conclude by identifying opportunities for future directions in the development of ultrasound technologies.
Affiliation(s)
- Rahul Sastry
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Wenya Linda Bi
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Sarah Frisken
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Tina Kapur
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- William Wells
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Alexandra J Golby
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA; Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
12. Farnia P, Makkiabadi B, Ahmadian A, Alirezaie J. Curvelet based residual complexity objective function for non-rigid registration of pre-operative MRI with intra-operative ultrasound images. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2016:1167-1170. [PMID: 28268533] [DOI: 10.1109/embc.2016.7590912]
Abstract
In recent years, intra-operative ultrasound has been recognized as an effective imaging-based solution for compensating the non-rigid brain shift problem. Measuring brain shift requires registration of the pre-operative MRI images with the intra-operative ultrasound images, which is a challenging task. In this study, a novel hybrid method is proposed, based on matching echogenic structures such as sulci and the tumor boundary between MRI and ultrasound images. The matching is achieved by optimizing the residual complexity (RC) in the curvelet domain. In the first step, a probabilistic map of the MR image is computed, and the residual image is obtained as the difference between this probabilistic map and the intra-operative ultrasound image. The curvelet transform, as a sparsifying transform, is then used to minimize the complexity of the residual image. The proposed method is a compromise between feature-based and intensity-based approaches. Evaluation was performed on a 14-patient data set, and the mean registration error reached 1.87 mm. This RC-based hybrid method improves the accuracy of non-rigid multimodal image registration by 12.5% within a computational time compatible with clinical use.
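Residual complexity measures how compactly the residual between the two images can be represented in a sparsifying transform domain. The paper applies it in the curvelet domain, which requires a dedicated toolbox; the sketch below uses the classic DCT-based formulation as a stand-in to show the shape of the objective. The test images and the α parameter value are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn

def residual_complexity(fixed, moving, alpha=0.05):
    """DCT-based residual complexity: low values mean the residual is
    sparse/simple in the transform domain (i.e., good alignment)."""
    r = np.asarray(fixed, float) - np.asarray(moving, float)
    c = dctn(r, norm='ortho')                     # sparsifying transform of the residual
    return float(np.sum(np.log(1.0 + (c ** 2) / alpha)))

# Toy example: a shifted copy yields a more "complex" residual than the original.
rng = np.random.default_rng(3)
img = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)  # smooth-ish image
print("aligned:   ", round(residual_complexity(img, img), 2))
print("misaligned:", round(residual_complexity(img, np.roll(img, 5, axis=0)), 2))
```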
13. Jiang D, Shi Y, Yao D, Wang M, Song Z. miLBP: a robust and fast modality-independent 3D LBP for multimodal deformable registration. Int J Comput Assist Radiol Surg 2016; 11:997-1005. [PMID: 27250854] [PMCID: PMC4893381] [DOI: 10.1007/s11548-016-1407-2]
Abstract
Purpose Computer-assisted intervention often depends on multimodal deformable registration to provide complementary information. However, multimodal deformable registration remains a challenging task. Methods This paper introduces a novel, robust, and fast modality-independent 3D binary descriptor, called miLBP, which integrates the principle of local self-similarity with a form of local binary pattern and can robustly extract similar geometric features from 3D volumes across different modalities. miLBP is a bit string that can be computed by simply thresholding the voxel distance, and descriptor similarity can be evaluated efficiently using the Hamming distance. Results miLBP was compared to the vector-valued self-similarity context (SSC) in artificial-image and clinical settings. The results show that miLBP is more robust than SSC in extracting local geometric features across modalities and achieved higher registration accuracy in different registration scenarios. Furthermore, in the most challenging registration between preoperative magnetic resonance imaging and intra-operative ultrasound images, our approach significantly outperforms state-of-the-art methods in terms of both accuracy (2.15 ± 1.1 mm) and speed (29.2 s for one case). Conclusions The registration performance and speed indicate that miLBP has the potential to be applied to time-sensitive intra-operative computer-assisted interventions.
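To illustrate the general idea of a self-similarity binary descriptor compared with the Hamming distance, here is a much-simplified 2-D sketch (not the authors' exact miLBP definition): each bit records whether the distance between the centre patch and one neighbouring patch exceeds the mean of those distances, which makes the bit string insensitive to affine intensity changes between modalities.

```python
import numpy as np

def self_similarity_bits(image, y, x, radius=2, offset=4):
    """Simplified 2-D self-similarity binary descriptor at pixel (y, x).

    Compares the centre patch with 8 neighbouring patches and thresholds
    each patch distance against the mean distance, giving an 8-bit string.
    """
    def patch(cy, cx):
        return image[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]

    centre = patch(y, x)
    dists = []
    for dy in (-offset, 0, offset):
        for dx in (-offset, 0, offset):
            if dy == 0 and dx == 0:
                continue
            dists.append(np.sum((patch(y + dy, x + dx) - centre) ** 2))
    dists = np.array(dists)
    return (dists > dists.mean()).astype(np.uint8)   # bit string (length 8)

def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return int(np.count_nonzero(a != b))

# Two toy "modalities" of the same scene: same geometry, different intensities.
rng = np.random.default_rng(4)
scene = rng.normal(size=(40, 40)).cumsum(axis=1)
mod_a = scene
mod_b = 2.0 - 0.5 * scene                            # inverted/rescaled contrast
bits_a = self_similarity_bits(mod_a, 20, 20)
bits_b = self_similarity_bits(mod_b, 20, 20)
print("bits A:", bits_a, "bits B:", bits_b, "Hamming:", hamming(bits_a, bits_b))
```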
Affiliation(s)
- Dongsheng Jiang
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China; Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
- Yonghong Shi
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China; Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
- Demin Yao
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China; Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
- Manning Wang
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China; Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
- Zhijian Song
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China; Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
15. Co-registration and distortion correction of diffusion and anatomical images based on inverse contrast normalization. Neuroimage 2015; 115:269-80. [PMID: 25827811] [DOI: 10.1016/j.neuroimage.2015.03.050]
Abstract
Diffusion MRI provides quantitative information about microstructural properties which can be useful in neuroimaging studies of the human brain. Echo planar imaging (EPI) sequences, which are frequently used for acquisition of diffusion images, are sensitive to inhomogeneities in the primary magnetic (B0) field that cause localized distortions in the reconstructed images. We describe and evaluate a new method for correction of susceptibility-induced distortion in diffusion images in the absence of an accurate B0 fieldmap. In our method, the distortion field is estimated using a constrained non-rigid registration between an undistorted T1-weighted anatomical image and one of the distorted EPI images from diffusion acquisition. Our registration framework is based on a new approach, INVERSION (Inverse contrast Normalization for VERy Simple registratION), which exploits the inverted contrast relationship between T1- and T2-weighted brain images to define a simple and robust similarity measure. We also describe how INVERSION can be used for rigid alignment of diffusion images and T1-weighted anatomical images. Our approach is evaluated with multiple in vivo datasets acquired with different acquisition parameters. Compared to other methods, INVERSION shows robust and consistent performance in rigid registration and shows improved alignment of diffusion and anatomical images relative to normalized mutual information for non-rigid distortion correction.
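The core idea, inverting and normalizing the T1-weighted intensities so that a simple intensity-difference measure can drive alignment to a T2-like EPI image, can be sketched as below. This is an illustrative reduction, not the INVERSION implementation; the synthetic images and the squared-difference cost are assumptions.

```python
import numpy as np

def invert_and_normalize(t1, eps=1e-6):
    """Map T1-like contrast toward T2-like contrast: invert, then rescale to [0, 1]."""
    t1 = np.asarray(t1, float)
    inv = t1.max() - t1                               # bright tissue becomes dark, etc.
    return (inv - inv.min()) / (inv.max() - inv.min() + eps)

def ssd(a, b):
    """Simple sum-of-squared-differences cost after normalization."""
    return float(np.sum((a - b) ** 2))

# Synthetic example: a T2-like image as a monotone decreasing function of T1.
rng = np.random.default_rng(5)
t1 = rng.uniform(0.0, 1.0, size=(64, 64))
t2_like = 1.0 - t1 ** 1.2                             # inverted contrast relationship
t2_norm = (t2_like - t2_like.min()) / (np.ptp(t2_like) + 1e-6)
print("SSD raw T1 vs T2-like:      ", round(ssd(t1, t2_norm), 1))
print("SSD inverted T1 vs T2-like: ", round(ssd(invert_and_normalize(t1), t2_norm), 1))
```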
16. Rivaz H, Chen SJS, Collins DL. Automatic deformable MR-ultrasound registration for image-guided neurosurgery. IEEE Trans Med Imaging 2015; 34:366-380. [PMID: 25248177] [DOI: 10.1109/tmi.2014.2354352]
Abstract
In this work, we present a novel algorithm for registration of 3-D volumetric ultrasound (US) and MR using the Robust PaTch-based cOrrelation Ratio (RaPTOR). RaPTOR computes local correlation ratio (CR) values on small patches and adds the CR values to form a global cost function. It is therefore invariant to large amounts of spatial intensity inhomogeneity. We also propose a novel outlier suppression technique based on the orientations of the RaPTOR gradients. Our deformation is modeled with free-form cubic B-splines. We analytically derive the derivatives of RaPTOR with respect to the transformation, i.e., the displacement of the B-spline nodes, and optimize RaPTOR using a stochastic gradient descent approach. RaPTOR is validated on MR and tracked US images of neurosurgery. Deformable registration of the US and MR images acquired, respectively, before the operation and after resection is of significant clinical importance, but challenging due to, among other factors, the large number of missing correspondences between the two images. This work is also novel in that it performs automatic registration of this challenging dataset. To validate the results, we manually locate corresponding anatomical landmarks in the US and MR images of tumor resection in brain surgery. Compared to rigid registration based on the tracking system alone, RaPTOR reduces the mean initial mTRE over 13 patients from 5.9 to 2.9 mm, and the maximum initial TRE from 17.0 to 5.9 mm. Each volumetric registration using RaPTOR takes about 30 s on a single CPU core. An important challenge in the field of medical image analysis is the shortage of publicly available datasets, which can both facilitate the advancement of new algorithms toward clinical settings and provide a benchmark for comparison. To address this problem, we will make our manually located landmarks available online.
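RaPTOR builds its cost from correlation ratio (CR) values computed on small patches. A hedged sketch of a patch-wise CR between two images follows; the binning, patch size, and test images are illustrative choices, and the analytic gradients and B-spline optimization of the paper are not shown.

```python
import numpy as np

def correlation_ratio(x, y, bins=16):
    """CR of y given x: 1 - (within-bin variance of y) / (total variance of y)."""
    x, y = np.asarray(x, float).ravel(), np.asarray(y, float).ravel()
    edges = np.linspace(x.min(), x.max() + 1e-9, bins + 1)
    labels = np.digitize(x, edges) - 1
    total_var = y.var()
    if total_var == 0:
        return 1.0
    within = 0.0
    for k in range(bins):
        yk = y[labels == k]
        if yk.size:
            within += yk.size * yk.var()
    return float(1.0 - within / (y.size * total_var))

def patchwise_cr(fixed, moving, patch=16, bins=16):
    """Average correlation ratio over non-overlapping patches (RaPTOR-style cost)."""
    H, W = fixed.shape
    vals = [correlation_ratio(fixed[i:i + patch, j:j + patch],
                              moving[i:i + patch, j:j + patch], bins)
            for i in range(0, H - patch + 1, patch)
            for j in range(0, W - patch + 1, patch)]
    return float(np.mean(vals))

# Toy multimodal pair: moving intensities are a nonlinear function of the fixed ones.
rng = np.random.default_rng(6)
fixed = rng.uniform(0, 1, size=(64, 64))
moving_aligned = np.sin(3.0 * fixed)                 # same structure, different contrast
print("aligned CR:   ", round(patchwise_cr(fixed, moving_aligned), 3))
print("misaligned CR:", round(patchwise_cr(fixed, np.roll(moving_aligned, 8, axis=1)), 3))
```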
17. Deformable registration of preoperative MR, pre-resection ultrasound, and post-resection ultrasound images of neurosurgery. Int J Comput Assist Radiol Surg 2014; 10:1017-28. [PMID: 25373447] [DOI: 10.1007/s11548-014-1099-4]
Abstract
PURPOSE Sites that use ultrasound (US) in image-guided neurosurgery (IGNS) of brain tumors generally have three sets of imaging data: preoperative magnetic resonance (MR) image, pre-resection US, and post-resection US. The MR image is usually acquired days before the surgery, the pre-resection US is obtained after the craniotomy but before the resection, and finally, the post-resection US scan is performed after the resection of the tumor. The craniotomy and tumor resection both cause brain deformation, which significantly reduces the accuracy of the MR-US alignment. METHOD Three unknown transformations exist between the three sets of imaging data: MR to pre-resection US, pre- to post-resection US, and MR to post-resection US. We use two algorithms that we have recently developed to perform the first two registrations (i.e., MR to pre-resection US and pre- to post-resection US). Regarding the third registration (MR to post-resection US), we evaluate three strategies. The first method performs a registration between the MR and pre-resection US, and another registration between the pre- and post-resection US. It then composes the two transformations to register MR and post-resection US; we call this method compositional registration. The second method ignores the pre-resection US and directly registers the MR and post-resection US; we refer to this method as direct registration. The third method is a combination of the first and second: it uses the solution of the compositional registration as an initial solution for the direct registration method. We call this method group-wise registration. RESULTS We use data from 13 patients provided in the MNI BITE database for all of our analysis. Registration of MR and pre-resection US reduces the average of the mean target registration error (mTRE) from 4.1 to 2.4 mm. Registration of pre- and post-resection US reduces the average mTRE from 3.7 to 1.5 mm. Regarding the registration of MR and post-resection US, all three strategies reduce the mTRE. The initial average mTRE is 5.9 mm, which reduces to 3.3 mm with the compositional method, 2.9 mm with the direct technique, and 2.8 mm with the group-wise method. CONCLUSION Deformable registration of MR and pre- and post-resection US images significantly improves their alignment. Among the three methods proposed for registering the MR to post-resection US, the group-wise method gives the lowest TRE values. Since the running time of all registration algorithms is less than 2 min on one core of a CPU, they can be integrated into IGNS systems for interactive use during surgery.
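The compositional strategy described above chains two estimated transforms: MR to pre-resection US, followed by pre- to post-resection US. The sketch below illustrates the bookkeeping with homogeneous 4×4 affine matrices; the paper's transforms are deformable, and affine matrices are used here only to make the composition order concrete.

```python
import numpy as np

def affine(rotation_deg=0.0, translation=(0.0, 0.0, 0.0)):
    """Homogeneous 4x4 affine: rotation about the z-axis plus a translation (mm)."""
    t = np.radians(rotation_deg)
    A = np.eye(4)
    A[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    A[:3, 3] = translation
    return A

def apply(T, pts):
    """Apply a homogeneous transform to an (N, 3) array of points."""
    pts_h = np.c_[pts, np.ones(len(pts))]
    return (T @ pts_h.T).T[:, :3]

T_mr_to_preUS = affine(rotation_deg=2.0, translation=(1.5, -0.5, 0.0))
T_preUS_to_postUS = affine(rotation_deg=-1.0, translation=(0.0, 2.0, 1.0))

# Compositional registration: MR -> post-resection US is the matrix product,
# applied right-to-left (MR->preUS first, then preUS->postUS).
T_mr_to_postUS = T_preUS_to_postUS @ T_mr_to_preUS

pts_mr = np.array([[10.0, 20.0, 30.0], [0.0, 0.0, 0.0]])
direct = apply(T_mr_to_postUS, pts_mr)
chained = apply(T_preUS_to_postUS, apply(T_mr_to_preUS, pts_mr))
print(np.allclose(direct, chained))   # True: composition matches chaining
```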
18. Sherwood V, Civale J, Rivens I, Collins DJ, Leach MO, ter Haar GR. Development of a hybrid magnetic resonance and ultrasound imaging system. Biomed Res Int 2014; 2014:914347. [PMID: 25177702] [PMCID: PMC4142177] [DOI: 10.1155/2014/914347]
Abstract
A system which allows magnetic resonance (MR) and ultrasound (US) image data to be acquired simultaneously has been developed. B-mode and Doppler US were performed inside the bore of a clinical 1.5 T MRI scanner using a clinical 1-4 MHz US transducer with an 8-metre cable. Susceptibility artefacts and RF noise were introduced into MR images by the US imaging system. RF noise was minimised by using aluminium foil to shield the transducer. A study of MR and B-mode US image signal-to-noise ratio (SNR) as a function of transducer-phantom separation was performed using a gel phantom. This revealed that a 4 cm separation between the phantom surface and the transducer was sufficient to minimise the effect of the susceptibility artefact in MR images. MR-US imaging was demonstrated in vivo with the aid of a 2 mm VeroWhite 3D-printed spherical target placed over the thigh muscle of a rat. The target allowed single-point registration of MR and US images in the axial plane to be performed. The system was subsequently demonstrated as a tool for the targeting and visualisation of high intensity focused ultrasound exposure in the rat thigh muscle.
Affiliation(s)
- Victoria Sherwood
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, 123 Old Brompton Road, London SW7 3RP, UK
- John Civale
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, 123 Old Brompton Road, London SW7 3RP, UK
- Ian Rivens
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, 123 Old Brompton Road, London SW7 3RP, UK
- David J. Collins
- Department of Clinical Magnetic Resonance, CRUK and EPSRC Cancer Imaging Centre, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, 123 Old Brompton Road, London SW7 3RP, UK
- Martin O. Leach
- Department of Clinical Magnetic Resonance, CRUK and EPSRC Cancer Imaging Centre, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, 123 Old Brompton Road, London SW7 3RP, UK
- Gail R. ter Haar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, 123 Old Brompton Road, London SW7 3RP, UK
19. Brain-shift compensation by non-rigid registration of intra-operative ultrasound images with preoperative MR images based on residual complexity. Int J Comput Assist Radiol Surg 2014; 10:555-62. [DOI: 10.1007/s11548-014-1098-5]
20. Rivaz H, Karimaghaloo Z, Fonov VS, Collins DL. Nonrigid registration of ultrasound and MRI using contextual conditioned mutual information. IEEE Trans Med Imaging 2014; 33:708-725. [PMID: 24595344] [DOI: 10.1109/tmi.2013.2294630]
Abstract
Mutual information (MI) quantifies the information that is shared between two random variables and has been widely used as a similarity metric for multi-modal and uni-modal image registration. A drawback of MI is that it only takes into account the intensity values of corresponding pixels and not of their neighborhoods; it therefore treats images as a "bag of words", and contextual information is lost. In this work, we present Contextual Conditioned Mutual Information (CoCoMI), which conditions MI estimation on similar structures. Our rationale is that it is more likely for similar structures to undergo similar intensity transformations. The contextual analysis is performed on one of the images offline; therefore, CoCoMI does not significantly change the registration time. We use CoCoMI as the similarity measure in a regularized cost function with a B-spline deformation field and efficiently optimize the cost function using a stochastic gradient descent method. We show that, compared to state-of-the-art local MI-based similarity metrics, CoCoMI does not distort images to enforce erroneous identical intensity transformations for different image structures. We further present results on nonrigid registration of ultrasound (US) and magnetic resonance (MR) patient data from image-guided neurosurgery trials performed at our institute and publicly available in the BITE dataset. We show that CoCoMI performs significantly better than the state-of-the-art similarity metrics in US to MR registration. It reduces the average mTRE over 13 patients from 4.12 mm to 2.35 mm, and the maximum mTRE from 9.38 mm to 3.22 mm.
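Classic MI, the baseline that CoCoMI extends, can be estimated from a joint intensity histogram. A minimal sketch follows; the bin count and test images are arbitrary choices, and CoCoMI's contextual conditioning is not shown.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two images estimated from their joint intensity histogram."""
    a, b = np.asarray(a, float).ravel(), np.asarray(b, float).ravel()
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy multimodal pair: same structure, nonlinearly related intensities.
rng = np.random.default_rng(7)
us = rng.uniform(0, 1, size=(128, 128))
mr = np.exp(-3.0 * us)                              # monotone but nonlinear mapping
print("MI aligned:   ", round(mutual_information(us, mr), 3))
print("MI misaligned:", round(mutual_information(us, np.roll(mr, 20, axis=0)), 3))
```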
21. Kuklisova-Murgasova M, Cifor A, Napolitano R, Papageorghiou A, Quaghebeur G, Rutherford MA, Hajnal JV, Noble JA, Schnabel JA. Registration of 3D fetal neurosonography and MRI. Med Image Anal 2013; 17:1137-50. [PMID: 23969169] [PMCID: PMC3807810] [DOI: 10.1016/j.media.2013.07.004]
Abstract
We propose a method for registration of 3D fetal brain ultrasound with a reconstructed magnetic resonance fetal brain volume. This method, for the first time, allows the alignment of models of the fetal brain built from magnetic resonance images with 3D fetal brain ultrasound, opening possibilities to develop new, prior information based image analysis methods for 3D fetal neurosonography. The reconstructed magnetic resonance volume is first segmented using a probabilistic atlas and a pseudo ultrasound image volume is simulated from the segmentation. This pseudo ultrasound image is then affinely aligned with clinical ultrasound fetal brain volumes using a robust block-matching approach that can deal with intensity artefacts and missing features in the ultrasound images. A qualitative and quantitative evaluation demonstrates good performance of the method for our application, in comparison with other tested approaches. The intensity average of 27 ultrasound images co-aligned with the pseudo ultrasound template shows good correlation with anatomy of the fetal brain as seen in the reconstructed magnetic resonance image.
Affiliation(s)
- Maria Kuklisova-Murgasova
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK; Department of Biomedical Engineering, King's College London, UK; Centre for the Developing Brain, King's College London, UK.
22. Hybrid Ultrasound/Magnetic Resonance Simultaneous Acquisition and Image Fusion for Motion Monitoring in the Upper Abdomen. Invest Radiol 2013; 48:333-40. [DOI: 10.1097/rli.0b013e31828236c3]
23. De Nigris D, Collins DL, Arbel T. Fast rigid registration of pre-operative magnetic resonance images to intra-operative ultrasound for neurosurgery based on high confidence gradient orientations. Int J Comput Assist Radiol Surg 2013; 8:649-61. [PMID: 23515899] [DOI: 10.1007/s11548-013-0826-6]
Abstract
PURPOSE We present a novel approach for the registration of pre-operative magnetic resonance images to intra-operative ultrasound images in the context of image-guided neurosurgery. METHOD Our technique relies on the maximization of gradient orientation alignment at a reduced set of high-confidence locations of interest and allows for fast, accurate, and robust registration. Performance is compared with multiple state-of-the-art techniques, including conventional intensity-based multi-modal registration strategies as well as other context-specific approaches. All methods were evaluated on fourteen clinical neurosurgical cases with brain tumors, including low-grade and high-grade gliomas, from the publicly available MNI BITE dataset. The registration accuracy of each method is evaluated as the mean distance between homologous landmarks identified by two or three experts. We provide an analysis of the landmarks used and expose some of the limitations in validation introduced by expert disagreement and uncertainty in identifying corresponding points. RESULTS The proposed approach yields a mean error of 2.57 mm across all cases (the smallest among all evaluated techniques). Additionally, it is the only evaluated technique that resolves all cases with a mean distance within 1 mm of the theoretical minimal mean distance achievable with a rigid transformation. CONCLUSION The proposed method also provides reduced processing times, with an average registration time of 0.76 s in a GPU-based implementation, thereby facilitating its integration into the operating room.
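A reduced version of the similarity idea, evaluating how well image gradient orientations agree at a set of high-gradient locations, can be sketched as follows. The selection rule, the squared-cosine weighting, and the toy images are assumptions for illustration; the paper's confidence criteria and optimization are not reproduced.

```python
import numpy as np

def gradient_orientation_alignment(fixed, moving, keep_fraction=0.1, eps=1e-12):
    """Mean squared cosine of the angle between gradients of `fixed` and `moving`,
    evaluated only at the locations with the strongest `fixed` gradients."""
    gfy, gfx = np.gradient(np.asarray(fixed, float))
    gmy, gmx = np.gradient(np.asarray(moving, float))
    mag_f = np.hypot(gfx, gfy)
    mag_m = np.hypot(gmx, gmy)
    # High-confidence locations: strongest gradients in the fixed image.
    thresh = np.quantile(mag_f, 1.0 - keep_fraction)
    sel = mag_f >= thresh
    dot = gfx[sel] * gmx[sel] + gfy[sel] * gmy[sel]
    cos2 = (dot ** 2) / ((mag_f[sel] ** 2) * (mag_m[sel] ** 2) + eps)
    return float(cos2.mean())   # 1.0 = perfectly parallel/antiparallel gradients

# Toy example: identical structure vs. a shifted copy.
rng = np.random.default_rng(8)
img = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)
print("aligned:   ", round(gradient_orientation_alignment(img, img), 3))
print("misaligned:", round(gradient_orientation_alignment(img, np.roll(img, 15, axis=1)), 3))
```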
Affiliation(s)
- Dante De Nigris
- Centre for Intelligent Machines, McGill University, Montreal, QC, H3A 0E9, Canada.
24. Mercier L, Del Maestro RF, Petrecca K, Araujo D, Haegelen C, Collins DL. Online database of clinical MR and ultrasound images of brain tumors. Med Phys 2012; 39:3253-61. [DOI: 10.1118/1.4709600]
25. Qu X, Azuma T, Liang JT, Nakajima Y. Average sound speed estimation using speckle analysis of medical ultrasound data. Int J Comput Assist Radiol Surg 2012; 7:891-9. [PMID: 22544670] [DOI: 10.1007/s11548-012-0690-9]
Abstract
PURPOSE Most ultrasound imaging systems assume a pre-determined sound propagation speed for imaging. However, a mismatch between the assumed and real sound speeds leads to spatial shift and defocusing of the ultrasound image, which may limit the applicability of ultrasound imaging. Estimation of the real sound speed is therefore important for improving the positioning accuracy and focus quality of the ultrasound image. METHOD A novel method using speckle analysis of the ultrasound image is proposed for average sound speed estimation. First, dynamic receive beamforming is employed to form ultrasound images. These images are formed from the same pre-beamformed radio-frequency data but with different assumed sound speeds. Second, an improved speckle analysis method is proposed to evaluate the focus quality of these images. Third, an iterative strategy is employed to locate the sound speed that corresponds to the image with the best focus quality. RESULTS For quantitative evaluation, a group of ultrasound data with 20 different structure patterns was simulated. The comparison of estimated and simulated sound speeds shows speed estimation errors of -0.7 ± 2.54 m/s and -1.30 ± 5.15 m/s for ultrasound data obtained with linear arrays of 128 and 64 active elements, respectively. Furthermore, we validated our method with phantom experiments, where the sound speed estimation error was -1.52 ± 8.81 m/s. CONCLUSION The quantitative evaluation shows that the proposed method can accurately estimate the average sound speed using a single transducer and a single scan.
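The iterative strategy, sweeping candidate sound speeds and keeping the one whose reconstructed image scores best on a focus-quality measure, can be sketched as below. Because real beamforming is outside the scope of a short example, a synthetic focus-quality curve peaking at a hypothetical true speed stands in for the speckle-analysis score of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

TRUE_SPEED = 1540.0  # m/s, hypothetical average sound speed of the medium

def focus_quality(assumed_speed):
    """Stand-in focus metric: highest when the assumed speed matches the true one.
    In the paper this role is played by a speckle-analysis score of the
    beamformed image; here it is a synthetic peaked function plus small noise."""
    rng = np.random.default_rng(int(assumed_speed * 10) % (2 ** 32))
    return np.exp(-((assumed_speed - TRUE_SPEED) / 25.0) ** 2) + 1e-3 * rng.normal()

# Bounded 1-D search over candidate speeds for the best-focused image.
res = minimize_scalar(lambda c: -focus_quality(c), bounds=(1400.0, 1650.0),
                      method='bounded', options={'xatol': 0.5})
print(f"estimated speed: {res.x:.1f} m/s (error {res.x - TRUE_SPEED:+.1f} m/s)")
```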
Affiliation(s)
- Xiaolei Qu
- Department of Bioengineering, The University of Tokyo, Yayoi 2-11-16, Bunkyo, Tokyo, 113-8656, Japan.
26. Fast and Robust Registration Based on Gradient Orientations: Case Study Matching Intra-operative Ultrasound to Pre-operative MRI in Neurosurgery. 2012. [DOI: 10.1007/978-3-642-30618-1_13]