1. Rahmani M, Moghaddasi H, Pour-Rashidi A, Ahmadian A, Najafzadeh E, Farnia P. D2BGAN: Dual Discriminator Bayesian Generative Adversarial Network for Deformable MR-Ultrasound Registration Applied to Brain Shift Compensation. Diagnostics (Basel) 2024; 14:1319. [PMID: 39001209; PMCID: PMC11240784; DOI: 10.3390/diagnostics14131319]
Abstract
During neurosurgical procedures, the accuracy of the neuro-navigation system is affected by the brain shift phenomenon. One popular strategy is to compensate for brain shift by registering intraoperative ultrasound (iUS) with pre-operative magnetic resonance (MR) scans. This requires a satisfactory multimodal image registration method, which is challenging due to the low image quality of ultrasound and the unpredictable nature of brain deformation during surgery. In this paper, we propose an automatic unsupervised end-to-end MR-iUS registration approach named the Dual Discriminator Bayesian Generative Adversarial Network (D2BGAN). The proposed network consists of a generator and two discriminators; the generator is optimized with a Bayesian loss function to improve its performance, and a mutual information loss is added to the discriminators as a similarity measure. Extensive validation was performed on the RESECT and BITE datasets, where the mean target registration error (mTRE) of MR-iUS registration using D2BGAN was 0.75 ± 0.3 mm. D2BGAN showed a clear advantage, achieving an 85% improvement in mTRE over the initial error. Moreover, the results confirmed that the proposed Bayesian loss function, rather than a typical loss function, improved the accuracy of MR-iUS registration by 23%. The improvement in registration accuracy was further supported by the preservation of the intensity and anatomical information of the input images.
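For reference, the mean target registration error (mTRE) quoted in this and several later entries is conventionally computed over a set of corresponding validation landmarks; the following is the standard formulation, not a definition taken from this particular paper:

```latex
% Standard target registration error for landmark pair (p_i, q_i) under the
% estimated transform T, and its mean over N validation landmarks.
\mathrm{TRE}_i = \bigl\lVert T(p_i) - q_i \bigr\rVert_2,
\qquad
\mathrm{mTRE} = \frac{1}{N} \sum_{i=1}^{N} \bigl\lVert T(p_i) - q_i \bigr\rVert_2
```

Here the p_i are landmarks in the moving (MR) image and the q_i their counterparts in the fixed (iUS) image.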
Affiliation(s)
- Mahdiyeh Rahmani
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Hadis Moghaddasi
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Ahmad Pour-Rashidi
- Department of Neurosurgery, Sina Hospital, School of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 11367469111, Iran
- Alireza Ahmadian
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Ebrahim Najafzadeh
- Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran 1417466191, Iran
- Department of Molecular Imaging, Faculty of Advanced Technologies in Medicine, Iran University of Medical Sciences, Tehran 1449614535, Iran
- Parastoo Farnia
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
2. Guo H, Xu X, Song X, Xu S, Chao H, Myers J, Turkbey B, Pinto PA, Wood BJ, Yan P. Ultrasound Frame-to-Volume Registration via Deep Learning for Interventional Guidance. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2023; 70:1016-1025. [PMID: 37015418; PMCID: PMC10502768; DOI: 10.1109/tuffc.2022.3229903]
Abstract
Fusing intraoperative 2-D ultrasound (US) frames with preoperative 3-D magnetic resonance (MR) images for guiding interventions has become the clinical gold standard in image-guided prostate cancer biopsy. However, developing an automatic image registration system for this application is challenging because of the modality gap between US/MR and the dimensionality gap between 2-D/3-D data. To overcome these challenges, we propose a novel US frame-to-volume registration (FVReg) pipeline to bridge the dimensionality gap between 2-D US frames and 3-D US volumes. The pipeline is implemented using deep neural networks and is fully automatic, without requiring external tracking devices. The framework consists of three major components: 1) a frame-to-frame registration network (Frame2Frame) that estimates the current frame's 3-D spatial position based on previous video context, 2) a frame-to-slice correction network (Frame2Slice) that adjusts the estimated frame position using the 3-D US volumetric information, and 3) a similarity filtering (SF) mechanism that selects the frame with the highest image similarity to the query frame. We validated our method on a clinical dataset with 618 subjects and tested its potential on real-time 2-D-US to 3-D-MR fusion navigation tasks. The proposed FVReg achieved an average target navigation error of 1.93 mm at 5-14 fps. Our source code is publicly available at https://github.com/DIAL-RPI/Frame-to-Volume-Registration.
3. Mazzucchi E, Hiepe P, Langhof M, La Rocca G, Pignotti F, Rinaldi P, Sabatino G. Automatic rigid image fusion of preoperative MR and intraoperative US acquired after craniotomy. Cancer Imaging 2023; 23:37. [PMID: 37055790; PMCID: PMC10099637; DOI: 10.1186/s40644-023-00554-x]
Abstract
BACKGROUND Neuronavigation based on preoperative MRI is limited by several sources of error. Intraoperative ultrasound (iUS) with navigated probes that provide automatic superposition of pre-operative MRI and iUS and three-dimensional iUS reconstruction may overcome some of these limitations. The aim of the present study is to verify the accuracy of an automatic MRI-iUS fusion algorithm for improving MR-based neuronavigation accuracy. METHODS An algorithm using a Linear Correlation of Linear Combination (LC2)-based similarity metric was retrospectively evaluated on twelve datasets acquired in patients with brain tumors. A series of landmarks were defined in both the MRI and iUS scans. The Target Registration Error (TRE) was determined for each pair of landmarks before and after automatic Rigid Image Fusion (RIF). The algorithm was tested under two conditions of initial image alignment: registration-based fusion (RBF), as given by the navigated ultrasound probe, and different simulated coarse alignments in a convergence test. RESULTS With RBF as the initial alignment, RIF was successfully applied in all patients except one. Mean TRE was significantly reduced from 4.03 (± 1.40) mm after RBF to 2.08 (± 0.96) mm after RIF (p = 0.002). In the convergence test, the mean TRE after the initial perturbations was 8.82 (± 0.23) mm, which was reduced to 2.64 (± 1.20) mm after RIF (p < 0.001). CONCLUSIONS The integration of an automatic image fusion method for co-registration of pre-operative MRI and iUS data may improve the accuracy of MR-based neuronavigation.
Affiliation(s)
- Edoardo Mazzucchi
- Unit of Neurosurgery, Mater Olbia Hospital, Olbia, Italy.
- Institute of Neurosurgery, IRCCS Fondazione Policlinico Universitario Agostino Gemelli, Catholic University, Rome, Italy.
- Giuseppe La Rocca
- Unit of Neurosurgery, Mater Olbia Hospital, Olbia, Italy
- Institute of Neurosurgery, IRCCS Fondazione Policlinico Universitario Agostino Gemelli, Catholic University, Rome, Italy
- Fabrizio Pignotti
- Unit of Neurosurgery, Mater Olbia Hospital, Olbia, Italy
- Institute of Neurosurgery, IRCCS Fondazione Policlinico Universitario Agostino Gemelli, Catholic University, Rome, Italy
- Giovanni Sabatino
- Unit of Neurosurgery, Mater Olbia Hospital, Olbia, Italy
- Institute of Neurosurgery, IRCCS Fondazione Policlinico Universitario Agostino Gemelli, Catholic University, Rome, Italy
4. Automatic 3D MRI-Ultrasound Registration for Image Guided Arthroscopy. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12115488]
Abstract
Registration of partial-view intra-operative ultrasound (US) to pre-operative MRI is an essential step in image-guided minimally invasive surgery. In this paper, we present an automatic, landmark-free 3D multimodal registration of pre-operative MRI to 4D US (high-refresh-rate 3D US) for enabling guidance in knee arthroscopy. We focus on the problem of initializing registration in the case of partial views. The proposed method uses automatically segmented structures from both modalities in a pre-initialization step to achieve a global geometric initialization. This is followed by computing distance maps of the resulting segmentations for registration in the distance space. The final local refinement between the MRI and US volumes is then achieved using the LC2 (Linear Correlation of Linear Combination) metric. The method is evaluated on 11 cases spanning six subjects, with four levels of knee flexion. A best-case error of 1.41 mm and 2.34° and an average registration error of 3.45 mm and 7.76° are achieved in translation and rotation, respectively. An inter-observer variability study is performed, and a mean difference of 4.41 mm and 7.77° is reported. The errors obtained with the developed registration algorithm are comparable to the inter-observer differences. We have shown that the proposed algorithm is simple and robust and allows for the automatic global registration of 3D US and MRI, which can enable US-based image guidance in minimally invasive procedures.
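Entries 3, 4 and 13 all rely on the LC2 (Linear Correlation of Linear Combination) similarity. As a reminder of the underlying idea, following the commonly cited formulation of Wein et al. (the exact variant used in these papers may differ), the ultrasound intensity U in a local patch Ω is explained by a linear combination of the MRI intensity M and its gradient magnitude, and the metric measures the fraction of ultrasound variance explained by that fit:

```latex
% Local LC2 on a patch \Omega: least-squares fit of the US intensities by the
% MRI intensity and gradient magnitude, then the explained fraction of variance.
(\hat{\alpha},\hat{\beta},\hat{\gamma})
  = \arg\min_{\alpha,\beta,\gamma}
    \sum_{x \in \Omega} \bigl( U(x) - \alpha M(x) - \beta \lvert \nabla M(x) \rvert - \gamma \bigr)^{2},
\qquad
\mathrm{LC}^{2}(\Omega)
  = 1 - \frac{\sum_{x \in \Omega} \bigl( U(x) - \hat{\alpha} M(x) - \hat{\beta} \lvert \nabla M(x) \rvert - \hat{\gamma} \bigr)^{2}}
             {\lvert \Omega \rvert \, \operatorname{Var}\!\bigl(U|_{\Omega}\bigr)}
```

The global similarity used for optimization is then typically a weighted average of these patch-wise values.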
5. Farnia P, Makkiabadi B, Alimohamadi M, Najafzadeh E, Basij M, Yan Y, Mehrmohammadi M, Ahmadian A. Photoacoustic-MR Image Registration Based on a Co-Sparse Analysis Model to Compensate for Brain Shift. Sensors (Basel) 2022; 22:2399. [PMID: 35336570; PMCID: PMC8954240; DOI: 10.3390/s22062399]
Abstract
Brain shift is an important obstacle to the application of image guidance during neurosurgical interventions. There has been growing interest in intra-operative imaging to update image-guided surgery systems. However, due to the innate limitations of current imaging modalities, accurate brain shift compensation remains a challenging task. In this study, the application of intra-operative photoacoustic imaging and the registration of intra-operative photoacoustic images with pre-operative MR images are proposed to compensate for brain deformation. Finding a satisfactory registration method is challenging due to the unpredictable nature of brain deformation. Here, a co-sparse analysis model is proposed for photoacoustic-MR image registration, which can capture the interdependency of the two modalities. The proposed algorithm works by minimizing the mapping transform via a pair of analysis operators that are learned by the alternating direction method of multipliers. The method was evaluated using an experimental phantom and ex vivo data obtained from a mouse brain. The results on the phantom data show about a 63% improvement in target registration error in comparison with the commonly used normalized mutual information method. The results show that intra-operative photoacoustic images could become a promising tool when brain shift invalidates the pre-operative MRI.
Affiliation(s)
- Parastoo Farnia
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran; (P.F.); (B.M.); (E.N.)
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Bahador Makkiabadi
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran; (P.F.); (B.M.); (E.N.)
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Maysam Alimohamadi
- Brain and Spinal Cord Injury Research Center, Neuroscience Institute, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran;
- Ebrahim Najafzadeh
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran; (P.F.); (B.M.); (E.N.)
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Maryam Basij
- Department of Biomedical Engineering, Wayne State University, Detroit, MI 48201, USA; (M.B.); (Y.Y.)
- Yan Yan
- Department of Biomedical Engineering, Wayne State University, Detroit, MI 48201, USA; (M.B.); (Y.Y.)
- Mohammad Mehrmohammadi
- Department of Biomedical Engineering, Wayne State University, Detroit, MI 48201, USA; (M.B.); (Y.Y.)
- Barbara Ann Karmanos Cancer Institute, Detroit, MI 48201, USA
- Correspondence: (M.M.); (A.A.)
- Alireza Ahmadian
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran; (P.F.); (B.M.); (E.N.)
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Correspondence: (M.M.); (A.A.)
6. Farnia P, Mohammadi M, Najafzadeh E, Alimohamadi M, Makkiabadi B, Ahmadian A. High-quality photoacoustic image reconstruction based on deep convolutional neural network: towards intra-operative photoacoustic imaging. Biomed Phys Eng Express 2020; 6:045019. [PMID: 33444279; DOI: 10.1088/2057-1976/ab9a10]
Abstract
The use of intra-operative imaging to provide more accurate localization of complicated structures has become a necessity during neurosurgery. However, due to the limitations of conventional imaging systems, high-quality real-time intra-operative imaging remains a challenging problem. Meanwhile, photoacoustic imaging has shown great promise for imaging crucial structures such as blood vessels and the microvasculature of tumors. To achieve high-quality photoacoustic images of vessels despite the artifacts caused by incomplete data, we propose an approach based on the combination of time-reversal (TR) and deep learning methods. The proposed method applies a TR method in the first layer of the network, followed by a convolutional neural network whose weights are adjusted on a set of simulated training data, to estimate artifact-free photoacoustic images. It was evaluated using a synthetic database of vessels. The mean signal-to-noise ratio (SNR), peak SNR, structural similarity index, and edge preservation index for the test data reached 14.6 dB, 35.3 dB, 0.97, and 0.90, respectively. As our results show, by using a lower number of detectors and consequently a lower data acquisition time, our approach outperforms the TR algorithm on all criteria in a computational time compatible with clinical use.
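The reconstruction pipeline described above (a time-reversal estimate refined by a convolutional network trained on simulated data) can be sketched as follows. This is a minimal, generic post-processing CNN in PyTorch; the layer count, widths, residual formulation and training loop are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class TRArtifactRemover(nn.Module):
    """Minimal post-processing CNN: takes a time-reversal (TR) reconstruction
    (1-channel 2D image) and predicts an artifact-free image. Layer count and
    widths are illustrative, not taken from the paper."""
    def __init__(self, channels: int = 32, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, kernel_size=3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, tr_image: torch.Tensor) -> torch.Tensor:
        # Residual formulation: the network estimates the artifact component.
        return tr_image - self.net(tr_image)

# Training-step sketch: pairs of (TR reconstruction, artifact-free ground truth)
# would come from simulated vessel phantoms, as in the abstract.
model = TRArtifactRemover()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
tr_batch = torch.rand(4, 1, 128, 128)       # placeholder TR reconstructions
target_batch = torch.rand(4, 1, 128, 128)   # placeholder ground-truth images
loss = loss_fn(model(tr_batch), target_batch)
loss.backward()
optimizer.step()
```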
Affiliation(s)
- Parastoo Farnia
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran, Iran. Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
7. Xiao Y, Rivaz H, Chabanas M, Fortin M, Machado I, Ou Y, Heinrich MP, Schnabel JA. Evaluation of MRI to Ultrasound Registration Methods for Brain Shift Correction: The CuRIOUS2018 Challenge. IEEE Transactions on Medical Imaging 2020; 39:777-786. [PMID: 31425023; PMCID: PMC7611407; DOI: 10.1109/tmi.2019.2935060]
Abstract
In brain tumor surgery, the quality and safety of the procedure can be impacted by intra-operative tissue deformation, called brain shift. Brain shift can move the surgical targets and other vital structures such as blood vessels, thus invalidating the pre-surgical plan. Intra-operative ultrasound (iUS) is a convenient and cost-effective imaging tool to track brain shift and tumor resection. Accurate image registration techniques that update pre-surgical MRI based on iUS are crucial but challenging. The MICCAI Challenge 2018 for Correction of Brain shift with Intra-Operative UltraSound (CuRIOUS2018) provided a public platform to benchmark MRI-iUS registration algorithms on newly released clinical datasets. In this work, we present the data, setup, evaluation, and results of CuRIOUS 2018, which received 6 fully automated algorithms from leading academic and industrial research groups. All algorithms were first trained with the public RESECT database, and then ranked based on a test dataset of 10 additional cases with identical data curation and annotation protocols as the RESECT database. The article compares the results of all participating teams and discusses the insights gained from the challenge, as well as future work.
Affiliation(s)
- Yiming Xiao
- the Robarts Research Institute, Western University, London, ON N6A 5B7, Canada
- Hassan Rivaz
- the PERFORM Centre, Concordia University, Montreal, QC H3G 1M8, Canada, and also with the Department of Electrical and Computer Engineering, Concordia University, Montreal, QC H3G 1M8, Canada
- Matthieu Chabanas
- the School of Computer Science and Applied Mathematics, Grenoble Institute of Technology, 38031 Grenoble, France, and also with the TIMC-IMAG Laboratory, University of Grenoble Alpes, 38400 Grenoble, France
- Maryse Fortin
- the PERFORM Centre, Concordia University, Montreal, QC H3G 1M8, Canada, and also with the Department of Health, Kinesiology and Applied Physiology, Concordia University, Montreal, QC H3G 1M8, Canada
- Ines Machado
- the Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA 02115 USA
- Yangming Ou
- the Department of Pediatrics and Radiology, Boston Children’s Hospital, Harvard Medical School, Boston, MA 02115 USA
- Mattias P. Heinrich
- the Institute of Medical Informatics, University of Lübeck, 23538 Lübeck, Germany
- Julia A. Schnabel
- the School of Biomedical Engineering and Imaging Sciences, King’s College London, London WC2R 2LS, U.K
8. Ahmadi SA, Bötzel K, Levin J, Maiostre J, Klein T, Wein W, Rozanski V, Dietrich O, Ertl-Wagner B, Navab N, Plate A. Analyzing the co-localization of substantia nigra hyper-echogenicities and iron accumulation in Parkinson's disease: A multi-modal atlas study with transcranial ultrasound and MRI. NeuroImage: Clinical 2020; 26:102185. [PMID: 32050136; PMCID: PMC7013333; DOI: 10.1016/j.nicl.2020.102185]
Abstract
Highlights
- Volumetric 3D analysis of hyper-echogenicities from transcranial ultrasound (TCS).
- First multi-modal analysis of TCS and QSM-MRI in Parkinson's disease.
- Computations of TCS-MRI registration and a novel multi-modal anatomical template.
- TCS hyper-echogenicities are co-localized with QSM iron accumulations.
- Co-localizations occur in the SNc and VTA, but nowhere else in the midbrain.
Background: Transcranial B-mode sonography (TCS) can detect hyperechogenic speckles in the area of the substantia nigra (SN) in Parkinson's disease (PD). These speckles correlate with iron accumulation in the SN tissue, but an exact volumetric localization in and around the SN is still unknown. Areas of increased iron content in brain tissue can be detected in vivo with magnetic resonance imaging, using quantitative susceptibility mapping (QSM).
Methods: In this work, we i) acquire, co-register and transform TCS and QSM imaging from a cohort of 23 PD patients and 27 healthy control subjects into a normalized atlas template space and ii) analyze and compare the 3D spatial distributions of iron accumulation in the midbrain, as detected by a signal increase (TCS+ and QSM+) in both modalities.
Results: We achieved sufficiently accurate intra-modal target registration errors (TRE < 1 mm) for all MRI volumes and multi-modal TCS-MRI co-localization (TRE < 4 mm) for 66.7% of TCS scans. In the caudal part of the midbrain, enlarged TCS+ and QSM+ areas were located within the SN pars compacta in PD patients in comparison to healthy controls. More cranially, overlapping TCS+ and QSM+ areas in PD subjects were found in the area of the ventral tegmental area (VTA).
Conclusion: Our findings are concordant with several QSM-based studies on iron-related alterations in the area of the SN pars compacta. They substantiate that TCS+ is an indicator of iron accumulation in Parkinson's disease within and in the vicinity of the SN. Furthermore, they are in favor of an involvement of the VTA, and thereby the mesolimbic system, in Parkinson's disease.
Affiliation(s)
- Seyed-Ahmad Ahmadi
- Department of Neurology, Ludwig-Maximilians University, Marchioninistraße 15, Munich 81377, Germany; German Center for Vertigo and Balance Disorders (DSGZ), Ludwig-Maximilians University, Marchioninistraße 15, Munich 81377, Germany; Chair for Computer Aided Medical Procedures (CAMP), Technical University of Munich, Boltzmannstr. 3, Garching 85748, Germany
- Kai Bötzel
- Department of Neurology, Ludwig-Maximilians University, Marchioninistraße 15, Munich 81377, Germany
- Johannes Levin
- Department of Neurology, Ludwig-Maximilians University, Marchioninistraße 15, Munich 81377, Germany
- Juliana Maiostre
- Department of Neurology, Ludwig-Maximilians University, Marchioninistraße 15, Munich 81377, Germany
- Wolfgang Wein
- ImFusion GmbH, Agnes-Pockels-Bogen 1, München 80992, Germany
- Olaf Dietrich
- Department of Radiology, Ludwig-Maximilians University, Marchioninistr. 15, Munich 81377, Germany
- Birgit Ertl-Wagner
- Department of Radiology, Ludwig-Maximilians University, Marchioninistr. 15, Munich 81377, Germany; The Hospital for Sick Children, 555 University Avenue, Toronto, Ontario M5G 1X8, Canada
- Nassir Navab
- Chair for Computer Aided Medical Procedures (CAMP), Technical University of Munich, Boltzmannstr. 3, Garching 85748, Germany
- Annika Plate
- Department of Neurology, Ludwig-Maximilians University, Marchioninistraße 15, Munich 81377, Germany.
9. Machado I, Toews M, George E, Unadkat P, Essayed W, Luo J, Teodoro P, Carvalho H, Martins J, Golland P, Pieper S, Frisken S, Golby A, Wells III W, Ou Y. Deformable MRI-Ultrasound registration using correlation-based attribute matching for brain shift correction: Accuracy and generality in multi-site data. Neuroimage 2019; 202:116094. [PMID: 31446127; PMCID: PMC6819249; DOI: 10.1016/j.neuroimage.2019.116094]
Abstract
Intraoperative tissue deformation, known as brain shift, decreases the benefit of using preoperative images to guide neurosurgery. Non-rigid registration of preoperative magnetic resonance (MR) to intraoperative ultrasound (iUS) has been proposed as a means to compensate for brain shift. We focus on the initial registration from MR to predurotomy iUS. We present a method that builds on previous work to address the need for accuracy and generality of MR-iUS registration algorithms in multi-site clinical data. High-dimensional texture attributes were used instead of image intensities for image registration and the standard difference-based attribute matching was replaced with correlation-based attribute matching. A strategy that deals explicitly with the large field-of-view mismatch between MR and iUS images was proposed. Key parameters were optimized across independent MR-iUS brain tumor datasets acquired at 3 institutions, with a total of 43 tumor patients and 758 reference landmarks for evaluating the accuracy of the proposed algorithm. Despite differences in imaging protocols, patient demographics and landmark distributions, the algorithm is able to reduce landmark errors prior to registration in three data sets (5.37±4.27, 4.18±1.97 and 6.18±3.38 mm, respectively) to a consistently low level (2.28±0.71, 2.08±0.37 and 2.24±0.78 mm, respectively). This algorithm was tested against 15 other algorithms and it is competitive with the state-of-the-art on multiple datasets. We show that the algorithm has one of the lowest errors in all datasets (accuracy), and this is achieved while sticking to a fixed set of parameters for multi-site data (generality). In contrast, other algorithms/tools of similar performance need per-dataset parameter tuning (high accuracy but lower generality), and those that stick to fixed parameters have larger errors or inconsistent performance (generality but not the top accuracy). Landmark errors were further characterized according to brain regions and tumor types, a topic so far missing in the literature.
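The key change described above, replacing difference-based attribute matching with correlation-based matching, can be illustrated with a small NumPy sketch (generic, not the authors' implementation): for two attribute vectors extracted at candidate corresponding points, the difference-based cost is a sum of squared differences, whereas the correlation-based cost is insensitive to an affine rescaling of the attributes across modalities.

```python
import numpy as np

def ssd_cost(a: np.ndarray, b: np.ndarray) -> float:
    """Difference-based matching cost (sum of squared differences)."""
    return float(np.sum((a - b) ** 2))

def correlation_cost(a: np.ndarray, b: np.ndarray) -> float:
    """Correlation-based matching cost: 1 - Pearson correlation of the
    attribute vectors, so a perfectly (linearly) related pair costs ~0."""
    a0 = a - a.mean()
    b0 = b - b.mean()
    denom = np.linalg.norm(a0) * np.linalg.norm(b0) + 1e-12
    return float(1.0 - np.dot(a0, b0) / denom)

# Two texture-attribute vectors that differ only by an affine intensity change:
rng = np.random.default_rng(0)
attr_mr = rng.random(64)
attr_us = 2.5 * attr_mr + 0.3   # same structure, different scale/offset

print(ssd_cost(attr_mr, attr_us))          # large: penalises the rescaling
print(correlation_cost(attr_mr, attr_us))  # ~0: the structure matches
```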
Affiliation(s)
- Inês Machado
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Mechanical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal.
- Matthew Toews
- Department of Systems Engineering, École de Technologie Supérieure, Montreal, Canada
- Elizabeth George
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Prashin Unadkat
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Walid Essayed
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Jie Luo
- Graduate School of Frontier Sciences, University of Tokyo, Tokyo, Japan
- Pedro Teodoro
- Escola Superior Náutica Infante D. Henrique, Lisbon, Portugal
- Herculano Carvalho
- Department of Neurosurgery, Hospital de Santa Maria, CHLN, Lisbon, Portugal
- Jorge Martins
- Department of Mechanical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Steve Pieper
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Isomics, Inc., Cambridge, MA, USA
- Sarah Frisken
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Alexandra Golby
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- William Wells III
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Yangming Ou
- Department of Pediatrics and Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.
10. Frisken S, Luo M, Juvekar P, Bunevicius A, Machado I, Unadkat P, Bertotti MM, Toews M, Wells WM, Miga MI, Golby AJ. A comparison of thin-plate spline deformation and finite element modeling to compensate for brain shift during tumor resection. Int J Comput Assist Radiol Surg 2019; 15:75-85. [PMID: 31444624; DOI: 10.1007/s11548-019-02057-2]
Abstract
PURPOSE Brain shift during tumor resection can progressively invalidate the accuracy of neuronavigation systems and affect neurosurgeons' ability to achieve optimal resections. This paper compares two methods that have been presented in the literature to compensate for brain shift: a thin-plate spline deformation model and a finite element method (FEM). For this comparison, both methods are driven by identical sparse data. Specifically, both methods are driven by displacements between automatically detected and matched feature points from intraoperative 3D ultrasound (iUS). Both methods have been shown to be fast enough for intraoperative brain shift correction (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018; Luo et al. in J Med Imaging (Bellingham) 4(3):035003, 2017). However, the spline method requires no preprocessing and ignores physical properties of the brain while the FEM method requires significant preprocessing and incorporates patient-specific physical and geometric constraints. The goal of this work was to explore the relative merits of these methods on recent clinical data. METHODS Data acquired during 19 sequential tumor resections in Brigham and Women's Hospital's Advanced Multi-modal Image-Guided Operating Suite between December 2017 and October 2018 were considered for this retrospective study. Of these, 15 cases and a total of 24 iUS to iUS image pairs met inclusion requirements. Automatic feature detection (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018) was used to detect and match features in each pair of iUS images. Displacements between matched features were then used to drive both the spline model and the FEM method to compensate for brain shift between image acquisitions. The accuracies of the resultant deformation models were measured by comparing the displacements of manually identified landmarks before and after deformation. RESULTS The mean initial subcortical registration error between preoperative MRI and the first iUS image averaged 5.3 ± 0.75 mm. The mean subcortical brain shift, measured using displacements between manually identified landmarks in pairs of iUS images, was 2.5 ± 1.3 mm. Our results showed that FEM was able to reduce subcortical registration error by a small but statistically significant amount (from 2.46 to 2.02 mm). A large variability in the results of the spline method prevented us from demonstrating either a statistically significant reduction in subcortical registration error after applying the spline method or a statistically significant difference between the results of the two methods. CONCLUSIONS In this study, we observed less subcortical brain shift than has previously been reported in the literature (Frisken et al., in: Miller (ed) Biomechanics of the brain, Springer, Cham, 2019). This may be due to the fact that we separated out the initial misregistration between preoperative MRI and the first iUS image from our brain shift measurements or it may be due to modern neurosurgical practices designed to reduce brain shift, including reduced craniotomy sizes and better control of intracranial pressure with the use of mannitol and other medications. It appears that the FEM method and its use of geometric and biomechanical constraints provided more consistent brain shift correction and better correction farther from the driving feature displacements than the simple spline model. The spline-based method was simpler and tended to give better results for small deformations. 
However, large variability in the spline results and relatively small brain shift prevented this study from demonstrating a statistically significant difference between the results of the two methods.
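As a sketch of the spline side of the comparison above, a thin-plate-spline displacement model driven by matched feature displacements can be set up directly with SciPy. The feature coordinates and displacements below are synthetic placeholders, and this is a generic TPS interpolation under those assumptions, not the exact model used in the study.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Matched iUS feature points (mm) and their observed displacements (mm).
rng = np.random.default_rng(42)
feature_points = rng.uniform(0.0, 50.0, size=(30, 3))          # synthetic
feature_displacements = rng.normal(0.0, 1.5, size=(30, 3))     # synthetic

# Thin-plate-spline interpolation of the sparse displacement field.
# A smoothing value > 0 would trade exact interpolation for regularity.
tps = RBFInterpolator(feature_points, feature_displacements,
                      kernel="thin_plate_spline", smoothing=0.0)

# Warp arbitrary target landmarks with the interpolated displacement field.
landmarks = rng.uniform(0.0, 50.0, size=(10, 3))
warped_landmarks = landmarks + tps(landmarks)
print(warped_landmarks.shape)  # (10, 3)
```

Unlike the FEM approach, nothing here constrains the interpolated field with brain geometry or tissue mechanics, which matches the trade-off the abstract describes.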
Affiliation(s)
- Sarah Frisken
- Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA.
- Ma Luo
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Parikshit Juvekar
- Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
- Adomas Bunevicius
- Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
- Ines Machado
- Instituto Superior Tecnico, Universidade de Lisboa, Lisbon, Portugal
- Prashin Unadkat
- Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
- Melina M Bertotti
- Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
- Matt Toews
- Département de Génie des Systems, Ecole de Technologie Superieure, Montreal, Canada
- William M Wells
- Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Michael I Miga
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA; Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Institute for Surgery and Engineering, Vanderbilt University, Nashville, TN, USA
- Alexandra J Golby
- Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA; Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
11. Automatic and efficient MRI-US segmentations for improving intraoperative image fusion in image-guided neurosurgery. NeuroImage: Clinical 2019; 22:101766. [PMID: 30901714; PMCID: PMC6425116; DOI: 10.1016/j.nicl.2019.101766]
Abstract
Knowledge of the exact tumor location and the structures at risk in its vicinity is crucial for neurosurgical interventions. Neuronavigation systems support navigation within the patient's brain based on preoperative MRI (preMRI). However, increasing tissue deformation during the course of tumor resection reduces the accuracy of navigation based on preMRI. Intraoperative ultrasound (iUS) is therefore used as real-time intraoperative imaging. Registration of preMRI and iUS remains a challenge due to different or varying contrasts in iUS and preMRI. Here, we present an automatic and efficient segmentation of B-mode US images to support the registration process. The falx cerebri and the tentorium cerebelli were identified as examples of central cerebral structures whose segmentations can serve as a guiding frame for multi-modal image registration. Segmentations of the falx and tentorium were performed with an average Dice coefficient of 0.74 and an average Hausdorff distance of 12.2 mm. The subsequent registration incorporates these segmentations and increases the accuracy, robustness, and speed of the overall registration process compared to purely intensity-based registration. For validation, an expert manually located corresponding landmarks. Our approach reduces the initial mean Target Registration Error (TRE) from 16.9 mm to 3.8 mm using our intensity-based registration and to 2.2 mm with our combined segmentation and registration approach. The intensity-based registration reduced the maximum initial TRE from 19.4 mm to 5.6 mm; with the approach incorporating segmentations, it is reduced to 3.0 mm. Mean volumetric intensity-based registration of preMRI and iUS took 40.5 s, and the registration incorporating segmentations took 12.0 s. We demonstrate that our segmentation-based registration increases the accuracy, robustness, and speed of multi-modal image registration of preoperative MRI and intraoperative ultrasound images for improving intraoperative image-guided neurosurgery. To this end, we provide a fast and efficient segmentation of central anatomical structures of the perifalcine region in ultrasound images. We demonstrate the advantages of our method by comparing the results of our segmentation-based registration with the initial registration provided by the navigation system and with an intensity-based registration approach.
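The segmentation quality figures quoted above (Dice coefficient and Hausdorff distance) correspond to their standard definitions; a minimal NumPy/SciPy sketch for binary masks follows (illustrative, not the authors' evaluation code, and reported in voxel units rather than mm).

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap of two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + 1e-12)

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground voxels of two masks,
    in voxel units (multiply by the voxel spacing to obtain mm)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Toy example: two overlapping 2D masks.
seg_auto = np.zeros((64, 64), dtype=bool); seg_auto[10:40, 10:40] = True
seg_manual = np.zeros((64, 64), dtype=bool); seg_manual[15:45, 12:42] = True
print(dice_coefficient(seg_auto, seg_manual))
print(hausdorff_distance(seg_auto, seg_manual))
```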
12. Xiao Y, Eikenes L, Reinertsen I, Rivaz H. Nonlinear deformation of tractography in ultrasound-guided low-grade gliomas resection. Int J Comput Assist Radiol Surg 2018; 13:457-467. [DOI: 10.1007/s11548-017-1699-x]
13. Geometric modeling of hepatic arteries in 3D ultrasound with unsupervised MRA fusion during liver interventions. Int J Comput Assist Radiol Surg 2017; 12:961-972. [PMID: 28271356; DOI: 10.1007/s11548-017-1550-4]
Abstract
PURPOSE Modulating the chemotherapy injection rate with regard to blood flow velocities in the tumor-feeding arteries during intra-arterial therapies may help improve liver tumor targeting while decreasing systemic exposure. These velocities can be obtained noninvasively using Doppler ultrasound (US). However, small vessels situated in the liver are difficult to identify and follow in US. We propose a multimodal fusion approach that non-rigidly registers a 3D geometric mesh model of the hepatic arteries obtained from preoperative MR angiography (MRA) acquisitions with intra-operative 3D US imaging. METHODS The proposed fusion tool integrates 3 imaging modalities: an arterial MRA, a portal phase MRA and an intra-operative 3D US. Preoperatively, the arterial phase MRA is used to generate a 3D model of the hepatic arteries, which is then non-rigidly co-registered with the portal phase MRA. Once the intra-operative 3D US is acquired, we register it with the portal MRA using a vessel-based rigid initialization followed by a non-rigid registration using an image-based metric based on linear correlation of linear combination. Using the combined non-rigid transformation matrices, the 3D mesh model is fused with the 3D US. RESULTS 3D US and multi-phase MRA images acquired from 10 porcine models were used to test the performance of the proposed fusion tool. Unimodal registration of the MRA phases yielded a target registration error (TRE) of [Formula: see text] mm. Initial rigid alignment of the portal MRA and 3D US yielded a mean TRE of [Formula: see text] mm, which was significantly reduced to [Formula: see text] mm ([Formula: see text]) after affine image-based registration. The following deformable registration step allowed for further decrease of the mean TRE to [Formula: see text] mm. CONCLUSION The proposed tool could facilitate visualization and localization of these vessels when using 3D US intra-operatively for either intravascular or percutaneous interventions to avoid vessel perforation.
14. US/MRI fusion with new optical tracking and marker approach for interventional procedures inside the MRI suite. Current Directions in Biomedical Engineering 2016. [DOI: 10.1515/cdbme-2016-0101]
Abstract
Interventional MRI in closed-bore high-field systems is challenging due to limited space and the need for dedicated MRI-compatible equipment and tools. A possible solution is to perform an ultrasound procedure for guidance of the therapy tools outside the bore, but still on the MRI patient bed, and thereby track and subsequently combine the superior images of MRI with the real-time capability of ultrasound. Conventional optical tracking systems suffer from line-of-sight issues, and electromagnetic tracking does not perform well in the presence of magnetic fields. Hence, to overcome these issues, a new optical tracking approach called inside-out tracking is used. In this approach, the camera is directly attached to the US probe and the markers are placed on the patient to obtain the location of the US slice. Our novel system of framed fusion markers can easily be adapted to various imaging modalities without losing image registration. To confirm this, phantom studies with MRI and US imaging were carried out using a point-registration algorithm along with a similarity measure for fusion. With the inside-out approach, image registration was found to yield an accuracy of up to 4 mm, depending on the imaging modality and the employed marker arrangement, and thereby provides an accuracy that cannot easily be achieved by combining pre-operative MRI with live ultrasound.
15. Drouin S, Kochanowska A, Kersten-Oertel M, Gerard IJ, Zelmann R, De Nigris D, Bériault S, Arbel T, Sirhan D, Sadikot AF, Hall JA, Sinclair DS, Petrecca K, DelMaestro RF, Collins DL. IBIS: an OR ready open-source platform for image-guided neurosurgery. Int J Comput Assist Radiol Surg 2016; 12:363-378. [DOI: 10.1007/s11548-016-1478-0]
16. Sastry R, Bi WL, Pieper S, Frisken S, Kapur T, Wells W, Golby AJ. Applications of Ultrasound in the Resection of Brain Tumors. J Neuroimaging 2016; 27:5-15. [PMID: 27541694; DOI: 10.1111/jon.12382]
Abstract
Neurosurgery makes use of preoperative imaging to visualize pathology, inform surgical planning, and evaluate the safety of selected approaches. The utility of preoperative imaging for neuronavigation, however, is diminished by the well-characterized phenomenon of brain shift, in which the brain deforms intraoperatively as a result of craniotomy, swelling, gravity, tumor resection, cerebrospinal fluid (CSF) drainage, and many other factors. As such, there is a need for updated intraoperative information that accurately reflects intraoperative conditions. Since 1982, intraoperative ultrasound has allowed neurosurgeons to craft and update operative plans without ionizing radiation exposure or major workflow interruption. Continued evolution of ultrasound technology since its introduction has resulted in superior imaging quality, smaller probes, and more seamless integration with neuronavigation systems. Furthermore, the introduction of related imaging modalities, such as 3-dimensional ultrasound, contrast-enhanced ultrasound, high-frequency ultrasound, and ultrasound elastography, has dramatically expanded the options available to the neurosurgeon intraoperatively. In the context of these advances, we review the current state, potential, and challenges of intraoperative ultrasound for brain tumor resection. We begin by evaluating these ultrasound technologies and their relative advantages and disadvantages. We then review three specific applications of these ultrasound technologies to brain tumor resection: (1) intraoperative navigation, (2) assessment of extent of resection, and (3) brain shift monitoring and compensation. We conclude by identifying opportunities for future directions in the development of ultrasound technologies.
Affiliation(s)
- Rahul Sastry
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Wenya Linda Bi
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Sarah Frisken
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Tina Kapur
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- William Wells
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Alexandra J Golby
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA; Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
17. Farnia P, Makkiabadi B, Ahmadian A, Alirezaie J. Curvelet based residual complexity objective function for non-rigid registration of pre-operative MRI with intra-operative ultrasound images. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2016:1167-1170. [PMID: 28268533; DOI: 10.1109/embc.2016.7590912]
Abstract
Intra-operative ultrasound, as an imaging-based method, has been recognized in recent years as an effective solution to compensate for the non-rigid brain shift problem. Measuring brain shift requires registration of the pre-operative MRI images with the intra-operative ultrasound images, which is a challenging task. In this study, a novel hybrid method is proposed that matches echogenic structures, such as sulci and the tumor boundary, between MRI and ultrasound images. The matching of echogenic structures is achieved by optimizing the Residual Complexity (RC) in the curvelet domain. In the first step, a probabilistic map of the MR image is computed and the residual image is obtained as the difference between this probabilistic map and the intra-operative ultrasound. Then the curvelet transform, as a sparsifying transform, is used to minimize the complexity of the residual image. The proposed method is a compromise between feature-based and intensity-based approaches. Evaluation was performed using a dataset of 14 patients, and the mean registration error reached 1.87 mm. This RC-based hybrid method improves the accuracy of non-rigid multimodal image registration by 12.5% in a computational time compatible with clinical use.
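The Residual Complexity objective referred to above was introduced by Myronenko and Song for intensity-based registration; in its commonly stated form (which, by this abstract's description, the authors transfer from the DCT to the curvelet domain), the residual between the images is sparsified by a transform and the complexity of its coefficients is penalized. The notation below is a standard sketch, not the exact formulation of this paper:

```latex
% Standard residual-complexity objective: r is the intensity residual under the
% transform T, Q is a sparsifying basis (DCT originally; a curvelet transform
% in the adaptation described above), and \alpha > 0 controls the penalty.
r = I_{\mathrm{US}} - \hat{I}_{\mathrm{MR}}(T), \qquad
q = Q^{\mathsf{T}} r, \qquad
E_{\mathrm{RC}}(T) = \sum_{i} \log\!\left( \frac{q_i^{2}}{\alpha} + 1 \right)
```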
18. Jiang D, Shi Y, Yao D, Wang M, Song Z. miLBP: a robust and fast modality-independent 3D LBP for multimodal deformable registration. Int J Comput Assist Radiol Surg 2016; 11:997-1005. [PMID: 27250854; PMCID: PMC4893381; DOI: 10.1007/s11548-016-1407-2]
Abstract
Purpose Computer-assisted intervention often depends on multimodal deformable registration to provide complementary information. However, multimodal deformable registration remains a challenging task. Methods This paper introduces a novel, robust and fast modality-independent 3D binary descriptor, called miLBP, which integrates the principle of local self-similarity with a form of local binary pattern and can robustly extract similar geometry features from 3D volumes across different modalities. miLBP is a bit string that can be computed by simply thresholding the voxel distance. Furthermore, the descriptor similarity can be evaluated efficiently using the Hamming distance. Results miLBP was compared to the vector-valued self-similarity context (SSC) in artificial-image and clinical settings. The results show that miLBP is more robust than SSC in extracting local geometry features across modalities and achieved higher registration accuracy in different registration scenarios. Furthermore, in the most challenging registration, between preoperative magnetic resonance imaging and intra-operative ultrasound images, our approach significantly outperforms the state-of-the-art methods in terms of both accuracy (2.15 ± 1.1 mm) and speed (29.2 s for one case). Conclusions Registration performance and speed indicate that miLBP has the potential to be applied to time-sensitive intra-operative computer-assisted interventions.
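The descriptor idea above, thresholding local self-similarity distances into a bit string and comparing descriptors with the Hamming distance, can be illustrated with a simplified 2D NumPy sketch. The neighborhood layout, patch size and threshold rule here are illustrative assumptions, not the exact miLBP construction.

```python
import numpy as np

def self_similarity_bits(image: np.ndarray, y: int, x: int,
                         radius: int = 2, patch: int = 3) -> np.ndarray:
    """Simplified modality-independent binary descriptor at (y, x): compare the
    centre patch with 8 neighbouring patches via SSD and threshold each distance
    against the mean distance (one bit per neighbour)."""
    h = patch // 2
    centre = image[y - h:y + h + 1, x - h:x + h + 1]
    offsets = [(-radius, -radius), (-radius, 0), (-radius, radius),
               (0, -radius),                     (0, radius),
               (radius, -radius), (radius, 0),   (radius, radius)]
    dists = []
    for dy, dx in offsets:
        nb = image[y + dy - h:y + dy + h + 1, x + dx - h:x + dx + h + 1]
        dists.append(np.sum((centre - nb) ** 2))
    dists = np.asarray(dists)
    return (dists < dists.mean()).astype(np.uint8)   # 8-bit descriptor

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two binary descriptors."""
    return int(np.count_nonzero(a != b))

# Same geometry, inverted contrast: self-similarity distances are preserved,
# so the binary descriptors should agree (small Hamming distance).
rng = np.random.default_rng(1)
mr = rng.random((32, 32))
us = 1.0 - mr + 0.05 * rng.random((32, 32))
d_mr = self_similarity_bits(mr, 16, 16)
d_us = self_similarity_bits(us, 16, 16)
print(d_mr, d_us, hamming(d_mr, d_us))
```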
Affiliation(s)
- Dongsheng Jiang
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China; Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
- Yonghong Shi
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China; Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
- Demin Yao
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China; Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
- Manning Wang
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China; Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
- Zhijian Song
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China; Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
19. Kojcev R, Fuerst B, Zettinig O, Fotouhi J, Lee SC, Frisch B, Taylor R, Sinibaldi E, Navab N. Dual-robot ultrasound-guided needle placement: closing the planning-imaging-action loop. Int J Comput Assist Radiol Surg 2016; 11:1173-81. [DOI: 10.1007/s11548-016-1408-1]
20. Song Y, Totz J, Thompson S, Johnsen S, Barratt D, Schneider C, Gurusamy K, Davidson B, Ourselin S, Hawkes D, Clarkson MJ. Locally rigid, vessel-based registration for laparoscopic liver surgery. Int J Comput Assist Radiol Surg 2015; 10:1951-61. [PMID: 26092658; PMCID: PMC4642598; DOI: 10.1007/s11548-015-1236-8]
Abstract
PURPOSE Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet is difficult for most lesions due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but is challenging in a soft deforming organ such as the liver. In this paper, we therefore propose a laparoscopic ultrasound (LUS) image guidance system and study the feasibility of a locally rigid registration for laparoscopic liver surgery. METHODS We developed a real-time segmentation method to extract vessel centre points from calibrated, freehand, electromagnetically tracked, 2D LUS images. Using landmark-based initial registration and an optional iterative closest point (ICP) point-to-line registration, a vessel centre-line model extracted from preoperative computed tomography (CT) is registered to the ultrasound data during surgery. RESULTS Using the locally rigid ICP method, the RMS residual error when registering to a phantom was 0.7 mm, and the mean target registration error (TRE) for two in vivo porcine studies was 3.58 and 2.99 mm, respectively. Using the locally rigid landmark-based registration method gave a mean TRE of 4.23 mm using vessel centre lines derived from CT scans taken with pneumoperitoneum and 6.57 mm without pneumoperitoneum. CONCLUSION In this paper we propose a practical image-guided surgery system based on locally rigid registration of a CT-derived model to vascular structures located with LUS. In a physical phantom and during porcine laparoscopic liver resection, we demonstrate accuracy of target location commensurate with surgical requirements. We conclude that locally rigid registration could be sufficient for practically useful image guidance in the near future.
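The landmark-based initial registration step described above is typically solved in closed form. Below is a minimal NumPy sketch of the standard SVD (Kabsch/Procrustes) solution for a rigid transform between paired vessel landmarks, offered as a generic illustration rather than the authors' code.

```python
import numpy as np

def rigid_landmark_registration(src: np.ndarray, dst: np.ndarray):
    """Closed-form least-squares rigid transform (R, t) mapping src -> dst,
    where src and dst are (N, 3) arrays of paired landmarks (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known rotation/translation from noiseless landmarks.
rng = np.random.default_rng(7)
src = rng.uniform(-30.0, 30.0, size=(8, 3))
angle = np.deg2rad(20.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -2.0, 1.5])
dst = src @ R_true.T + t_true
R_est, t_est = rigid_landmark_registration(src, dst)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

In the pipeline described above, such a closed-form alignment would only initialize the subsequent ICP point-to-line refinement against the CT-derived vessel centre lines.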
Affiliation(s)
- Yi Song
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK.
- Johannes Totz
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Steve Thompson
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Stian Johnsen
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Dean Barratt
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Crispin Schneider
- Royal Free Campus, 9th Floor, Royal Free Hospital, UCL Medical School, Rowland Hill Street, London, UK
- Kurinchi Gurusamy
- Royal Free Campus, 9th Floor, Royal Free Hospital, UCL Medical School, Rowland Hill Street, London, UK
- Brian Davidson
- Royal Free Campus, 9th Floor, Royal Free Hospital, UCL Medical School, Rowland Hill Street, London, UK
- Sébastien Ourselin
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- David Hawkes
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Matthew J Clarkson
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK.
21. Askeland C, Solberg OV, Bakeng JBL, Reinertsen I, Tangen GA, Hofstad EF, Iversen DH, Våpenstad C, Selbekk T, Langø T, Hernes TAN, Olav Leira H, Unsgård G, Lindseth F. CustusX: an open-source research platform for image-guided therapy. Int J Comput Assist Radiol Surg 2015; 11:505-19. [PMID: 26410841; PMCID: PMC4819973; DOI: 10.1007/s11548-015-1292-0]
Abstract
Purpose CustusX is an image-guided therapy (IGT) research platform dedicated to intraoperative navigation and ultrasound imaging. In this paper, we present CustusX as a robust, accurate, and extensible platform with full access to data and algorithms and show examples of application in technological and clinical IGT research. Methods CustusX has been developed continuously for more than 15 years based on requirements from clinical and technological researchers within the framework of a well-defined software quality process. The platform was designed as a layered architecture with plugins based on the CTK/OSGi framework, a superbuild that manages dependencies, and features supporting the IGT workflow. We describe the use of the system in several different clinical settings and characterize major aspects of the system such as accuracy, frame rate, and latency. Results The validation experiments show a navigation system accuracy of <1.1 mm, a frame rate of 20 fps, and latency of 285 ms for a typical setup. The current platform is extensible, user-friendly and has a streamlined architecture and quality process. CustusX has successfully been used for IGT research in neurosurgery, laparoscopic surgery, vascular surgery, and bronchoscopy. Conclusions CustusX is now a mature research platform for intraoperative navigation and ultrasound imaging and is ready for use by the IGT research community. CustusX is open-source and freely available at http://www.custusx.org.
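The plugin-based, layered architecture described in the Methods can be illustrated with a small service registry. The snippet below is a minimal, purely illustrative Python sketch using hypothetical names (PluginRegistry, us_reconstruction); CustusX itself is a C++/Qt application whose plugins are managed through the CTK/OSGi framework, so this is not its actual API.

```python
# Minimal plugin-registry sketch (illustrative only; not CustusX's real plugin system).
from typing import Callable, Dict

class PluginRegistry:
    """Maps service names to factory functions so the core application can
    discover optional features (e.g. an ultrasound reconstruction service) at runtime."""
    def __init__(self) -> None:
        self._factories: Dict[str, Callable[[], object]] = {}

    def register(self, name: str) -> Callable:
        def decorator(factory: Callable[[], object]) -> Callable[[], object]:
            self._factories[name] = factory
            return factory
        return decorator

    def create(self, name: str) -> object:
        return self._factories[name]()

registry = PluginRegistry()

@registry.register("us_reconstruction")
def make_us_reconstruction() -> object:
    class UsReconstruction:
        def run(self) -> str:
            return "reconstructed 3D US volume (placeholder)"
    return UsReconstruction()

print(registry.create("us_reconstruction").run())
```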
Collapse
Affiliation(s)
- Christian Askeland
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway
- Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
| | - Ole Vegard Solberg
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway
| | | | - Ingerid Reinertsen
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway
| | - Geir Arne Tangen
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway
| | | | - Daniel Høyer Iversen
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway
- Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
| | - Cecilie Våpenstad
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway
- Norwegian University of Science and Technology (NTNU), Trondheim, Norway
| | - Tormod Selbekk
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway
- Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
| | - Thomas Langø
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway
- Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
| | - Toril A Nagelhus Hernes
- Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
| | - Håkon Olav Leira
- Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
| | - Geirmund Unsgård
- Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
| | - Frank Lindseth
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway
- Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
| |
Collapse
|
22
|
Farnia P, Ahmadian A, Shabanian T, Serej ND, Alirezaie J. A hybrid method for non-rigid registration of intra-operative ultrasound images with pre-operative MR images. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2014:5562-5. [PMID: 25571255 DOI: 10.1109/embc.2014.6944887] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In recent years, intra-operative ultrasound images have been used in many neurosurgical procedures. The registration of intra-operative ultrasound images with pre-operative magnetic resonance (MR) images remains a challenging problem. In this study, a new hybrid method based on residual complexity (RC) is proposed for this problem. A two-stage method that matches echogenic structures such as sulci is achieved by optimizing the RC value, computed with quantized coefficients, between the ultrasound image and a probabilistic map of the MR image. The proposed method is a compromise between feature-based and intensity-based approaches. The evaluation was performed on both a brain phantom and a patient data set. The phantom results confirmed that the proposed method improves the accuracy of conventional RC by 39%. The mean fiducial registration error reached 1.45 mm and 1.94 mm when the method was applied to the phantom and clinical data sets, respectively. This hybrid RC-based method enables non-rigid multimodal image registration that is accurate as well as fast enough for clinical use.
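The residual complexity measure that this hybrid method builds on (introduced by Myronenko and Song for intensity-based registration) transforms the residual between the two images with a discrete cosine transform and penalizes it so that simple, sparse residuals score lower. The following is a rough sketch of that base metric only, under the stated assumptions; the paper's quantized-coefficient variant, probabilistic MR map, and two-stage optimization are not reproduced.

```python
import numpy as np
from scipy.fft import dctn

def residual_complexity(fixed, moving, alpha=0.05):
    """Base residual-complexity similarity: sum of log(1 + q^2/alpha) over the
    DCT coefficients q of the residual image (illustrative sketch only)."""
    r = fixed.astype(float) - moving.astype(float)   # residual image
    q = dctn(r, norm="ortho")                        # DCT coefficients of the residual
    return float(np.sum(np.log(q ** 2 / alpha + 1.0)))

# Hypothetical toy usage with random "ultrasound" and "MR probability map" patches.
rng = np.random.default_rng(1)
us_patch = rng.random((64, 64))
mr_prob_map = us_patch + 0.05 * rng.standard_normal((64, 64))    # nearly aligned
print(residual_complexity(us_patch, mr_prob_map))
print(residual_complexity(us_patch, rng.random((64, 64))))       # misaligned: larger RC
```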
Collapse
|
23
|
Abdolghaffar M, Ahmadian A, Ayoobi N, Farnia P, Shabanian T, Shafiei N, Alirezaie J. A shape based rotation invariant method for ultrasound-MR image registration: A phantom study. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2014:5566-9. [PMID: 25571256 DOI: 10.1109/embc.2014.6944888] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
In this work, a new shape-based method is proposed to improve the accuracy of brain ultrasound-MRI image registration. The method is based on a modified shape context (SC) descriptor combined with the coherent point drift (CPD) algorithm. An extensive experiment was carried out to evaluate the robustness of the method under different initialization conditions. The results show that the overall performance of the proposed algorithm exceeds that of both the SC and CPD methods. To retain control over the registration procedure, the deformations, missing points, and outliers were simulated from our phantom MRI images.
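The shape context descriptor itself is a log-polar histogram, computed at each point, of where the remaining points of the shape lie relative to it. The sketch below implements only the classic 2D descriptor as a hedged illustration; the paper's modified SC and the CPD point-set registration it is combined with are not shown.

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """Classic 2D shape context: per point, a log-polar histogram of the
    relative positions of all other points (illustrative sketch only)."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    diff = points[None, :, :] - points[:, None, :]        # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)
    angle = np.arctan2(diff[..., 1], diff[..., 0])         # angles in [-pi, pi]
    mean_d = dist[dist > 0].mean()                         # scale normalization
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_d
    descriptors = np.zeros((n, n_r, n_theta))
    for i in range(n):
        mask = np.arange(n) != i                           # exclude the point itself
        r_bin = np.digitize(dist[i, mask], r_edges) - 1
        t_bin = ((angle[i, mask] + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
        keep = (r_bin >= 0) & (r_bin < n_r)                # drop points outside radial range
        np.add.at(descriptors[i], (r_bin[keep], t_bin[keep]), 1)
    return descriptors

# Toy usage on a hypothetical contour sampled as 2D points.
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
contour = np.column_stack([np.cos(theta), np.sin(theta)])
print(shape_context(contour).shape)   # (40, 5, 12)
```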
Collapse
|
24
|
Brain-shift compensation by non-rigid registration of intra-operative ultrasound images with preoperative MR images based on residual complexity. Int J Comput Assist Radiol Surg 2014; 10:555-62. [DOI: 10.1007/s11548-014-1098-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2014] [Accepted: 06/16/2014] [Indexed: 10/25/2022]
|
25
|
Self-similarity weighted mutual information: A new nonrigid image registration metric. Med Image Anal 2014; 18:343-58. [DOI: 10.1016/j.media.2013.12.003] [Citation(s) in RCA: 79] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2013] [Revised: 10/07/2013] [Accepted: 12/07/2013] [Indexed: 11/19/2022]
|