1
Rahmani M, Moghaddasi H, Pour-Rashidi A, Ahmadian A, Najafzadeh E, Farnia P. D2BGAN: Dual Discriminator Bayesian Generative Adversarial Network for Deformable MR-Ultrasound Registration Applied to Brain Shift Compensation. Diagnostics (Basel) 2024; 14:1319. [PMID: 39001209; PMCID: PMC11240784; DOI: 10.3390/diagnostics14131319]
Abstract
During neurosurgical procedures, the accuracy of the neuro-navigation system is degraded by the brain shift phenomenon. One popular strategy is to compensate for brain shift by registering intraoperative ultrasound (iUS) with pre-operative magnetic resonance (MR) scans. This requires a satisfactory multimodal image registration method, which is challenging due to the low image quality of ultrasound and the unpredictable nature of brain deformation during surgery. In this paper, we propose an automatic unsupervised end-to-end MR-iUS registration approach named the Dual Discriminator Bayesian Generative Adversarial Network (D2BGAN). The proposed network consists of a generator and two discriminators; the generator is optimized by a Bayesian loss function, and a mutual information loss function is added to the discriminators as a similarity measure. Extensive validation was performed on the RESECT and BITE datasets, where the mean target registration error (mTRE) of MR-iUS registration using D2BGAN was 0.75 ± 0.3 mm. D2BGAN demonstrated a clear advantage, achieving an 85% improvement in mTRE over the initial error. Moreover, the results confirmed that the proposed Bayesian loss function, rather than a typical loss function, improved the accuracy of MR-iUS registration by 23%. Registration accuracy was further supported by the preservation of the intensity and anatomical information of the input images.
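The mTRE figures quoted above are the mean Euclidean distance between corresponding anatomical landmarks after registration. A minimal NumPy sketch with hypothetical landmark arrays illustrates the metric:

```python
import numpy as np

def mtre(landmarks_fixed, landmarks_warped):
    """Mean target registration error: average Euclidean distance (mm)
    between corresponding landmark pairs after registration."""
    return float(np.linalg.norm(landmarks_fixed - landmarks_warped, axis=1).mean())

# Toy example: three landmark pairs, each displaced by exactly 1 mm.
fixed = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
warped = fixed + np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(mtre(fixed, warped))  # → 1.0
```

An "85% improvement over the initial error" means the post-registration mTRE is 15% of the pre-registration value computed this way.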
Affiliation(s)
- Mahdiyeh Rahmani
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Hadis Moghaddasi
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Ahmad Pour-Rashidi
- Department of Neurosurgery, Sina Hospital, School of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 11367469111, Iran
- Alireza Ahmadian
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Ebrahim Najafzadeh
- Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran 1417466191, Iran
- Department of Molecular Imaging, Faculty of Advanced Technologies in Medicine, Iran University of Medical Sciences, Tehran 1449614535, Iran
- Parastoo Farnia
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
2
Juvekar P, Dorent R, Kögl F, Torio E, Barr C, Rigolo L, Galvin C, Jowkar N, Kazi A, Haouchine N, Cheema H, Navab N, Pieper S, Wells WM, Bi WL, Golby A, Frisken S, Kapur T. ReMIND: The Brain Resection Multimodal Imaging Database. Sci Data 2024; 11:494. [PMID: 38744868; PMCID: PMC11093985; DOI: 10.1038/s41597-024-03295-z]
Abstract
The standard of care for brain tumors is maximal safe surgical resection. Neuronavigation augments the surgeon's ability to achieve this but loses validity as surgery progresses due to brain shift. Moreover, gliomas are often indistinguishable from surrounding healthy brain tissue. Intraoperative magnetic resonance imaging (iMRI) and ultrasound (iUS) help visualize the tumor and brain shift. iUS is faster and easier to incorporate into surgical workflows but offers a lower contrast between tumorous and healthy tissues than iMRI. With the success of data-hungry Artificial Intelligence algorithms in medical image analysis, the benefits of sharing well-curated data cannot be overstated. To this end, we provide the largest publicly available MRI and iUS database of surgically treated brain tumors, including gliomas (n = 92), metastases (n = 11), and others (n = 11). This collection contains 369 preoperative MRI series, 320 3D iUS series, 301 iMRI series, and 356 segmentations collected from 114 consecutive patients at a single institution. This database is expected to help brain shift and image analysis research and neurosurgical training in interpreting iUS and iMRI.
Affiliation(s)
- Reuben Dorent
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Fryderyk Kögl
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Erickson Torio
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Colton Barr
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Laura Rigolo
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Colin Galvin
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Nick Jowkar
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Anees Kazi
- Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Nazim Haouchine
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Harneet Cheema
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Department of Health Science, University of Ottawa, Ottawa, Canada
- Nassir Navab
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Steve Pieper
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- William M Wells
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Wenya Linda Bi
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Alexandra Golby
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Sarah Frisken
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Tina Kapur
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
3
Bierbrier J, Eskandari M, Giovanni DAD, Collins DL. Toward Estimating MRI-Ultrasound Registration Error in Image-Guided Neurosurgery. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:999-1015. [PMID: 37022005; DOI: 10.1109/tuffc.2023.3239320]
Abstract
Image-guided neurosurgery allows surgeons to view their tools in relation to preoperatively acquired patient images and models. To continue using neuronavigation systems throughout operations, image registration between preoperative images [typically magnetic resonance imaging (MRI)] and intraoperative images (e.g., ultrasound) is common to account for brain shift (deformations of the brain during surgery). We implemented a method to estimate MRI-ultrasound registration errors, with the goal of enabling surgeons to quantitatively assess the performance of linear or nonlinear registrations. To the best of our knowledge, this is the first dense error estimating algorithm applied to multimodal image registrations. The algorithm is based on a previously proposed sliding-window convolutional neural network that operates on a voxelwise basis. To create training data where the true registration error is known, simulated ultrasound images were created from preoperative MRI images and artificially deformed. The model was evaluated on artificially deformed simulated ultrasound data and real ultrasound data with manually annotated landmark points. The model achieved a mean absolute error (MAE) of 0.977 ± 0.988 mm and a correlation of 0.8 ± 0.062 on the simulated ultrasound data, and an MAE of 2.24 ± 1.89 mm and a correlation of 0.246 on the real ultrasound data. We discuss concrete areas to improve the results on real ultrasound data. Our progress lays the foundation for future developments and ultimately implementation of clinical neuronavigation systems.
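The evaluation metrics reported above, mean absolute error against the true registration error and the correlation between predicted and true error, can be computed directly; a sketch with hypothetical error values:

```python
import numpy as np

def mae(pred, true):
    """Mean absolute error between predicted and true registration errors (mm)."""
    return float(np.mean(np.abs(pred - true)))

def pearson_r(pred, true):
    """Pearson correlation between predicted and true error values."""
    return float(np.corrcoef(pred.ravel(), true.ravel())[0, 1])

true_err = np.array([1.0, 2.0, 3.0, 4.0])  # hypothetical true errors (mm)
pred_err = true_err + 0.5                  # predictions uniformly 0.5 mm high
print(mae(pred_err, true_err))             # → 0.5
print(pearson_r(pred_err, true_err))       # ≈ 1.0 (perfectly correlated)
```

Note how a systematic bias inflates MAE while leaving the correlation at 1, which is why the paper reports both.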
4
Masoumi N, Rivaz H, Hacihaliloglu I, Ahmad MO, Reinertsen I, Xiao Y. The Big Bang of Deep Learning in Ultrasound-Guided Surgery: A Review. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:909-919. [PMID: 37028313; DOI: 10.1109/tuffc.2023.3255843]
Abstract
Ultrasound (US) imaging is a paramount modality in many image-guided surgeries and percutaneous interventions, thanks to its high portability, temporal resolution, and cost-efficiency. However, due to its imaging principles, US images are often noisy and difficult to interpret. Appropriate image processing can greatly enhance the applicability of the modality in clinical practice. Compared with classic iterative optimization and machine learning (ML) approaches, deep learning (DL) algorithms have shown great performance in terms of accuracy and efficiency for US processing. In this work, we conduct a comprehensive review of deep learning algorithms in the applications of US-guided interventions, summarize the current trends, and suggest future directions on the topic.
5
Zhang X, Sisniega A, Zbijewski WB, Lee J, Jones CK, Wu P, Han R, Uneri A, Vagdargi P, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Combining physics-based models with deep learning image synthesis and uncertainty in intraoperative cone-beam CT of the brain. Med Phys 2023; 50:2607-2624. [PMID: 36906915; PMCID: PMC10175241; DOI: 10.1002/mp.16351]
Abstract
BACKGROUND: Image-guided neurosurgery requires high localization and registration accuracy to enable effective treatment and avoid complications. However, accurate neuronavigation based on preoperative magnetic resonance (MR) or computed tomography (CT) images is challenged by brain deformation occurring during the surgical intervention.
PURPOSE: To facilitate intraoperative visualization of brain tissues and deformable registration with preoperative images, a 3D deep learning (DL) reconstruction framework (termed DL-Recon) was proposed for improved intraoperative cone-beam CT (CBCT) image quality.
METHODS: The DL-Recon framework combines physics-based models with deep learning CT synthesis and leverages uncertainty information to promote robustness to unseen features. A 3D generative adversarial network (GAN) with a conditional loss function modulated by aleatoric uncertainty was developed for CBCT-to-CT synthesis. Epistemic uncertainty of the synthesis model was estimated via Monte Carlo (MC) dropout. Using spatially varying weights derived from epistemic uncertainty, the DL-Recon image combines the synthetic CT with an artifact-corrected filtered back-projection (FBP) reconstruction. In regions of high epistemic uncertainty, DL-Recon includes a greater contribution from the FBP image. Twenty paired real CT and simulated CBCT images of the head were used for network training and validation, and experiments evaluated the performance of DL-Recon on CBCT images containing simulated and real brain lesions not present in the training data. Performance among learning- and physics-based methods was quantified in terms of the structural similarity (SSIM) of the resulting image to diagnostic CT and the Dice similarity coefficient (DSC) in lesion segmentation compared to ground truth. A pilot study was conducted involving seven subjects with CBCT images acquired during neurosurgery to assess the feasibility of DL-Recon on clinical data.
RESULTS: CBCT images reconstructed via FBP with physics-based corrections exhibited the usual challenges to soft-tissue contrast resolution due to image non-uniformity, noise, and residual artifacts. GAN synthesis improved image uniformity and soft-tissue visibility but was subject to errors in the shape and contrast of simulated lesions unseen in training. Incorporation of aleatoric uncertainty in the synthesis loss improved estimation of epistemic uncertainty, with variable brain structures and unseen lesions exhibiting higher epistemic uncertainty. The DL-Recon approach mitigated synthesis errors while maintaining the improvement in image quality, yielding a 15%-22% increase in SSIM (image appearance compared to diagnostic CT) and up to a 25% increase in DSC in lesion segmentation compared to FBP. Clear gains in visual image quality were also observed in real brain lesions and in clinical CBCT images.
CONCLUSIONS: DL-Recon leveraged uncertainty estimation to combine the strengths of DL- and physics-based reconstruction and demonstrated substantial improvements in the accuracy and quality of intraoperative CBCT. The improved soft-tissue contrast resolution could facilitate visualization of brain structures and support deformable registration with preoperative images, further extending the utility of intraoperative CBCT in image-guided neurosurgery.
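The key fusion step described in the methods, blending the GAN-synthesized CT with the FBP reconstruction using spatially varying weights derived from epistemic uncertainty, can be sketched as below. The exponential weighting and the array values are illustrative assumptions, not the paper's exact formula:

```python
import numpy as np

def dl_recon_fuse(ct_syn, fbp, epistemic_var, k=5.0):
    """Blend a synthesized CT with a physics-based FBP reconstruction using
    spatially varying weights: where epistemic uncertainty is high, the FBP
    image contributes more. The exponential weight is an illustrative choice
    standing in for the paper's exact weighting scheme."""
    w_syn = np.exp(-k * epistemic_var)  # in (0, 1]; low uncertainty -> ~1
    return w_syn * ct_syn + (1.0 - w_syn) * fbp

ct_syn = np.full((2, 2), 40.0)               # synthetic CT values (HU)
fbp = np.full((2, 2), 60.0)                  # FBP reconstruction values (HU)
var = np.array([[0.0, 0.0], [10.0, 10.0]])   # high uncertainty in bottom row
out = dl_recon_fuse(ct_syn, fbp, var)
print(out)  # top row ≈ 40 (trusts synthesis), bottom row ≈ 60 (falls back to FBP)
```

The design choice is the one the abstract states: where the synthesis model does not know what it is looking at (high epistemic uncertainty), fall back to the physics-based image.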
Affiliation(s)
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Wojciech B. Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Junghoon Lee
- Department of Radiation Oncology, Johns Hopkins University, Baltimore, MD 21218, USA
- Craig K. Jones
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Pengwei Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Runze Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Prasad Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Mark Luciano
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- William S. Anderson
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- Jeffrey H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
6
Wang Y, Fu T, Wu C, Xiao J, Fan J, Song H, Liang P, Yang J. Multimodal registration of ultrasound and MR images using weighted self-similarity structure vector. Comput Biol Med 2023; 155:106661. [PMID: 36827789; DOI: 10.1016/j.compbiomed.2023.106661]
Abstract
PURPOSE: Multimodal registration of 2D ultrasound (US) and 3D magnetic resonance (MR) images for fusion navigation can improve the intraoperative detection accuracy of lesions. However, multimodal registration remains a challenge because of the poor US image quality. In this study, a weighted self-similarity structure vector (WSSV) is proposed to register multimodal images.
METHODS: The self-similarity structure vector utilizes the normalized distance of symmetrically located patches in the neighborhood to describe the local structure information. The texture weights are extracted using the local standard deviation to reduce the speckle interference in the US images. The multimodal similarity metric is constructed by combining the self-similarity structure vector with a texture weight map.
RESULTS: Experiments were performed on US and MR images of the liver from 88 groups of data, including 8 patients and 80 simulated samples. The average target registration error was reduced from 14.91 ± 3.86 mm to 4.95 ± 2.23 mm using the WSSV-based method.
CONCLUSIONS: The experimental results show that the WSSV-based registration method can robustly align US and MR images of the liver. With further acceleration, the registration framework could be applied in time-sensitive clinical settings, such as US-MR image registration in image-guided surgery.
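The idea in the methods, patch self-similarity over symmetric offsets, normalized, and weighted by local standard deviation, can be sketched as a toy 2D descriptor. This illustrates the concept only; the offsets, patch size, and normalization are assumptions, not the paper's exact WSSV definition:

```python
import numpy as np

def self_similarity_vector(img, y, x, r=1, patch=1):
    """Toy self-similarity descriptor at pixel (y, x): squared distances
    between patches at symmetrically opposite offsets, normalized to sum
    to 1. An illustration of the idea, not the exact WSSV formulation."""
    vec = []
    for dy, dx in [(0, r), (r, 0), (r, r), (r, -r)]:  # 4 symmetric pairs
        p1 = img[y+dy-patch:y+dy+patch+1, x+dx-patch:x+dx+patch+1]
        p2 = img[y-dy-patch:y-dy+patch+1, x-dx-patch:x-dx+patch+1]
        vec.append(float(np.sum((p1 - p2) ** 2)))
    vec = np.asarray(vec)
    s = vec.sum()
    return vec / s if s > 0 else vec

def texture_weight(img, y, x, patch=1):
    """Local standard deviation, used to down-weight speckle-dominated,
    texture-poor regions when building the similarity metric."""
    return float(np.std(img[y-patch:y+patch+1, x-patch:x+patch+1]))

img = np.arange(49, dtype=float).reshape(7, 7)
v = self_similarity_vector(img, 3, 3)
# The normalized descriptor is invariant to global intensity scaling,
# which is what makes structure comparable across US and MR:
assert np.allclose(v, self_similarity_vector(2.0 * img, 3, 3))
```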
Affiliation(s)
- Yifan Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
- Tianyu Fu
- School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, PR China
- Chan Wu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
- Jian Xiao
- School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, PR China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
- Hong Song
- School of Software, Beijing Institute of Technology, Beijing, 100081, PR China
- Ping Liang
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing, 100853, PR China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
7
DiffeoRaptor: diffeomorphic inter-modal image registration using RaPTOR. Int J Comput Assist Radiol Surg 2023; 18:367-377. [PMID: 36173541; DOI: 10.1007/s11548-022-02749-2]
Abstract
PURPOSE: Diffeomorphic image registration is essential in many medical imaging applications. Several registration algorithms of this type have been proposed, but primarily for intra-contrast alignment. Efficient inter-modal/contrast diffeomorphic registration, which is vital in numerous applications, remains a challenging task.
METHODS: We propose a novel inter-modal/contrast registration algorithm that leverages the Robust PaTch-based cOrrelation Ratio (RaPTOR) metric for inter-modal/contrast image alignment and the bandlimited geodesic shooting of the Fourier-Approximated Lie Algebras (FLASH) algorithm for fast diffeomorphic registration.
RESULTS: The proposed algorithm, named DiffeoRaptor, was validated on three public databases for the tasks of brain and abdominal image registration, with results compared against three state-of-the-art techniques: FLASH, NiftyReg, and Symmetric image Normalization (SyN).
CONCLUSIONS: DiffeoRaptor offered comparable or better registration accuracy. Moreover, DiffeoRaptor produces smoother deformations than SyN in inter-modal and contrast registration. The code for DiffeoRaptor is publicly available at: https://github.com/nimamasoumi/DiffeoRaptor
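RaPTOR builds on the correlation ratio, which measures how well one image's intensities are predicted by the other's without assuming a linear relation, making it suitable across modalities. A global (non-patch) sketch of the underlying metric, with an illustrative bin count:

```python
import numpy as np

def correlation_ratio(x, y, bins=8):
    """Correlation ratio eta^2 of y given x: 1 - E[Var(y | x-bin)] / Var(y).
    This is the plain global form of the metric that RaPTOR computes in a
    robust, patch-based fashion."""
    x, y = x.ravel(), y.ravel()
    edges = np.linspace(x.min(), x.max(), bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    total_var = y.var()
    if total_var == 0:
        return 0.0
    cond = 0.0
    for b in range(bins):
        yb = y[idx == b]
        if yb.size:
            cond += yb.size * yb.var()   # within-bin variance, weighted
    return 1.0 - cond / (y.size * total_var)

# A nonlinear but deterministic relation scores near 1; noise scores near 0.
x = np.linspace(0.0, 1.0, 1000)
y = np.cos(3.0 * x)
print(correlation_ratio(x, y))  # close to 1 (binning keeps it slightly below)
```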
8
Lv J, Wang Z, Shi H, Zhang H, Wang S, Wang Y, Li Q. Joint Progressive and Coarse-to-Fine Registration of Brain MRI via Deformation Field Integration and Non-Rigid Feature Fusion. IEEE Trans Med Imaging 2022; 41:2788-2802. [PMID: 35482699; DOI: 10.1109/tmi.2022.3170879]
Abstract
Registration of brain MRI images requires solving a deformation field, which is extremely difficult when aligning intricate brain tissues, e.g., subcortical nuclei. Existing efforts resort to decomposing the target deformation field into intermediate sub-fields with either tiny motions, i.e., progressive registration stage by stage, or lower resolutions, i.e., coarse-to-fine estimation of the full-size deformation field. In this paper, we argue that those efforts are not mutually exclusive, and propose a unified framework for robust brain MRI registration in both progressive and coarse-to-fine manners simultaneously. Specifically, building on a dual-encoder U-Net, the fixed-moving MRI pair is encoded and decoded into multi-scale sub-fields from coarse to fine. Each decoding block contains two novel modules: i) Deformation Field Integration (DFI), in which a single integrated deformation sub-field is calculated such that warping by it is equivalent to warping progressively by the sub-fields from all previous decoding blocks, and ii) Non-rigid Feature Fusion (NFF), in which features of the fixed-moving pair are aligned by the DFI-integrated deformation field and then fused to predict a finer sub-field. Leveraging both DFI and NFF, the target deformation field is factorized into multi-scale sub-fields, where the coarser fields ease the estimation of a finer one and the finer fields learn to make up the misalignments unresolved by the previous coarser ones. Extensive and comprehensive experiments on one private and two public datasets demonstrate a superior registration performance of brain MRI images over progressive registration only and coarse-to-fine estimation only, with an increase of up to 8% in the average Dice.
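The DFI equivalence rests on composing displacement sub-fields: one warp by the integrated field equals successive warps by each sub-field. A 2D sketch of that composition under the pull-back convention, using linear interpolation (SciPy assumed available; this illustrates the identity, not the paper's implementation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_fields(u1, u2):
    """Compose two dense 2D displacement fields of shape (2, H, W) so that
    warping once by the result equals warping by u1 and then by u2 --
    the equivalence DFI relies on. Pull-back convention:
    warped(p) = image(p + u(p))."""
    H, W = u1.shape[1:]
    grid = np.mgrid[0:H, 0:W].astype(float)
    pts = grid + u2                      # where the second warp samples from
    u1_at = np.stack([map_coordinates(u1[c], pts, order=1, mode='nearest')
                      for c in range(2)])
    return u1_at + u2                    # u(p) = u2(p) + u1(p + u2(p))

# Two constant translations compose to their sum.
u1 = np.zeros((2, 8, 8)); u1[0] = 1.0    # shift by 1 along y
u2 = np.zeros((2, 8, 8)); u2[1] = 2.0    # shift by 2 along x
u = compose_fields(u1, u2)
print(u[0].mean(), u[1].mean())  # → 1.0 2.0
```

Composing once and warping once avoids accumulating interpolation error from repeated resampling, which is one motivation for integrating sub-fields before warping.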
9
Mikaeili M, Bilge HŞ. Trajectory estimation of ultrasound images based on convolutional neural network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103965]
10
Farnia P, Makkiabadi B, Alimohamadi M, Najafzadeh E, Basij M, Yan Y, Mehrmohammadi M, Ahmadian A. Photoacoustic-MR Image Registration Based on a Co-Sparse Analysis Model to Compensate for Brain Shift. Sensors (Basel) 2022; 22:2399. [PMID: 35336570; PMCID: PMC8954240; DOI: 10.3390/s22062399]
Abstract
Brain shift is an important obstacle to the application of image guidance during neurosurgical interventions. There has been growing interest in intra-operative imaging to update image-guided surgery systems. However, due to the innate limitations of current imaging modalities, accurate brain shift compensation remains a challenging task. In this study, intra-operative photoacoustic imaging and registration of the intra-operative photoacoustic images with pre-operative MR images are proposed to compensate for brain deformation. Finding a satisfactory registration method is challenging due to the unpredictable nature of brain deformation. A co-sparse analysis model is proposed for photoacoustic-MR image registration, which can capture the interdependency of the two modalities. The proposed algorithm works by minimizing the mapping transform via a pair of analysis operators that are learned by the alternating direction method of multipliers. The method was evaluated using an experimental phantom and ex vivo data obtained from a mouse brain. The results on the phantom data show about 63% improvement in target registration error in comparison with the commonly used normalized mutual information method. The results indicate that intra-operative photoacoustic images could become a promising tool when brain shift invalidates the pre-operative MRI.
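The normalized mutual information baseline that the phantom results are compared against can be sketched from a joint intensity histogram. This uses Studholme's symmetric form; the bin count is an illustrative choice:

```python
import numpy as np

def nmi(a, b, bins=32):
    """Normalized mutual information, (H(A) + H(B)) / H(A, B), computed
    from a joint intensity histogram (Studholme's symmetric form)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))   # joint entropy
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))        # marginal entropies
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (hx + hy) / hxy

rng = np.random.default_rng(0)
img = rng.random((64, 64))
other = rng.random((64, 64))
print(nmi(img, img))    # → 2.0 (identical images: H(A,B) = H(A) = H(B))
print(nmi(img, other))  # ≈ 1.0 (independent images)
```

Registration with this metric searches for the transform that maximizes `nmi` between the moving and fixed images.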
Affiliation(s)
- Parastoo Farnia
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Bahador Makkiabadi
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Maysam Alimohamadi
- Brain and Spinal Cord Injury Research Center, Neuroscience Institute, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Ebrahim Najafzadeh
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Maryam Basij
- Department of Biomedical Engineering, Wayne State University, Detroit, MI 48201, USA
- Yan Yan
- Department of Biomedical Engineering, Wayne State University, Detroit, MI 48201, USA
- Mohammad Mehrmohammadi
- Department of Biomedical Engineering, Wayne State University, Detroit, MI 48201, USA
- Barbara Ann Karmanos Cancer Institute, Detroit, MI 48201, USA
- Correspondence: (M.M.); (A.A.)
- Alireza Ahmadian
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Correspondence: (M.M.); (A.A.)
11
Wu X, Huang W, Wu X, Wu S, Huang J. Classification of thermal image of clinical burn based on incremental reinforcement learning. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-05772-7]
12
Intensity-based nonrigid endomicroscopic image mosaicking incorporating texture relevance for compensation of tissue deformation. Comput Biol Med 2021; 142:105169. [PMID: 34974384; DOI: 10.1016/j.compbiomed.2021.105169]
Abstract
Image mosaicking has emerged as a universal technique to broaden the field of view of the probe-based confocal laser endomicroscopy (pCLE) imaging system. However, due to the influence of probe-tissue contact forces and optical components on imaging quality, existing mosaicking methods remain insufficient for practical challenges. In this paper, we present the texture-encoded sum of conditional variance (TESCV) as a novel similarity metric, and effectively incorporate it into a sequential mosaicking scheme to simultaneously correct rigid probe shift and nonrigid tissue deformation. TESCV combines both intensity dependency and texture relevance to quantify the differences between pCLE image frames, where a discriminative binary descriptor named the fully cross-detected local derivative pattern (FCLDP) is designed to extract more detailed structural textures. Furthermore, we analytically derive the closed-form gradient of TESCV with respect to the transformation variables. Experiments on the circular dataset highlighted the advantage of the TESCV metric in improving mosaicking performance over four recently published metrics. Comparison with four state-of-the-art mosaicking methods on the spiral and manual datasets indicated that the proposed TESCV-based method not only works stably under different contact forces but is also suitable for both low- and high-resolution imaging systems. With more accurate and delicate mosaics, the proposed method holds promise to meet clinical demands for intraoperative optical biopsy.
13
Zhou B, Augenfeld Z, Chapiro J, Zhou SK, Liu C, Duncan JS. Anatomy-guided multimodal registration by learning segmentation without ground truth: Application to intraprocedural CBCT/MR liver segmentation and registration. Med Image Anal 2021; 71:102041. [PMID: 33823397; PMCID: PMC8184611; DOI: 10.1016/j.media.2021.102041]
Abstract
Multimodal image registration has many applications in diagnostic medical imaging and image-guided interventions, such as Transcatheter Arterial Chemoembolization (TACE) of liver cancer guided by intraprocedural CBCT and pre-operative MR. The ability to register peri-procedurally acquired diagnostic images into the intraprocedural environment can potentially improve intra-procedural tumor targeting, which would significantly improve therapeutic outcomes. However, intra-procedural CBCT often suffers from suboptimal image quality due to the lack of signal calibration to Hounsfield units, limited FOV, and motion/metal artifacts. These non-ideal conditions prevent standard intensity-based multimodal registration methods from generating correct transformations across modalities. While registration based on anatomic structures, such as segmentations or landmarks, provides an efficient alternative, such anatomic structure information is not always available. One can train a deep learning-based anatomy extractor, but it requires large-scale manual annotations on specific modalities, which are often extremely time-consuming to obtain and require expert radiological readers. To tackle these issues, we leverage annotated datasets already existing in a source modality and propose an anatomy-preserving domain adaptation to segmentation network (APA2Seg-Net) for learning segmentation without target-modality ground truth. The segmenters are then integrated into our anatomy-guided multimodal registration based on the robust point matching machine. Our experimental results on in-house TACE patient data demonstrate that APA2Seg-Net can generate robust CBCT and MR liver segmentations, and that the anatomy-guided registration framework with these segmenters provides high-quality multimodal registrations.
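The anatomy-guided registration step aligns structures extracted by the segmenters via robust point matching. As a simplified stand-in for that step, a least-squares rigid alignment (Kabsch) of already-matched surface points; the point sets and transform here are synthetic:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid alignment (Kabsch) of matched 3D point sets:
    returns R, t such that dst ~= src @ R.T + t. A simplified stand-in
    for robust point matching, which additionally estimates the
    correspondences and tolerates outliers."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(1)
src = rng.random((20, 3))                       # e.g., liver surface points
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_align(src, dst)
# Noise-free case: the true rotation and translation are recovered.
```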
Affiliation(s)
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA.
- Zachary Augenfeld
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Julius Chapiro
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- S Kevin Zhou
- School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China; Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- James S Duncan
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA.
14
GoRG: Towards a GPU-Accelerated Multiview Hyperspectral Depth Estimation Tool for Medical Applications. Sensors 2021; 21:4091. [PMID: 34198595] [PMCID: PMC8231943] [DOI: 10.3390/s21124091]
Abstract
HyperSpectral (HS) images have been used successfully for brain tumor boundary detection during resection operations. These classification maps now coexist with other technologies, such as MRI or intraoperative ultrasound (IOUS), that support the neurosurgeon's actions, and their incorporation is the neurosurgeon's task. The project in which this work is framed generates a unified and more accurate 3D immersive model using HS, MRI, and IOUS information. To do so, the HS images need to include 3D information, which must be generated under real-time operating-room conditions, i.e., within a few seconds. This work presents Graph cuts Reference depth estimation in GPU (GoRG), a GPU-accelerated multiview depth estimation tool for HS images that can also process YUV images in less than 5.5 s on average. Compared to a high-quality state-of-the-art algorithm, MPEG DERS, GoRG on YUV images shows quality losses of −0.93 dB, −0.6 dB, and −1.96% for WS-PSNR, IV-PSNR, and VMAF, respectively, using a video synthesis processing chain. For HS test images, GoRG obtains an average RMSE of 7.5 cm, with most of its errors in the background, and needs around 850 ms to process one frame and view. These results demonstrate the feasibility of using GoRG during a tumor resection operation.
15
Multimodal 3D ultrasound and CT in image-guided spinal surgery: public database and new registration algorithms. Int J Comput Assist Radiol Surg 2021; 16:555-565. [PMID: 33683544] [DOI: 10.1007/s11548-021-02323-2]
Abstract
PURPOSE Accurate multimodal registration of intraoperative ultrasound (US) and preoperative computed tomography (CT) is a challenging problem. Public datasets of US and CT images can accelerate the development of such image registration techniques and help ensure the accuracy and safety of spinal surgeries performed with image-guided surgery systems in which image registration is employed. In addition, we present two algorithms to register US and CT images. METHODS We present three different datasets of vertebrae with corresponding CT, US, and simulated US images. For each of the two latter datasets, we also provide 16 landmark pairs of matching structures between the CT and US images and performed fiducial registration to acquire a silver standard for assessing image registration. We also propose two patch-based rigid image registration algorithms, one based on normalized cross-correlation (NCC) and the other on the correlation ratio (CR), to register misaligned CT and US images. RESULTS The CT and corresponding US images of the proposed database were pre-processed and misaligned with different error intervals, resulting in 6000 registration problems solved using both the NCC and CR methods. Our results show that the methods successfully aligned the pre-processed CT and US images by decreasing the warping index. CONCLUSIONS The database provides a resource for evaluating image registration techniques. The simulated data have two applications. First, they provide the gold-standard ground truth, which is difficult to obtain with ex vivo and in vivo data, for validating US-CT registration methods. Second, the simulated US images can be used to validate real-time US simulation methods. The proposed image registration techniques can also be useful for developing methods for clinical applications.
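As background for this entry, the two similarity measures it names (NCC and the correlation ratio) are standard and can be sketched in a few lines of NumPy. This is an illustrative re-implementation under the usual definitions, not the authors' code, and the function names are hypothetical:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def correlation_ratio(fixed, moving, bins=32):
    """Correlation ratio eta^2: fraction of the variance of `moving`
    explained by the intensity classes (histogram bins) of `fixed`."""
    f = fixed.ravel()
    m = moving.ravel()
    total_var = m.var()
    if total_var == 0:
        return 1.0
    edges = np.histogram_bin_edges(f, bins=bins)
    labels = np.digitize(f, edges[1:-1])   # bin label per pixel
    within = 0.0
    for k in np.unique(labels):
        grp = m[labels == k]
        within += grp.size * grp.var()     # within-class variance, weighted
    return float(1.0 - within / (m.size * total_var))
```

NCC rewards a linear intensity relationship between the patches, while the correlation ratio only requires `moving` to be (approximately) a function of `fixed`, which is why CR is often preferred across modalities.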
16
Chel H, Bora PK, Ramchiary KK. A fast technique for hyper-echoic region separation from brain ultrasound images using patch based thresholding and cubic B-spline based contour smoothing. Ultrasonics 2021; 111:106304. [PMID: 33360770] [DOI: 10.1016/j.ultras.2020.106304]
Abstract
Ultrasound image guided brain surgery (UGBS) requires an automatic and fast image segmentation method. Level-set and active contour based algorithms have been found useful for obtaining topology-independent boundaries between different image regions, but slow convergence limits their use in online US image segmentation, and their performance deteriorates on US images because of intensity inhomogeneity. This paper proposes an effective region-driven method for the segmentation of hyper-echoic (HE) regions that suppresses the hypo-echoic and anechoic regions in brain US images. An automatic threshold estimation scheme is developed with a modified Niblack's approach. The separation of the hyper-echoic and non-hyper-echoic (NHE) regions is performed by successively applying patch-based intensity thresholding and boundary smoothing. First, a patch-based segmentation is performed, which roughly separates the two regions; the patch-based approach reduces the effect of intensity heterogeneity within an HE region. An iterative boundary correction step with decreasing patch size further improves the regional topology and refines the boundary regions. To avoid slope and curvature discontinuities and to obtain distinct boundaries between the HE and NHE regions, a cubic B-spline model of curve smoothing is applied. The proposed method is 50-100 times faster than other level-set based image segmentation algorithms. The segmentation performance and convergence speed of the proposed method are compared with four competing level-set based algorithms. The computational results show that the proposed segmentation approach outperforms the other level-set based techniques both subjectively and objectively.
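Niblack's local threshold, which this entry modifies, has a simple closed form: T = mean + k·std over a local window. A minimal patch-based sketch follows, assuming a non-overlapping square patch grid and an arbitrary `k`; it illustrates the plain Niblack rule, not the authors' modified scheme:

```python
import numpy as np

def niblack_threshold(patch, k=-0.2):
    """Niblack's local threshold for one patch: T = mean + k * std."""
    return patch.mean() + k * patch.std()

def patch_threshold_map(image, patch=16, k=-0.2):
    """Binary map marking pixels above the Niblack threshold of their patch,
    a rough stand-in for a hyper-echoic / non-hyper-echoic split."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=bool)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            blk = image[i:i + patch, j:j + patch]
            out[i:i + patch, j:j + patch] = blk > niblack_threshold(blk, k)
    return out
```

Computing the threshold per patch, rather than globally, is what limits the effect of intensity heterogeneity within a bright region.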
Affiliation(s)
- Haradhan Chel
- Department of Electronics and Communication, Central Institute of Technology Kokrajhar, Assam 783370, India; City Clinic and Research Centre, Kokrajhar, Assam, India.
- P K Bora
- Department of EEE, Indian Institute of Technology Guwahati, Assam, India.
- K K Ramchiary
- City Clinic and Research Centre, Kokrajhar, Assam, India.
17
Gong L, Zheng J, Ping Z, Wang Y, Wang S, Zuo S. Robust Mosaicing of Endomicroscopic Videos via Context-Weighted Correlation Ratio. IEEE Trans Biomed Eng 2021; 68:579-591. [PMID: 32746056] [DOI: 10.1109/tbme.2020.3007768]
Abstract
Probe-based confocal laser endomicroscopy (pCLE) is a promising imaging tool that provides in situ and in vivo optical imaging for real-time pathological assessment. However, due to the limited field of view, it is difficult for clinicians to get a full understanding of the scanned tissues. In this paper, we develop a novel mosaicing framework to assemble all frame sequences into a full-view image. First, a hybrid rigid registration that combines feature matching and template matching is presented to achieve a global alignment of all frames. Then, a parametric free-form deformation (FFD) model with a multiresolution architecture is implemented to accommodate non-rigid tissue distortions. More importantly, we devise a robust similarity metric called the context-weighted correlation ratio (CWCR) to promote registration accuracy, in which spatial and geometric contexts are incorporated into the estimation of functional intensity dependence. Experiments with both a robotic setup and manual manipulation have demonstrated that the proposed scheme significantly outperforms several state-of-the-art mosaicing schemes in the presence of intensity fluctuations, insufficient overlap, and tissue distortions. Moreover, comparisons of the proposed CWCR metric with two other metrics have validated the effectiveness of the context-weighted strategy in quantifying the differences between two frames. Benefiting from more rational and delicate mosaics, the proposed scheme is better suited to guiding diagnosis and treatment during optical biopsies.
18
Ma L, Wang J, Kiyomatsu H, Tsukihara H, Sakuma I, Kobayashi E. Surgical navigation system for laparoscopic lateral pelvic lymph node dissection in rectal cancer surgery using laparoscopic-vision-tracked ultrasonic imaging. Surg Endosc 2020; 35:6556-6567. [PMID: 33185764] [DOI: 10.1007/s00464-020-08153-8]
Abstract
BACKGROUND Laparoscopic lateral pelvic lymph node dissection (LPLND) in rectal cancer surgery requires considerable skill because the pelvic arteries, which need to be located to guide the dissection, are covered by other tissues and cannot be observed in laparoscopic views. Surgeons therefore need to localize the pelvic arteries accurately before dissection to prevent injury to them. METHODS This report proposes a surgical navigation system that facilitates artery localization in laparoscopic LPLND by combining ultrasonic imaging and laparoscopy. Specifically, free-hand laparoscopic ultrasound (LUS) is employed to capture the arteries intraoperatively, and a laparoscopic vision-based tracking system is utilized to track the LUS probe. To extract artery contours from the two-dimensional ultrasound image sequences efficiently, an artery extraction framework based on local phase-based snakes was developed. After reconstructing the three-dimensional intraoperative artery model from ultrasound images, a high-resolution artery model segmented from preoperative computed tomography (CT) images was rigidly registered to the intraoperative artery model and overlaid onto the laparoscopic view to guide laparoscopic LPLND. RESULTS Experiments were conducted to evaluate the performance of the vision-based tracking system, and its average reconstruction error was found to be 2.4 mm. The proposed navigation system was then quantitatively evaluated on an artery phantom; the reconstruction time and average navigation error were 8 min and 2.3 mm, respectively. A navigation system was also successfully constructed to localize the pelvic arteries in laparoscopic and open surgeries on a swine, demonstrating the feasibility of the proposed system in vivo. The construction times in the laparoscopic and open surgeries were 14 and 12 min, respectively.
CONCLUSIONS The experimental results showed that the proposed navigation system can guide laparoscopic LPLND and requires a significantly shorter setup time than state-of-the-art navigation systems.
Affiliation(s)
- Lei Ma
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Junchen Wang
- School of Mechanical Engineering, Beihang University, Beijing, China
- Ichiro Sakuma
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Etsuko Kobayashi
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan.
19
Farnia P, Mohammadi M, Najafzadeh E, Alimohamadi M, Makkiabadi B, Ahmadian A. High-quality photoacoustic image reconstruction based on deep convolutional neural network: towards intra-operative photoacoustic imaging. Biomed Phys Eng Express 2020; 6:045019. [PMID: 33444279] [DOI: 10.1088/2057-1976/ab9a10]
Abstract
The use of intra-operative imaging systems as an interventional solution to provide more accurate localization of complicated structures has become a necessity during neurosurgery. However, due to the limitations of conventional imaging systems, high-quality real-time intra-operative imaging remains a challenging problem. Meanwhile, photoacoustic imaging has appeared promising for imaging crucial structures such as blood vessels and the microvasculature of tumors. To achieve high-quality photoacoustic images of vessels despite the artifacts caused by incomplete data, we propose an approach based on the combination of time-reversal (TR) and deep learning methods. The proposed method applies a TR method in the first layer of the network, followed by a convolutional neural network whose remaining layers have weights adjusted on a set of simulated training data, to estimate artifact-free photoacoustic images. It was evaluated using a synthetic database of vessels. The mean signal-to-noise ratio (SNR), peak SNR, structural similarity index, and edge preservation index for the test data reached 14.6 dB, 35.3 dB, 0.97, and 0.90, respectively. As our results show, by using a lower number of detectors and consequently a shorter data acquisition time, our approach outperforms the TR algorithm in all criteria, in a computational time compatible with clinical use.
Affiliation(s)
- Parastoo Farnia
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran, Iran; Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
20
Evaluation of multi-wavelengths LED-based photoacoustic imaging for maximum safe resection of glioma: a proof of concept study. Int J Comput Assist Radiol Surg 2020; 15:1053-1062. [PMID: 32451814] [DOI: 10.1007/s11548-020-02191-2]
Abstract
PURPOSE A real-time intra-operative imaging modality is required to update navigation systems during neurosurgery, since precise localization and safe maximal resection of gliomas are of utmost clinical importance. Different intra-operative imaging modalities have been proposed to delineate resection borders, each with advantages and disadvantages. This preliminary study was designed to simulate photoacoustic imaging (PAI) for visualizing the vessels at brain tumor margins, toward safe maximal resection of glioma. METHODS In this study, light emitting diode (LED)-based PAI was selected because of its lower cost, compact size, and ease of use. We developed a simulation framework based on multi-wavelength LED-based PAI to further facilitate PAI during neurosurgery. This framework considers a multilayer model of tumoral and normal brain tissue. The optical fluence and absorption map in tissue at different depths were computed with a Monte Carlo method, and the propagation of the initial photoacoustic pressure was simulated using the k-Wave toolbox. RESULTS To evaluate the LED-based PAI, we used three evaluation criteria: signal-to-noise ratio (SNR), contrast ratio (CR), and full width at half maximum (FWHM). Results showed that, using proper wavelengths, the vessels were recovered with the same axial and lateral FWHM. Furthermore, by increasing the wavelength from 532 to 1064 nm, SNR and CR increased in the deep region. The results also showed that vessels with larger diameters at the same wavelength have a higher CR, with an average improvement of 28%. CONCLUSION Multi-wavelength LED-based PAI provides detailed images of the blood vessels, which are crucial for detection of residual glioma: longer wavelengths such as 1064 nm can be used for deeper tumor margins, and shorter wavelengths such as 532 nm for tumor margins closer to the surface. LED-based PAI may be considered a promising intra-operative imaging modality to delineate tumor margins.
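Full width at half maximum, one of the three criteria used in this entry, can be measured from a 1-D intensity profile (e.g. a line through a reconstructed vessel) with linear interpolation at the half level. An illustrative sketch with a hypothetical function name, not the authors' evaluation code:

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a 1-D profile, in the same units as
    `spacing`, using linear interpolation at the half-maximum crossings."""
    y = np.asarray(profile, dtype=float)
    half = y.max() / 2.0
    above = np.nonzero(y >= half)[0]
    if above.size == 0:
        return 0.0
    left, right = int(above[0]), int(above[-1])
    lx, rx = float(left), float(right)
    if left > 0:                      # interpolate the left crossing
        lx = left - (y[left] - half) / (y[left] - y[left - 1])
    if right < y.size - 1:            # interpolate the right crossing
        rx = right + (y[right] - half) / (y[right] - y[right + 1])
    return (rx - lx) * spacing
```

For a Gaussian profile with standard deviation sigma, the FWHM is 2*sqrt(2*ln 2)*sigma ≈ 2.355*sigma, a convenient sanity check.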
21
Xiao Y, Rivaz H, Chabanas M, Fortin M, Machado I, Ou Y, Heinrich MP, Schnabel JA. Evaluation of MRI to Ultrasound Registration Methods for Brain Shift Correction: The CuRIOUS2018 Challenge. IEEE Trans Med Imaging 2020; 39:777-786. [PMID: 31425023] [PMCID: PMC7611407] [DOI: 10.1109/tmi.2019.2935060]
Abstract
In brain tumor surgery, the quality and safety of the procedure can be impacted by intra-operative tissue deformation, called brain shift. Brain shift can move the surgical targets and other vital structures such as blood vessels, thus invalidating the pre-surgical plan. Intra-operative ultrasound (iUS) is a convenient and cost-effective imaging tool to track brain shift and tumor resection. Accurate image registration techniques that update pre-surgical MRI based on iUS are crucial but challenging. The MICCAI Challenge 2018 for Correction of Brain shift with Intra-Operative UltraSound (CuRIOUS2018) provided a public platform to benchmark MRI-iUS registration algorithms on newly released clinical datasets. In this work, we present the data, setup, evaluation, and results of CuRIOUS2018, which received 6 fully automated algorithms from leading academic and industrial research groups. All algorithms were first trained on the public RESECT database and then ranked on a test dataset of 10 additional cases with identical data curation and annotation protocols to the RESECT database. The article compares the results of all participating teams and discusses the insights gained from the challenge, as well as future work.
Affiliation(s)
- Yiming Xiao
- Robarts Research Institute, Western University, London, ON N6A 5B7, Canada
- Hassan Rivaz
- PERFORM Centre, Concordia University, Montreal, QC H3G 1M8, Canada; Department of Electrical and Computer Engineering, Concordia University, Montreal, QC H3G 1M8, Canada
- Matthieu Chabanas
- School of Computer Science and Applied Mathematics, Grenoble Institute of Technology, 38031 Grenoble, France; TIMC-IMAG Laboratory, University of Grenoble Alpes, 38400 Grenoble, France
- Maryse Fortin
- PERFORM Centre, Concordia University, Montreal, QC H3G 1M8, Canada; Department of Health, Kinesiology and Applied Physiology, Concordia University, Montreal, QC H3G 1M8, Canada
- Ines Machado
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Yangming Ou
- Department of Pediatrics and Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Mattias P. Heinrich
- Institute of Medical Informatics, University of Lübeck, 23538 Lübeck, Germany
- Julia A. Schnabel
- School of Biomedical Engineering and Imaging Sciences, King's College London, London WC2R 2LS, UK
22
Machado I, Toews M, George E, Unadkat P, Essayed W, Luo J, Teodoro P, Carvalho H, Martins J, Golland P, Pieper S, Frisken S, Golby A, Wells W III, Ou Y. Deformable MRI-Ultrasound registration using correlation-based attribute matching for brain shift correction: Accuracy and generality in multi-site data. Neuroimage 2019; 202:116094. [PMID: 31446127] [PMCID: PMC6819249] [DOI: 10.1016/j.neuroimage.2019.116094]
Abstract
Intraoperative tissue deformation, known as brain shift, decreases the benefit of using preoperative images to guide neurosurgery. Non-rigid registration of preoperative magnetic resonance (MR) to intraoperative ultrasound (iUS) images has been proposed as a means to compensate for brain shift. We focus on the initial registration from MR to pre-durotomy iUS. We present a method that builds on previous work to address the need for accuracy and generality of MR-iUS registration algorithms in multi-site clinical data. High-dimensional texture attributes were used instead of image intensities for image registration, and the standard difference-based attribute matching was replaced with correlation-based attribute matching. A strategy that deals explicitly with the large field-of-view mismatch between MR and iUS images was proposed. Key parameters were optimized across independent MR-iUS brain tumor datasets acquired at 3 institutions, with a total of 43 tumor patients and 758 reference landmarks for evaluating the accuracy of the proposed algorithm. Despite differences in imaging protocols, patient demographics, and landmark distributions, the algorithm reduces the pre-registration landmark errors in the three datasets (5.37±4.27, 4.18±1.97, and 6.18±3.38 mm, respectively) to a consistently low level (2.28±0.71, 2.08±0.37, and 2.24±0.78 mm, respectively). The algorithm was tested against 15 other algorithms and is competitive with the state-of-the-art on multiple datasets. We show that the algorithm has one of the lowest errors in all datasets (accuracy), achieved with a fixed set of parameters across the multi-site data (generality). In contrast, other algorithms/tools of similar performance need per-dataset parameter tuning (high accuracy but lower generality), and those that keep fixed parameters have larger errors or inconsistent performance (generality but not top accuracy). Landmark errors were further characterized according to brain regions and tumor types, a topic so far missing in the literature.
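The landmark-error statistics reported in this entry (mean ± SD of Euclidean distances between corresponding landmarks) amount to the mean target registration error used throughout this list; a minimal sketch with a hypothetical function name:

```python
import numpy as np

def mtre(landmarks_fixed, landmarks_warped):
    """Mean target registration error: mean and SD of the Euclidean
    distances between corresponding landmark pairs (N x 3 arrays, in mm)."""
    diff = np.asarray(landmarks_fixed, float) - np.asarray(landmarks_warped, float)
    d = np.linalg.norm(diff, axis=1)     # per-landmark distance
    return float(d.mean()), float(d.std())
```

Reporting both the mean and the SD, as in the figures above (e.g. 2.28±0.71 mm), distinguishes consistently accurate methods from ones with a few large outliers.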
Affiliation(s)
- Inês Machado
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Mechanical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal.
- Matthew Toews
- Department of Systems Engineering, École de Technologie Supérieure, Montreal, Canada
- Elizabeth George
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Prashin Unadkat
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Walid Essayed
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Jie Luo
- Graduate School of Frontier Sciences, University of Tokyo, Tokyo, Japan
- Pedro Teodoro
- Escola Superior Náutica Infante D. Henrique, Lisbon, Portugal
- Herculano Carvalho
- Department of Neurosurgery, Hospital de Santa Maria, CHLN, Lisbon, Portugal
- Jorge Martins
- Department of Mechanical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Steve Pieper
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Isomics, Inc., Cambridge, MA, USA
- Sarah Frisken
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Alexandra Golby
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- William Wells III
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Yangming Ou
- Department of Pediatrics and Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.
23
Yang F, Ding M, Zhang X. Non-Rigid Multi-Modal 3D Medical Image Registration Based on Foveated Modality Independent Neighborhood Descriptor. Sensors 2019; 19:4675. [PMID: 31661828] [PMCID: PMC6864520] [DOI: 10.3390/s19214675]
Abstract
Non-rigid multi-modal three-dimensional (3D) medical image registration is highly challenging due to the difficulty of constructing a similarity measure and solving for the non-rigid transformation parameters. A novel structural representation based registration method is proposed to address these problems. Firstly, an improved modality independent neighborhood descriptor (MIND) based on foveated nonlocal self-similarity is designed for effective structural representation of 3D medical images, transforming multi-modal image registration into a mono-modal one. The sum of absolute differences between structural representations is computed as the similarity measure. Subsequently, the foveated MIND based spatial constraint is introduced into the Markov random field (MRF) optimization to reduce the number of transformation parameters and restrict the calculation of the energy function to the image region involving non-rigid deformation. Finally, accurate and efficient 3D medical image registration is realized by minimizing the similarity measure based MRF energy function. Extensive experiments on 3D positron emission tomography (PET), computed tomography (CT), and T1, T2, and proton density (PD) weighted magnetic resonance (MR) images with synthetic deformation demonstrate that the proposed method has higher computational efficiency and registration accuracy, in terms of target registration error (TRE), than registration methods based on the hybrid L-BFGS-B and cat swarm optimization (HLCSO), the sum of squared differences on entropy images, the MIND, and the self-similarity context (SSC) descriptor, except that it yields slightly larger TRE than the HLCSO for CT-PET image registration. Experiments on real MR and ultrasound images with unknown deformation have also been conducted to demonstrate the practicality and superiority of the proposed method.
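For orientation, a heavily simplified 2-D sketch of the plain MIND idea is shown below: each pixel gets a descriptor of patch self-similarities to a few neighboring offsets, and two images are compared by the mean absolute difference of their descriptors. It omits the foveated weighting and MRF optimization that this entry actually proposes, and all names are hypothetical:

```python
import numpy as np

def _patch_ssd(img, offset, radius=1):
    """Per-pixel sum of squared differences between each patch and the
    patch displaced by `offset` (with wrap-around, for brevity)."""
    dy, dx = offset
    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    diff2 = (img - shifted) ** 2
    k = 2 * radius + 1
    pad = np.pad(diff2, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(k):                      # box-filter the squared diffs
        for j in range(k):
            out += pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def mind_descriptor(img, offsets=((0, 1), (1, 0), (0, -1), (-1, 0)), radius=1):
    """Simplified MIND: per-pixel self-similarity descriptor, normalized
    so the maximum channel is 1 at every pixel."""
    ssd = np.stack([_patch_ssd(img, r, radius) for r in offsets])
    v = ssd.mean(axis=0) + 1e-6             # crude local variance estimate
    d = np.exp(-ssd / v)
    return d / d.max(axis=0, keepdims=True)

def mind_sad(img_a, img_b):
    """Modality-robust dissimilarity: mean absolute descriptor difference."""
    return float(np.abs(mind_descriptor(img_a) - mind_descriptor(img_b)).mean())
```

Because the descriptor depends only on intensity *differences within* each image, it is unchanged under contrast inversion, which is the property that makes MIND-style measures usable across modalities.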
Affiliation(s)
- Feng Yang
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China; School of Computer and Electronics and Information, Guangxi University, Nanning 530004, China.
- Mingyue Ding
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China.
- Xuming Zhang
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China.
24
Abstract
PURPOSE This pilot study aimed to evaluate the amino acid tracer 18F-FACBC with simultaneous PET/MRI in the diagnostic assessment and neurosurgery of gliomas. MATERIALS AND METHODS Eleven patients with suspected primary or recurrent low- or high-grade glioma received an 18F-FACBC PET/MRI examination before surgery. PET and MRI were used for diagnostic assessment and for guiding tumor resection and histopathological tissue sampling. PET uptake, tumor-to-background ratios (TBRs), time-activity curves, and PET and MRI tumor volumes were evaluated. The sensitivities of lesion detection and of detecting glioma tissue were calculated for PET, MRI, and combined PET/MRI, with histopathology (biopsies for final diagnosis and additional image-localized biopsies) as the reference. RESULTS Overall sensitivity for lesion detection was 54.5% (95% confidence interval [CI], 23.4-83.3) for PET, 45.5% (95% CI, 16.7-76.6) for contrast-enhanced MRI (MRICE), and 100% (95% CI, 71.5-100.0) for combined PET/MRI, with a significant difference between MRICE and combined PET/MRI (P = 0.031). TBRs increased with tumor grade (P = 0.004) and were stable from 10 minutes post injection. PET tumor volumes enclosed most of the MRICE volumes (>98%) and were generally larger (1.5-2.8 times) than the MRICE volumes. Based on image-localized biopsies, combined PET/MRI demonstrated higher concurrence with malignant findings at histopathology (89.5%) than MRICE (26.3%). CONCLUSIONS Differentiation of low- versus high-grade glioma may be possible with 18F-FACBC using TBRs. 18F-FACBC PET/MRI outperformed MRICE in lesion detection and in detection of glioma tissue. More research is required to evaluate 18F-FACBC properties, especially in grade II and III tumors and for different subtypes of gliomas.
25
Farnia P, Najafzadeh E, Ahmadian A, Makkiabadi B, Alimohamadi M, Alirezaie J. Co-Sparse Analysis Model Based Image Registration to Compensate Brain Shift by Using Intra-Operative Ultrasound Imaging. Annu Int Conf IEEE Eng Med Biol Soc 2018:1-4. [PMID: 30440252] [DOI: 10.1109/embc.2018.8512375]
Abstract
Notwithstanding the widespread use of image-guided neurosurgery systems in recent years, the accuracy of these systems is strongly limited by the intra-operative deformation of the brain tissue, the so-called brain shift. Intra-operative ultrasound (iUS) imaging is an effective solution for compensating the complex brain shift phenomenon: it updates the patient coordinates during surgery through registration of the intra-operative ultrasound with the pre-operative MRI data, which is a challenging problem. In this work, a non-rigid multimodal image registration technique based on a co-sparse analysis model is proposed. This model captures the interdependency of two image modalities: MRI as an intensity image and iUS as a depth image. Based on this model, the transformation between the two modalities is minimized using a bimodal pair of analysis operators, learned by optimizing a joint co-sparsity function with a conjugate gradient method. Experimental validation confirms that our registration approach quantitatively outperforms several other state-of-the-art registration methods. The evaluation was performed using seven patient datasets, with a mean registration error of only 1.83 mm. Our intensity-based co-sparse analysis model improved the accuracy of non-rigid multimodal medical image registration by 15.37% compared to the curvelet-based residual complexity, a powerful registration method, in a computational time compatible with clinical use.
26
Pohlman RM, Turney MR, Wu P, Brace CL, Ziemlewicz TJ, Varghese T. Two-dimensional ultrasound-computed tomography image registration for monitoring percutaneous hepatic intervention. Med Phys 2019; 46:2600-2609. [PMID: 31009079 PMCID: PMC6758542 DOI: 10.1002/mp.13554] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Received: 08/06/2018] [Revised: 04/14/2019] [Accepted: 04/15/2019] [Indexed: 01/03/2023]
Abstract
PURPOSE Deformable registration of ultrasound (US) and contrast-enhanced computed tomography (CECT) images is essential for quantitative comparison of ablation boundaries and dimensions determined using these modalities. This comparison is essential as stiffness-based imaging using US has become popular and offers a nonionizing and cost-effective imaging modality for monitoring minimally invasive microwave ablation procedures. A practical manual registration method is presented that performs the required CT-US image registration. METHODS The two-dimensional (2D) virtual CT image plane that corresponds to the clinical US B-mode was obtained by "virtually slicing" the 3D CT volume along the plane containing non-anatomical landmarks, namely points along the microwave ablation antenna. The initial slice plane was generated using the vector acquired by rotating the normal vector of the transverse (i.e., xz) plane along the angle subtended by the antenna. This plane was then further rotated along the ablation antenna and shifted along the direction of the normal vector to obtain similar anatomical structures, such as the liver surface and vasculature, visualized on both the CT virtual slice and US B-mode images in 20 patients. Finally, an affine transformation was estimated using anatomic and non-anatomic landmarks to account for distortion between the colocated CT virtual slice and US B-mode image, resulting in a final registered CT virtual slice. Registration accuracy was measured by estimating the Euclidean distance between corresponding registered points on CT and US B-mode images. RESULTS The mean ± SD of the affine-transformed registration error was 1.85 ± 2.14 mm, computed from 20 coregistered data sets. CONCLUSIONS Our results demonstrate the ability to obtain 2D virtual CT slices that are registered to clinical US B-mode images. The use of both anatomical and non-anatomical landmarks results in accurate registration useful for validating ablative margins and for comparison with electrode displacement elastography-based images.
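The landmark-based affine step this abstract describes is standard; as a minimal sketch (not the authors' implementation; function names are illustrative), a 2D affine transform can be fit to paired landmarks by least squares, with registration error reported as the mean Euclidean distance between transformed and target points:

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2D affine transform mapping src landmarks onto dst.

    src, dst: (N, 2) arrays of corresponding landmark coordinates (N >= 3).
    Returns a (2, 3) matrix A such that dst ~= A @ [x, y, 1]^T.
    """
    n = src.shape[0]
    # Homogeneous design matrix: each row is [x, y, 1].
    X = np.hstack([src, np.ones((n, 1))])
    # Solve X @ A.T ~= dst in the least-squares sense.
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A_T.T

def registration_error(A, src, dst):
    """Mean Euclidean distance between transformed src and dst landmarks."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])
    mapped = X @ A.T
    return float(np.mean(np.linalg.norm(mapped - dst, axis=1)))
```

With exactly corresponding landmarks generated by a known affine map, the fit recovers that map and the residual error is essentially zero; with noisy clinical landmarks the same error becomes the reported registration accuracy.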
Affiliation(s)
- Robert M. Pohlman
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53706, USA
- Michael R. Turney
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53706, USA
- Po-Hung Wu
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53706, USA
- Christopher L. Brace
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53706, USA
- Timothy J. Ziemlewicz
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53706, USA
- Tomy Varghese
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53706, USA

27
Santos CAN, Mascarenhas NDA. Patch similarity in ultrasound images with hypothesis testing and stochastic distances. Comput Med Imaging Graph 2019; 74:37-48. [PMID: 30978595 DOI: 10.1016/j.compmedimag.2019.03.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/22/2018] [Revised: 02/26/2019] [Accepted: 03/05/2019] [Indexed: 10/27/2022]
Abstract
Patch-based techniques have been widely applied to process ultrasound (US) images, with applications in various fields such as denoising, segmentation, and registration. An important aspect of the performance of these techniques is how the similarity between patches is measured. While it is usual to base the similarity on the Euclidean distance when processing images corrupted by additive Gaussian noise, finding measures suited to the multiplicative nature of speckle in US images is still an open research problem. In this work, we propose new stochastic distances based on the statistical characteristics of speckle in US. Additionally, we derive statistical measures to compose hypothesis tests that allow a quantitative decision on the patch similarity of US images. Good results in denoising, segmentation, and similar-patch selection experiments confirm the potential of the proposed measures.
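The paper derives its distances from its own speckle statistics; as a simplified illustration only (not the authors' measures), if patch intensities are modeled as Gamma-distributed with a common, known shape L (the number of looks) and patch-specific means, the symmetrized Kullback-Leibler distance between two patches collapses to a closed form in the two sample means:

```python
import numpy as np

def gamma_skl_distance(patch_a, patch_b, looks=4.0):
    """Symmetrized Kullback-Leibler distance between two speckled patches.

    Illustrative assumption: each patch's intensities follow a Gamma
    distribution with common shape `looks` and patch-specific mean, so the
    log terms of the two KL directions cancel and the distance reduces to
        d = L * (m_a / m_b + m_b / m_a - 2),
    which is zero iff the estimated patch means agree.
    """
    m_a = float(np.mean(patch_a))
    m_b = float(np.mean(patch_b))
    return looks * (m_a / m_b + m_b / m_a - 2.0)
```

Unlike the Euclidean distance, this quantity scales with the ratio of the patch means rather than their difference, which matches the multiplicative character of speckle.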
Affiliation(s)
- Cid A N Santos
- Federal University of São Carlos, Washington Luís Highway, km 235, PO Box 676, São Carlos, Brazil
- Nelson D A Mascarenhas
- Federal University of São Carlos, Washington Luís Highway, km 235, PO Box 676, São Carlos, Brazil; Centro Universitário Campo Limpo Paulista, Guatemala Street, 167, Campo Limpo Paulista, Brazil

28
Automatic and efficient MRI-US segmentations for improving intraoperative image fusion in image-guided neurosurgery. Neuroimage Clin 2019; 22:101766. [PMID: 30901714 PMCID: PMC6425116 DOI: 10.1016/j.nicl.2019.101766] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Received: 04/16/2018] [Revised: 01/20/2019] [Accepted: 03/10/2019] [Indexed: 11/24/2022]
Abstract
Knowledge of the exact tumor location and of structures at risk in its vicinity is crucial for neurosurgical interventions. Neuronavigation systems support navigation within the patient's brain based on preoperative MRI (preMRI). However, increasing tissue deformation during the course of tumor resection reduces navigation accuracy based on preMRI. Intraoperative ultrasound (iUS) is therefore used as real-time intraoperative imaging. Registration of preMRI and iUS remains a challenge due to different or varying contrasts in iUS and preMRI. Here, we present an automatic and efficient segmentation of B-mode US images to support the registration process. The falx cerebri and the tentorium cerebelli were identified as examples of central cerebral structures, and their segmentations can serve as a guiding frame for multi-modal image registration. Segmentations of the falx and tentorium were performed with an average Dice coefficient of 0.74 and an average Hausdorff distance of 12.2 mm. The subsequent registration incorporates these segmentations and increases the accuracy, robustness, and speed of the overall registration process compared to purely intensity-based registration. For validation, an expert manually located corresponding landmarks. Our approach reduces the initial mean target registration error (TRE) from 16.9 mm to 3.8 mm using our intensity-based registration, and to 2.2 mm with our combined segmentation and registration approach. The intensity-based registration reduced the maximum initial TRE from 19.4 mm to 5.6 mm; with the approach incorporating segmentations, this is reduced to 3.0 mm. Mean volumetric intensity-based registration of preMRI and iUS took 40.5 s; with segmentations included, it took 12.0 s. We demonstrate that our segmentation-based registration increases the accuracy, robustness, and speed of multi-modal registration of preoperative MRI and intraoperative ultrasound images for improving intraoperative image-guided neurosurgery. To this end, we provide a fast and efficient segmentation of central anatomical structures of the perifalcine region on ultrasound images. We demonstrate the advantages of our method by comparing the results of our segmentation-based registration with the initial registration provided by the navigation system and with an intensity-based registration approach.
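The two segmentation-quality metrics quoted in this abstract, Dice coefficient and Hausdorff distance, can be computed as follows (a generic sketch, not the authors' code; function names are illustrative):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a).astype(bool)
    b = np.asarray(mask_b).astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets of shape (N, D)."""
    # Pairwise distances, then the worst-case nearest-neighbor distance
    # in each direction; the symmetric Hausdorff distance is their maximum.
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

Dice rewards volumetric overlap while the Hausdorff distance penalizes the single worst boundary disagreement, which is why the two are usually reported together, as above.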
29
Pinzi M, Galvan S, Rodriguez Y Baena F. The Adaptive Hermite Fractal Tree (AHFT): a novel surgical 3D path planning approach with curvature and heading constraints. IEEE Robot Autom Lett 2019. [PMID: 30790172 DOI: 10.1109/lra.2016.2528292] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Indexed: 11/10/2022]
Abstract
PURPOSE In the context of minimally invasive neurosurgery, steerable needles such as the one developed within the Horizon2020-funded EDEN2020 project (Frasson et al. in Proc Inst Mech Eng Part H J Eng Med 224(6):775-88, 2010. https://doi.org/10.1243/09544119JEIM663 ; Secoli and y Baena in IEEE international conference on robotics and automation, 2013) aspire to address the clinical challenge of better treatment for cancer patients. The direct, precise infusion of drugs in the proximity of a tumor has been shown to enhance its effectiveness and diffusion in the surrounding tissue (Vogelbaum and Aghi in Neuro-Oncology 17(suppl 2):ii3-ii8, 2015. https://doi.org/10.1093/neuonc/nou354 ). However, planning for an appropriate insertion trajectory for needles such as the one proposed by EDEN2020 is challenging due to factors like kinematic constraints, the presence of complex anatomical structures such as brain vessels, and constraints on the required start and target poses. METHODS We propose a new parallelizable three-dimensional (3D) path planning approach called Adaptive Hermite Fractal Tree (AHFT), which is able to generate 3D obstacle-free trajectories that satisfy curvature constraints given a specified start and target pose. The AHFT combines the Adaptive Fractal Tree algorithm's efficiency (Liu et al. in IEEE Robot Autom Lett 1(2):601-608, 2016. https://doi.org/10.1109/LRA.2016.2528292 ) with optimized geometric Hermite (Yong and Cheng in Comput Aided Geom Des 21(3):281-301, 2004. https://doi.org/10.1016/j.cagd.2003.08.003 ) curves, which are able to handle heading constraints. RESULTS Simulated results demonstrate the robustness of the AHFT to perturbations of the target position and target heading. Additionally, a simulated preoperative environment, where the surgeon is able to select a desired entry pose on the patient's skull, confirms the ability of the method to generate multiple feasible trajectories for a patient-specific case. 
CONCLUSIONS The AHFT method can be adopted in any field of application where a 3D path planner with kinematic and heading constraints on both start and end poses is required.
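A cubic Hermite curve, the building block behind the optimized geometric Hermite segments this abstract refers to, interpolates endpoint positions and endpoint tangents (the heading constraints). A generic sketch, not the AHFT planner itself:

```python
import numpy as np

def hermite_curve(p0, p1, t0, t1, n=50):
    """Sample a cubic Hermite curve at n parameter values in [0, 1].

    p0, p1: endpoint positions; t0, t1: endpoint tangents (headings).
    Returns an (n, D) array of points; the curve passes through p0 and p1
    with derivatives t0 and t1 at the ends.
    """
    s = np.linspace(0.0, 1.0, n)[:, None]
    # Standard cubic Hermite basis functions.
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * p0 + h10 * t0 + h01 * p1 + h11 * t1
```

Because position and tangent are prescribed at both ends, chaining such segments lets a planner satisfy start and target poses exactly while tuning the tangent magnitudes to respect curvature limits.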
Affiliation(s)
- Marlene Pinzi
- Mechatronics in Medicine Laboratory, Department of Mechanical Engineering, Imperial College, London, UK
- Stefano Galvan
- Mechatronics in Medicine Laboratory, Department of Mechanical Engineering, Imperial College, London, UK

30
Banerjee J, Sun Y, Klink C, Gahrmann R, Niessen WJ, Moelker A, van Walsum T. Multiple-correlation similarity for block-matching based fast CT to ultrasound registration in liver interventions. Med Image Anal 2019; 53:132-141. [PMID: 30772666 DOI: 10.1016/j.media.2019.02.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Received: 07/24/2018] [Revised: 01/23/2019] [Accepted: 02/07/2019] [Indexed: 11/24/2022]
Abstract
In this work we present a fast approach for registering computed tomography to ultrasound volumes in image-guided intervention applications. The method is based on a combination of block-matching and outlier rejection. The block-matching uses a correlation-based multimodal similarity metric, taking the intensity and the gradient of the computed tomography image, along with the ultrasound volume, as inputs to find correspondences between blocks in the computed tomography and the ultrasound volumes. A variance- and octree-based feature point-set selection method is used to select distinct and evenly spread point locations for block-matching. Geometric consistency and smoothness criteria are imposed in an outlier rejection step to refine the block-matching results. The block-matching results after outlier rejection are used to determine the affine transformation between the computed tomography and the ultrasound volumes. Various experiments were carried out to assess the optimal performance and the influence of parameters on the accuracy and computational time of the registration. A leave-one-patient-out cross-validation registration error of 3.6 mm is achieved over 29 datasets acquired from 17 patients.
Affiliation(s)
- Jyotirmoy Banerjee
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Yuanyuan Sun
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Camiel Klink
- Department of Radiology & Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Renske Gahrmann
- Department of Radiology & Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Wiro J Niessen
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands; Quantitative Imaging Group, Faculty of Technical Physics, Delft University of Technology, The Netherlands
- Adriaan Moelker
- Department of Radiology & Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Theo van Walsum
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands

31
Masoumi N, Xiao Y, Rivaz H. ARENA: Inter-modality affine registration using evolutionary strategy. Int J Comput Assist Radiol Surg 2018; 14:441-450. [DOI: 10.1007/s11548-018-1897-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Received: 07/26/2018] [Accepted: 12/03/2018] [Indexed: 10/27/2022]
32
Iversen DH, Wein W, Lindseth F, Unsgård G, Reinertsen I. Automatic Intraoperative Correction of Brain Shift for Accurate Neuronavigation. World Neurosurg 2018; 120:e1071-e1078. [DOI: 10.1016/j.wneu.2018.09.012] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Received: 04/12/2018] [Revised: 08/30/2018] [Accepted: 09/02/2018] [Indexed: 11/29/2022]
33
Gong L, Zhang C, Duan L, Du X, Liu H, Chen X, Zheng J. Nonrigid Image Registration Using Spatially Region-Weighted Correlation Ratio and GPU-Acceleration. IEEE J Biomed Health Inform 2018; 23:766-778. [PMID: 29994777 DOI: 10.1109/jbhi.2018.2836380] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Indexed: 11/10/2022]
Abstract
OBJECTIVE Nonrigid image registration with high accuracy and efficiency remains a challenging task for medical image analysis. In this paper, we present the spatially region-weighted correlation ratio (SRWCR) as a novel similarity measure to improve registration performance. METHODS SRWCR is rigorously deduced from a three-dimensional joint probability density function combining the intensity channels with an extra spatial information channel. SRWCR estimates the optimal functional dependence between the intensities for each spatial bin, in which the spatial distribution modeled by a cubic B-spline function is used to differentiate the contribution of voxels. We also analytically derive the gradient of SRWCR with respect to the transformation parameters and optimize it using a quasi-Newton approach. Furthermore, we propose a GPU-based parallel mechanism to accelerate the computation of SRWCR and its derivatives. RESULTS The experiments on synthetic images, a public four-dimensional thoracic computed tomography (CT) dataset, retinal optical coherence tomography data, and clinical CT and positron emission tomography images confirm that SRWCR significantly outperforms some state-of-the-art techniques such as spatially encoded mutual information and the Robust PaTch-based cOrrelation Ratio (RaPTOR). CONCLUSION This study demonstrates the advantages of SRWCR in tackling the practical difficulties due to distinct intensity changes, serious speckle noise, or different imaging modalities. SIGNIFICANCE The proposed registration framework may be more reliable for correcting nonrigid deformations and holds greater potential for clinical applications.
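The plain (unweighted) correlation ratio underlying SRWCR measures how well one image's intensities are explained as a function of the other's: it is one minus the fraction of the target's variance left unexplained within intensity bins of the source. A minimal binned estimator, illustrative only and without the paper's spatial weighting:

```python
import numpy as np

def correlation_ratio(x, y, nbins=32):
    """Correlation ratio eta^2 of y given binned x, in [0, 1].

    eta^2 = 1 - sum_k N_k * Var(y | bin k) / (N * Var(y));
    1 means y is a deterministic function of the x-bin, 0 means no
    functional dependence.
    """
    x = np.ravel(x).astype(float)
    y = np.ravel(y).astype(float)
    total_var = y.var()
    if total_var == 0:
        return 1.0
    edges = np.linspace(x.min(), x.max(), nbins + 1)
    bins = np.digitize(x, edges[1:-1])  # bin index in [0, nbins-1]
    within = 0.0
    for k in np.unique(bins):
        yk = y[bins == k]
        within += yk.size * yk.var()
    return 1.0 - within / (y.size * total_var)
```

Unlike mutual information, the correlation ratio is asymmetric: it assumes a functional (not merely statistical) relation from one modality's intensities to the other's, which is often a good fit for CT/US or MR/US pairs.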
34
Shams R, Xiao Y, Hebert F, Abramowitz M, Brooks R, Rivaz H. Assessment of Rigid Registration Quality Measures in Ultrasound-Guided Radiotherapy. IEEE Trans Med Imaging 2018; 37:428-437. [PMID: 28976313 DOI: 10.1109/tmi.2017.2755695] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Indexed: 06/07/2023]
Abstract
Image guidance has become the standard of care for patient positioning in radiotherapy, where image registration is often a critical step to help manage patient motion. However, in practice, verification of registration quality is often adversely affected by the difficulty of manually inspecting 3-D images and by time constraints, thus affecting the therapeutic outcome. Therefore, we propose to employ both bootstrapping and the supervised learning methods of linear discriminant analysis and random forests to robustly assess registration quality in ultrasound-guided radiotherapy. We validated both approaches using phantom and real clinical ultrasound images, and showed that both performed well for the task. While learning-based techniques offer better accuracy and shorter evaluation time, bootstrapping requires no prior training and has higher sensitivity.
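The bootstrapping idea, applied for instance to per-landmark registration errors, can be sketched as a percentile bootstrap confidence interval for the mean error (a generic illustration, not the authors' pipeline; the function name is made up):

```python
import numpy as np

def bootstrap_mean_ci(errors, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of `errors`.

    Resamples the error values with replacement n_boot times and returns
    the (alpha/2, 1 - alpha/2) percentiles of the resampled means.
    """
    rng = np.random.default_rng(seed)
    errors = np.asarray(errors, dtype=float)
    means = np.array([
        rng.choice(errors, size=errors.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(lo), float(hi)
```

A wide interval signals an unstable registration that deserves manual inspection, without requiring any labeled training data, which is the practical appeal noted in the abstract.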
35
Xiao Y, Eikenes L, Reinertsen I, Rivaz H. Nonlinear deformation of tractography in ultrasound-guided low-grade gliomas resection. Int J Comput Assist Radiol Surg 2018; 13:457-467. [DOI: 10.1007/s11548-017-1699-x] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Received: 09/29/2017] [Accepted: 12/21/2017] [Indexed: 11/24/2022]
36
Karlberg A, Berntsen EM, Johansen H, Myrthue M, Skjulsvik AJ, Reinertsen I, Esmaeili M, Dai HY, Xiao Y, Rivaz H, Borghammer P, Solheim O, Eikenes L. Multimodal 18F-Fluciclovine PET/MRI and Ultrasound-Guided Neurosurgery of an Anaplastic Oligodendroglioma. World Neurosurg 2017; 108:989.e1-989.e8. [DOI: 10.1016/j.wneu.2017.08.085] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Received: 03/08/2017] [Revised: 08/10/2017] [Accepted: 08/12/2017] [Indexed: 11/28/2022]
37
Liu X, Tang Z, Wang M, Song Z. Deformable multi-modal registration using 3D-FAST conditioned mutual information. Comput Assist Surg (Abingdon) 2017; 22:295-304. [DOI: 10.1080/24699322.2017.1389408] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Indexed: 10/18/2022]
Affiliation(s)
- Xueli Liu
- Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Zhixian Tang
- Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Manning Wang
- Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Zhijian Song
- Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China

38
Xiao Y, Fortin M, Unsgård G, Rivaz H, Reinertsen I. REtroSpective Evaluation of Cerebral Tumors (RESECT): A clinical database of pre-operative MRI and intra-operative ultrasound in low-grade glioma surgeries. Med Phys 2017; 44:3875-3882. [PMID: 28391601 DOI: 10.1002/mp.12268] [Citation(s) in RCA: 51] [Impact Index Per Article: 7.3] [Received: 10/11/2016] [Revised: 03/05/2017] [Accepted: 04/05/2017] [Indexed: 11/11/2022]
Abstract
PURPOSE The advancement of medical image processing techniques, such as image registration, can effectively help improve the accuracy and efficiency of brain tumor surgeries. However, it is often challenging to validate these techniques with real clinical data due to the rarity of such publicly available repositories. ACQUISITION AND VALIDATION METHODS Pre-operative magnetic resonance images (MRI) and intra-operative ultrasound (US) scans were acquired from 23 patients with low-grade gliomas who underwent surgeries at St. Olavs University Hospital between 2011 and 2016. Each patient was scanned with Gadolinium-enhanced T1w and T2-FLAIR MRI protocols to reveal the anatomy and pathology, and a series of B-mode ultrasound images was obtained before, during, and after tumor resection to track the surgical progress and tissue deformation. Retrospectively, corresponding anatomical landmarks were identified across US images of different surgical stages, and between MRI and US; these can be used to validate image registration algorithms. The quality of landmark identification was assessed with intra- and inter-rater variability. DATA FORMAT AND ACCESS In addition to co-registered MRIs, each series of US scans is provided as a reconstructed 3D volume. All images are accessible in MINC2 and NIFTI formats, and the anatomical landmarks were annotated in MNI tag files. Both the imaging data and the corresponding landmarks are available online as the RESECT database at https://archive.norstore.no (search for "RESECT"). POTENTIAL IMPACT The proposed database provides real high-quality multi-modal clinical data to validate and compare image registration algorithms that can potentially benefit the accuracy and efficiency of brain tumor resection. Furthermore, the database can also be used to test other image processing methods and neuro-navigation software platforms.
Affiliation(s)
- Yiming Xiao
- PERFORM Centre, Concordia University, Montreal, H4B 1R6, Canada; Department of Electrical and Computer Engineering, Concordia University, Montreal, H3G 1M8, Canada
- Maryse Fortin
- PERFORM Centre, Concordia University, Montreal, H4B 1R6, Canada; Department of Electrical and Computer Engineering, Concordia University, Montreal, H3G 1M8, Canada
- Geirmund Unsgård
- Department of Neurosurgery, St. Olavs University Hospital, Trondheim, NO-7006, Norway; Department of Neuroscience, Norwegian University of Science and Technology, Trondheim, NO-7491, Norway; Norwegian National Advisory Unit for Ultrasound and Image Guided Therapy, St. Olavs University Hospital, Trondheim, NO-7006, Norway
- Hassan Rivaz
- PERFORM Centre, Concordia University, Montreal, H4B 1R6, Canada; Department of Electrical and Computer Engineering, Concordia University, Montreal, H3G 1M8, Canada
- Ingerid Reinertsen
- Department of Medical Technology, SINTEF, Trondheim, NO-7465, Norway; Norwegian National Advisory Unit for Ultrasound and Image Guided Therapy, St. Olavs University Hospital, Trondheim, NO-7006, Norway

39
Geometric modeling of hepatic arteries in 3D ultrasound with unsupervised MRA fusion during liver interventions. Int J Comput Assist Radiol Surg 2017; 12:961-972. [PMID: 28271356 DOI: 10.1007/s11548-017-1550-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/28/2017] [Accepted: 02/27/2017] [Indexed: 10/20/2022]
Abstract
PURPOSE Modulating the chemotherapy injection rate with regard to blood flow velocities in the tumor-feeding arteries during intra-arterial therapies may help improve liver tumor targeting while decreasing systemic exposure. These velocities can be obtained noninvasively using Doppler ultrasound (US). However, small vessels situated in the liver are difficult to identify and follow in US. We propose a multimodal fusion approach that non-rigidly registers a 3D geometric mesh model of the hepatic arteries, obtained from preoperative MR angiography (MRA) acquisitions, with intra-operative 3D US imaging. METHODS The proposed fusion tool integrates three imaging modalities: an arterial-phase MRA, a portal-phase MRA, and an intra-operative 3D US. Preoperatively, the arterial-phase MRA is used to generate a 3D model of the hepatic arteries, which is then non-rigidly co-registered with the portal-phase MRA. Once the intra-operative 3D US is acquired, we register it with the portal MRA using a vessel-based rigid initialization followed by a non-rigid registration using an image-based metric based on linear correlation of linear combination. Using the combined non-rigid transformation matrices, the 3D mesh model is fused with the 3D US. RESULTS 3D US and multi-phase MRA images acquired from 10 porcine models were used to test the performance of the proposed fusion tool. Unimodal registration of the MRA phases yielded a target registration error (TRE) of [Formula: see text] mm. Initial rigid alignment of the portal MRA and 3D US yielded a mean TRE of [Formula: see text] mm, which was significantly reduced to [Formula: see text] mm ([Formula: see text]) after affine image-based registration. The subsequent deformable registration step further decreased the mean TRE to [Formula: see text] mm. CONCLUSION The proposed tool could facilitate the visualization and localization of these vessels when using 3D US intra-operatively, for either intravascular or percutaneous interventions, to avoid vessel perforation.
40
Vijayan RC, Thompson RC, Chambless LB, Morone PJ, He L, Clements LW, Griesenauer RH, Kang H, Miga MI. Android application for determining surgical variables in brain-tumor resection procedures. J Med Imaging (Bellingham) 2017; 4:015003. [PMID: 28331887 DOI: 10.1117/1.jmi.4.1.015003] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Received: 12/16/2016] [Accepted: 02/13/2017] [Indexed: 11/14/2022]
Abstract
The fidelity of image-guided neurosurgical procedures is often compromised due to the mechanical deformations that occur during surgery. In recent work, a framework was developed to predict the extent of this brain shift in brain-tumor resection procedures. The approach uses preoperatively determined surgical variables to predict brain shift and then subsequently corrects the patient's preoperative image volume to more closely match the intraoperative state of the patient's brain. However, a clinical workflow difficulty with the execution of this framework is the preoperative acquisition of surgical variables. To simplify and expedite this process, an Android, Java-based application was developed for tablets to provide neurosurgeons with the ability to manipulate three-dimensional models of the patient's neuroanatomy and determine an expected head orientation, craniotomy size and location, and trajectory to be taken into the tumor. These variables can then be exported for use as inputs to the biomechanical model associated with the correction framework. A multisurgeon, multicase mock trial was conducted to compare the accuracy of the virtual plan to that of a mock physical surgery. It was concluded that the Android application was an accurate, efficient, and timely method for planning surgical variables.
Affiliation(s)
- Rohan C Vijayan
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Reid C Thompson
- Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States
- Lola B Chambless
- Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States
- Peter J Morone
- Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States
- Le He
- Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States
- Logan W Clements
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Rebekah H Griesenauer
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Hakmook Kang
- Vanderbilt University Medical Center, Department of Biostatistics, Nashville, Tennessee, United States
- Michael I Miga
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States; Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States; Vanderbilt University Medical Center, Department of Radiology and Radiological Sciences, Nashville, Tennessee, United States

41
Gong L, Wang H, Peng C, Dai Y, Ding M, Sun Y, Yang X, Zheng J. Non-rigid MR-TRUS image registration for image-guided prostate biopsy using correlation ratio-based mutual information. Biomed Eng Online 2017; 16:8. [PMID: 28086888 PMCID: PMC5234261 DOI: 10.1186/s12938-016-0308-5] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Received: 09/10/2016] [Accepted: 12/27/2016] [Indexed: 11/10/2022]
Abstract
BACKGROUND To improve the accuracy of ultrasound-guided biopsy of the prostate, the non-rigid registration of magnetic resonance (MR) images onto transrectal ultrasound (TRUS) images has gained increasing attention. Mutual information (MI) is a widely used similarity criterion in MR-TRUS image registration. However, the use of MI has been challenged because of intensity distortion, noise, and down-sampling. Hence, the MI measure needs to be improved to obtain better registration results. METHODS We present a novel two-dimensional non-rigid MR-TRUS registration algorithm that uses correlation ratio-based mutual information (CRMI) as the similarity criterion. CRMI includes a functional mapping of intensity values on the basis of a generalized version of intensity class correspondence. We also analytically derive the derivative of CRMI with respect to the deformation parameters. Furthermore, we propose an improved stochastic gradient descent (ISGD) optimization method based on the Metropolis acceptance criterion to improve the global optimization ability and decrease the registration time. RESULTS The performance of the proposed method was tested on synthetic images and 12 pairs of clinical prostate TRUS and MR images. Compared with the label map registration framework (LMRF) and conditional mutual information (CMI), the proposed algorithm yields a significant improvement in the average Hausdorff distance and target registration error. Although the average Dice similarity coefficient is not significantly better than with CMI, it still shows a substantial increase over LMRF. The average computation time of the proposed method is similar to that of LMRF and 16 times less than that of CMI. CONCLUSION With more accurate matching performance and lower sensitivity to noise and down-sampling, the proposed algorithm of minimizing CRMI by ISGD is more robust and has potential for use in aligning TRUS and MR images for needle biopsy.
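The Metropolis acceptance idea behind ISGD can be illustrated on a toy problem: noisy gradient steps are always accepted when they lower the cost, and accepted with probability exp(-delta/T) otherwise, with the temperature T cooled over iterations. This is a sketch under simplified assumptions, not the authors' optimizer:

```python
import numpy as np

def metropolis_sgd(cost, grad, x0, lr=0.1, temp=1.0, cooling=0.95,
                   n_iter=200, seed=0):
    """Noisy gradient descent with a Metropolis acceptance step.

    Downhill moves are always accepted; uphill moves are accepted with
    probability exp(-delta / temp), so early (hot) iterations can escape
    local minima while late (cold) iterations behave greedily.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    f = cost(x)
    for _ in range(n_iter):
        # Stochastic gradient: the true gradient corrupted by noise,
        # standing in for a subsampled similarity-metric gradient.
        g = grad(x) + rng.normal(scale=0.1, size=x.shape)
        x_new = x - lr * g
        f_new = cost(x_new)
        delta = f_new - f
        if delta <= 0 or rng.random() < np.exp(-delta / temp):
            x, f = x_new, f_new
        temp *= cooling
    return x, f
```

On a one-dimensional quadratic cost with minimum at 3, the iterates settle near the minimum despite the gradient noise, which is the behavior the abstract attributes to combining stochastic gradients with the Metropolis criterion.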
Affiliation(s)
- Lun Gong: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; University of Chinese Academy of Sciences, Beijing, 100049, China
- Haifeng Wang: Department of Urology, Shanghai Changhai Hospital, Shanghai, 200433, China
- Chengtao Peng: Department of Electronic Science and Technology, University of Science and Technology of China, Hefei, 230061, China
- Yakang Dai: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Min Ding: School of Science, Nanjing University of Science and Technology, Nanjing, 210094, China
- Yinghao Sun: Department of Urology, Shanghai Changhai Hospital, Shanghai, 200433, China
- Xiaodong Yang: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Jian Zheng: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
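The correlation ratio-based MI (CRMI) criterion above is specific to Gong et al.'s formulation, but the plain mutual-information measure it generalizes can be estimated from a joint intensity histogram. The following is an illustrative sketch only, not the authors' implementation; the bin count and the random test images are arbitrary assumptions.

```python
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """MI(A, B) = H(A) + H(B) - H(A, B), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()                   # joint probability table
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginal distributions

    def entropy(p):
        p = p[p > 0]                            # drop zero bins to avoid log(0)
        return -np.sum(p * np.log(p))

    return entropy(px) + entropy(py) - entropy(pxy)

# an image shares more information with itself than with unrelated noise
rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
```

MI peaks when one image is a deterministic (even non-linear) function of the other, which is what makes it usable across modalities such as MR and TRUS.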
42
Mohammadi A, Ahmadian A, Rabbani S, Fattahi E, Shirani S. A combined registration and finite element analysis method for fast estimation of intraoperative brain shift; phantom and animal model study. Int J Med Robot 2016; 13. [PMID: 27917580] [DOI: 10.1002/rcs.1792]
Abstract
BACKGROUND Finite element models for the estimation of intraoperative brain shift suffer from a huge computational cost, in which image registration and finite element analysis are the two time-consuming processes. METHODS The proposed method is an improved version of our previously developed Finite Element Drift (FED) registration algorithm, in which the registration process is combined with the finite element analysis. In the Combined FED (CFED), the deformation of the whole-brain mesh is iteratively calculated by geometrical extension of a local load vector computed by FED. RESULTS While the processing time of the FED-based method, including registration and finite element analysis, was about 70 s, the computation time of the CFED was about 3.2 s. The computational cost of the CFED is almost 50% less than that of similar state-of-the-art brain shift estimators based on finite element models. CONCLUSIONS The proposed combination of registration and structural analysis can make the calculation of brain deformation much faster.
Affiliation(s)
- Amrollah Mohammadi: Department of Medical Physics & Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Alireza Ahmadian: Department of Medical Physics & Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran; Research Centre for Biomedical Technology and Robotics (RCBTR), Tehran, Iran
- Shahram Rabbani: Tehran Heart Center, Tehran University of Medical Sciences, Tehran, Iran
- Ehsan Fattahi: Department of Neurosurgery, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Shapour Shirani: Tehran Heart Center, Tehran University of Medical Sciences, Tehran, Iran
43
Rivaz H. Robust deformable registration of pre- and post-resection ultrasound volumes for visualization of residual tumor in neurosurgery. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2015:141-4. [PMID: 26736220] [DOI: 10.1109/embc.2015.7318320]
Abstract
The brain tissue deforms significantly during neurosurgery, which has led many sites to use intra-operative ultrasound to provide updated images of the tumor and critical parts of the brain. Several factors degrade the quality of post-resection ultrasound images, such as hemorrhage, air bubbles in the tumor cavity and the application of blood-clotting agents around the edges of the resection. As a result, registration of post- and pre-resection ultrasound is of significant clinical importance. In this paper, we propose a nonrigid symmetric registration (NSR) framework for accurate alignment of pre- and post-resection volumetric ultrasound images in near real-time. We first formulate registration as the minimization of a regularized cost function and analytically derive its derivative to optimize the cost function efficiently. We use the Efficient Second-order Minimization (ESM) method for fast and robust optimization. Furthermore, we use an inverse-consistent deformation method to generate realistic deformation fields. The results show that NSR significantly improves the quality of alignment between pre- and post-resection ultrasound images.
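The NSR abstract above casts registration as the minimization of a regularized cost function using an analytically derived gradient. As a loose one-dimensional analogue only — Rivaz's method operates on 3-D volumes with ESM and inverse-consistency, none of which is reproduced here — a sum-of-squared-differences data term plus a first-order smoothness penalty, minimized by plain gradient descent, might look like:

```python
import numpy as np

def register_1d(fixed, moving, lam=0.1, step=0.5, iters=200):
    """Gradient descent on  sum((moving(x+u) - fixed(x))^2) + lam * sum(u'^2),
    estimating a dense 1-D displacement field u."""
    x = np.arange(fixed.size, dtype=float)
    u = np.zeros_like(fixed)
    for _ in range(iters):
        warped = np.interp(x + u, x, moving)            # moving image resampled at x + u
        img_grad = np.gradient(warped)                  # spatial gradient of the warped image
        data = 2.0 * (warped - fixed) * img_grad        # derivative of the data term w.r.t. u
        reg = -2.0 * lam * np.gradient(np.gradient(u))  # derivative of the smoothness term
        u -= step * (data + reg)
    return u

# toy example: the same Gaussian bump, shifted by 3 samples
x = np.arange(64, dtype=float)
fixed = np.exp(-0.5 * ((x - 30.0) / 4.0) ** 2)
moving = np.exp(-0.5 * ((x - 33.0) / 4.0) ** 2)
u = register_1d(fixed, moving)
warped = np.interp(x + u, x, moving)  # residual shrinks as u approaches the shift
```

The hyper-parameters (`lam`, `step`, `iters`) and the test signal are arbitrary illustrative choices; the regularization weight trades data fidelity against the smoothness of the recovered field.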
44
Zhou H, Rivaz H. Registration of Pre- and Postresection Ultrasound Volumes With Noncorresponding Regions in Neurosurgery. IEEE J Biomed Health Inform 2016; 20:1240-9. [DOI: 10.1109/jbhi.2016.2554122]
45
van der Hoorn A, Yan JL, Larkin TJ, Boonzaier NR, Matys T, Price SJ. Validation of a semi-automatic co-registration of MRI scans in patients with brain tumors during treatment follow-up. NMR Biomed 2016; 29:882-889. [PMID: 27120035] [DOI: 10.1002/nbm.3538]
Abstract
There is expanding research interest in high-grade gliomas because of their significant population burden and poor survival despite extensive standard multimodal treatment. One of the obstacles is the lack of individualized monitoring of tumor characteristics and treatment response before, during and after treatment. We have developed a two-stage semi-automatic method to co-register MRI scans at different time points before and after surgical and adjuvant treatment of high-grade gliomas. This two-stage co-registration includes a linear co-registration of the semi-automatically derived mask of the preoperative contrast-enhancing area or postoperative resection cavity, the brain contour and the ventricles between different time points. The resulting transformation matrix was then applied in a non-linear manner to co-register conventional contrast-enhanced T1-weighted images. Targeted registration errors were calculated and compared with those of linear and non-linear co-registered images; they were smaller for the semi-automatic non-linear co-registration than for both. This was further visualized using a three-dimensional structural similarity method. The semi-automatic non-linear co-registration allowed for optimal correction of the variable brain shift at different time points, as evaluated by the minimal targeted registration error. The proposed method allows for the accurate evaluation of treatment response, which is essential for the growing research area of brain tumor imaging and treatment response evaluation in large sets of patients.
Affiliation(s)
- Anouk van der Hoorn: Brain Tumor Imaging Laboratory, Division of Neurosurgery, Department of Clinical Neuroscience, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK; Department of Radiology, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK; Department of Radiology (EB44), University Medical Centre Groningen, University of Groningen, Groningen, the Netherlands
- Jiun-Lin Yan: Brain Tumor Imaging Laboratory, Division of Neurosurgery, Department of Clinical Neuroscience, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK; Department of Neurosurgery, Chang Gung Memorial Hospital, Taiwan; Department of Neurosurgery, Chang Gung University College of Medicine, Taiwan
- Timothy J Larkin: Brain Tumor Imaging Laboratory, Division of Neurosurgery, Department of Clinical Neuroscience, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
- Natalie R Boonzaier: Brain Tumor Imaging Laboratory, Division of Neurosurgery, Department of Clinical Neuroscience, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
- Tomasz Matys: Department of Radiology, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
- Stephen J Price: Brain Tumor Imaging Laboratory, Division of Neurosurgery, Department of Clinical Neuroscience, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
46
Technical principles in glioma surgery and preoperative considerations. J Neurooncol 2016; 130:243-252. [DOI: 10.1007/s11060-016-2171-4]
47
Computational Modeling for Enhancing Soft Tissue Image Guided Surgery: An Application in Neurosurgery. Ann Biomed Eng 2015; 44:128-38. [PMID: 26354118] [DOI: 10.1007/s10439-015-1433-1]
Abstract
With the recent advances in computing, the opportunities to translate computational models to more integrated roles in patient treatment are expanding at an exciting rate. One area of considerable development has been directed towards correcting soft tissue deformation within image guided neurosurgery applications. This review captures the efforts that have been undertaken towards enhancing neuronavigation by the integration of soft tissue biomechanical models, imaging and sensing technologies, and algorithmic developments. In addition, the review speaks to the evolving role of modeling frameworks within surgery and concludes with some future directions beyond neurosurgical applications.
48
Deformable registration of preoperative MR, pre-resection ultrasound, and post-resection ultrasound images of neurosurgery. Int J Comput Assist Radiol Surg 2014; 10:1017-28. [PMID: 25373447] [DOI: 10.1007/s11548-014-1099-4]
Abstract
PURPOSE Sites that use ultrasound (US) in image-guided neurosurgery (IGNS) of brain tumors generally have three sets of imaging data: preoperative magnetic resonance (MR) image, pre-resection US, and post-resection US. The MR image is usually acquired days before the surgery, the pre-resection US is obtained after the craniotomy but before the resection, and finally, the post-resection US scan is performed after the resection of the tumor. The craniotomy and tumor resection both cause brain deformation, which significantly reduces the accuracy of the MR-US alignment. METHOD Three unknown transformations exist between the three sets of imaging data: MR to pre-resection US, pre- to post-resection US, and MR to post-resection US. We use two algorithms that we have recently developed to perform the first two registrations (i.e., MR to pre-resection US and pre- to post-resection US). Regarding the third registration (MR to post-resection US), we evaluate three strategies. The first method performs a registration between the MR and pre-resection US, and another registration between the pre- and post-resection US. It then composes the two transformations to register MR and post-resection US; we call this method compositional registration. The second method ignores the pre-resection US and directly registers the MR and post-resection US; we refer to this method as direct registration. The third method is a combination of the first and second: it uses the solution of the compositional registration as an initial solution for the direct registration method. We call this method group-wise registration. RESULTS We use data from 13 patients provided in the MNI BITE database for all of our analysis. Registration of MR and pre-resection US reduces the average of the mean target registration error (mTRE) from 4.1 to 2.4 mm. Registration of pre- and post-resection US reduces the average mTRE from 3.7 to 1.5 mm. Regarding the registration of MR and post-resection US, all three strategies reduce the mTRE. The initial average mTRE is 5.9 mm, which reduces to 3.3 mm with the compositional method, 2.9 mm with the direct technique, and 2.8 mm with the group-wise method. CONCLUSION Deformable registration of MR and pre- and post-resection US images significantly improves their alignment. Among the three methods proposed for registering the MR to post-resection US, the group-wise method gives the lowest TRE values. Since the running time of all registration algorithms is less than 2 min on one core of a CPU, they can be integrated into IGNS systems for interactive use during surgery.
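Several of the entries above report results as mean target registration error (mTRE). As a minimal, generic sketch (the landmark coordinates below are invented purely for illustration), mTRE is simply the mean Euclidean distance between corresponding anatomical landmarks after registration:

```python
import numpy as np

def mean_tre(targets_fixed, targets_registered):
    """Mean target registration error: the average Euclidean distance (e.g. in mm)
    between corresponding landmarks in the fixed and registered images."""
    d = np.asarray(targets_fixed) - np.asarray(targets_registered)
    return float(np.mean(np.linalg.norm(d, axis=1)))

# hypothetical landmarks, each displaced by 3 mm along x
landmarks = np.array([[10.0, 20.0, 30.0],
                      [15.0, 25.0, 35.0],
                      [12.0, 22.0, 32.0]])
displaced = landmarks + np.array([3.0, 0.0, 0.0])
# mean_tre(landmarks, displaced) → 3.0
```

Because mTRE is evaluated at manually identified anatomical targets rather than over the whole image, it measures alignment where it clinically matters, which is why the datasets above (RESECT, BITE) ship with expert landmark pairs.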