1
Li L, Ding W, Huang L, Zhuang X, Grau V. Multi-modality cardiac image computing: A survey. Med Image Anal 2023; 88:102869. [PMID: 37384950] [DOI: 10.1016/j.media.2023.102869] [Received: 08/25/2022] [Revised: 05/01/2023] [Accepted: 06/12/2023]
Abstract
Multi-modality cardiac imaging plays a key role in the management of patients with cardiovascular diseases. It combines complementary anatomical, morphological and functional information, increases diagnostic accuracy, and improves the efficacy of cardiovascular interventions and clinical outcomes. Fully automated processing and quantitative analysis of multi-modality cardiac images could have a direct impact on clinical research and evidence-based patient management. However, these require overcoming significant challenges, including inter-modality misalignment and finding optimal methods to integrate information from different modalities. This paper provides a comprehensive review of multi-modality imaging in cardiology, covering the computing methods, validation strategies, related clinical workflows and future perspectives. Among the computing methodologies, we focus on three tasks, i.e., registration, fusion and segmentation, which generally involve multi-modality imaging data, either combining information from different modalities or transferring information across modalities. The review highlights that multi-modality cardiac imaging data have potential for wide clinical applicability, for example in trans-aortic valve implantation guidance, myocardial viability assessment, and catheter ablation therapy and its patient selection. Nevertheless, many challenges remain unsolved, such as missing modalities, modality selection, combination of imaging and non-imaging data, and uniform analysis and representation of different modalities. Work also remains in defining how the well-developed techniques fit into clinical workflows and how much additional, relevant information they introduce. These problems are likely to remain an active field of research, with open questions still to be answered.
Affiliation(s)
- Lei Li
  - Department of Engineering Science, University of Oxford, Oxford, UK
- Wangbin Ding
  - College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Liqin Huang
  - College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Xiahai Zhuang
  - School of Data Science, Fudan University, Shanghai, China
- Vicente Grau
  - Department of Engineering Science, University of Oxford, Oxford, UK
2
Xing S, Romero JC, Roy P, Cool DW, Tessier D, Chen ECS, Peters TM, Fenster A. 3D US-CT/MRI registration for percutaneous focal liver tumor ablations. Int J Comput Assist Radiol Surg 2023. [PMID: 37162735] [DOI: 10.1007/s11548-023-02915-0] [Received: 03/10/2023] [Accepted: 04/10/2023]
Abstract
PURPOSE US-guided percutaneous focal liver tumor ablations have been considered promising curative treatment techniques. To address cases with invisible or poorly visible tumors, registration of 3D US with CT or MRI is a critical step. By taking advantage of deep learning techniques to efficiently detect representative features in both modalities, we aim to develop a 3D US-CT/MRI registration approach for liver tumor ablations. METHODS Facilitated by our nnUNet-based 3D US vessel segmentation approach, we propose a coarse-to-fine 3D US-CT/MRI image registration pipeline based on the liver vessel surface and centerlines. Phantom, healthy volunteer and patient studies were then performed to demonstrate the effectiveness of the proposed registration approach. RESULTS Our nnUNet-based vessel segmentation model achieved a Dice score of 0.69. In the healthy volunteer study, 11 out of 12 3D US-MRI image pairs were successfully registered, with an overall centerline distance of 4.03 ± 2.68 mm. Two patient cases achieved target registration errors (TREs) of 4.16 mm and 5.22 mm. CONCLUSION We proposed a coarse-to-fine 3D US-CT/MRI registration pipeline based on nnUNet vessel segmentation models. Experiments on healthy volunteers and patient trials demonstrated the effectiveness of our registration workflow. Our code and example data are publicly available in this repository.
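The fine stage of a centerline-based pipeline like this is commonly implemented as rigid point-set registration. The sketch below is an illustration of that general idea, not the authors' released code: the function names and the plain nearest-neighbor ICP loop are assumptions, pairing a closed-form Kabsch solve with iterative correspondence updates.

```python
import numpy as np
from scipy.spatial import cKDTree

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t (N x 3 each)."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

def icp_centerlines(us_pts, ct_pts, iters=50):
    """Match US centerline points to nearest CT centerline points and
    re-estimate the rigid transform each iteration (hypothetical fine stage)."""
    tree = cKDTree(ct_pts)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = us_pts @ R.T + t
        _, idx = tree.query(moved)               # nearest-neighbor correspondences
        R, t = kabsch(us_pts, ct_pts[idx])
    return R, t
```

In practice the coarse stage (e.g., a surface-based initialization) matters, since nearest-neighbor ICP only converges to a nearby local minimum.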
Affiliation(s)
- Shuwei Xing
  - Robarts Research Institute, Western University, 100 Perth St., London, ON, N6A 5B7, Canada
  - School of Biomedical Engineering, Western University, 100 Perth St., London, ON, N6A 5B7, Canada
- Joeana Cambranis Romero
  - Robarts Research Institute, Western University, 100 Perth St., London, ON, N6A 5B7, Canada
  - School of Biomedical Engineering, Western University, 100 Perth St., London, ON, N6A 5B7, Canada
- Priyanka Roy
  - Robarts Research Institute, Western University, 100 Perth St., London, ON, N6A 5B7, Canada
  - Lawson Health Research Institute, 100 Perth St., London, ON, N6A 5B7, Canada
- Derek W Cool
  - Department of Medical Imaging, Western University, 100 Perth St., London, ON, N6A 5B7, Canada
  - Lawson Health Research Institute, 100 Perth St., London, ON, N6A 5B7, Canada
- David Tessier
  - Robarts Research Institute, Western University, 100 Perth St., London, ON, N6A 5B7, Canada
- Elvis C S Chen
  - Robarts Research Institute; School of Biomedical Engineering; Department of Medical Biophysics; Department of Medical Imaging, Western University, 100 Perth St., London, ON, N6A 5B7, Canada
  - Lawson Health Research Institute, 100 Perth St., London, ON, N6A 5B7, Canada
- Terry M Peters
  - Robarts Research Institute; School of Biomedical Engineering; Department of Medical Biophysics; Department of Medical Imaging, Western University, 100 Perth St., London, ON, N6A 5B7, Canada
- Aaron Fenster
  - Robarts Research Institute; School of Biomedical Engineering; Department of Medical Biophysics; Department of Medical Imaging, Western University, 100 Perth St., London, ON, N6A 5B7, Canada
3
Wang Y, Fu T, Wu C, Xiao J, Fan J, Song H, Liang P, Yang J. Multimodal registration of ultrasound and MR images using weighted self-similarity structure vector. Comput Biol Med 2023; 155:106661. [PMID: 36827789] [DOI: 10.1016/j.compbiomed.2023.106661] [Received: 08/19/2022] [Revised: 01/22/2023] [Accepted: 02/09/2023]
Abstract
PURPOSE Multimodal registration of 2D ultrasound (US) and 3D magnetic resonance (MR) images for fusion navigation can improve the intraoperative detection accuracy of lesions. However, multimodal registration remains a challenge because of poor US image quality. In this study, a weighted self-similarity structure vector (WSSV) is proposed to register multimodal images. METHODS The self-similarity structure vector uses the normalized distance of symmetrically located patches in the neighborhood to describe local structure information. Texture weights are extracted using the local standard deviation to reduce speckle interference in the US images. The multimodal similarity metric is constructed by combining the self-similarity structure vector with a texture weight map. RESULTS Experiments were performed on US and MR images of the liver from 88 groups of data, including 8 patients and 80 simulated samples. The average target registration error was reduced from 14.91 ± 3.86 mm to 4.95 ± 2.23 mm using the WSSV-based method. CONCLUSIONS The experimental results show that the WSSV-based registration method can robustly align US and MR images of the liver. With further acceleration, the registration framework could be applied in time-sensitive clinical settings, such as US-MR image registration in image-guided surgery.
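As a rough illustration of how a self-similarity descriptor with texture weighting can be computed: the patch size, offset set, and normalization below are illustrative choices, not the paper's exact WSSV definition.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def texture_weight(img, r=3):
    """Local standard deviation, usable as a texture weight map."""
    win = 2 * r + 1
    m = uniform_filter(img.astype(float), win)
    m2 = uniform_filter(img.astype(float) ** 2, win)
    return np.sqrt(np.maximum(m2 - m * m, 0.0))   # guard tiny negatives

def self_similarity_vector(img, y, x, half=1,
                           offsets=((0, 4), (4, 0), (3, 3), (3, -3))):
    """Descriptor at (y, x): SSD between each pair of symmetrically placed
    patches (+d and -d around the center), normalized by the vector maximum."""
    v = []
    for dy, dx in offsets:
        a = img[y + dy - half:y + dy + half + 1, x + dx - half:x + dx + half + 1]
        b = img[y - dy - half:y - dy + half + 1, x - dx - half:x - dx + half + 1]
        v.append(float(((a - b) ** 2).sum()))
    v = np.asarray(v)
    return v / (v.max() + 1e-12)
```

A similarity metric in this spirit would compare descriptors between the US and MR images, weighting each pixel's contribution by the texture map so speckle-dominated regions count less.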
Affiliation(s)
- Yifan Wang
  - Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
- Tianyu Fu
  - School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, PR China
- Chan Wu
  - Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
- Jian Xiao
  - School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, PR China
- Jingfan Fan
  - Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
- Hong Song
  - School of Software, Beijing Institute of Technology, Beijing, 100081, PR China
- Ping Liang
  - Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing, 100853, PR China
- Jian Yang
  - Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
4
Wang S, Niu K, Chen L, Rao X. Method for counting labeled neurons in mouse brain regions based on image representation and registration. Med Biol Eng Comput 2022; 60:487-500. [PMID: 35015271] [DOI: 10.1007/s11517-021-02495-8] [Received: 07/14/2021] [Accepted: 12/18/2021]
Abstract
An important step in brain image analysis is to delineate specific brain regions by matching brain slices to standard brain reference atlases, and then to perform statistical analysis on the labeled neurons in each region. Taking fluorescently labeled mouse brain slices as an example: because of the noise and distortion introduced during slice preparation, and the modal differences from the standard brain atlas, brain slices cannot directly establish an accurate one-to-one correspondence with the atlas, which in turn affects the accuracy of the labeled-neuron counts in each brain region. This paper introduces the idea of image representation, uses neural networks to register mouse brain slices and the brain atlas across modalities, completes the regional localization of the brain slices, and uses threshold segmentation to detect and count the labeled neurons in each brain region. The proposed method effectively addresses the large deviations in neuron counts caused by inaccurate division of brain regions in strongly deformed brain slices, and can automatically produce accurate counts of labeled neurons in each brain region of a slice.
Affiliation(s)
- Songwei Wang
  - School of Electrical Engineering, Zhengzhou University, Zhengzhou, 450001, China
- Ke Niu
  - School of Electrical Engineering, Zhengzhou University, Zhengzhou, 450001, China
- Liwei Chen
  - School of Electrical Engineering, Zhengzhou University, Zhengzhou, 450001, China
- Xiaoping Rao
  - State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Key Laboratory of Magnetic Resonance in Biological Systems, Wuhan Center for Magnetic Resonance, Innovation Academy for Precision Measurement Science and Methodology, Chinese Academy of Sciences, Wuhan, 430071, China
5
Good and bad boundaries in ultrasound compounding: preserving anatomic boundaries while suppressing artifacts. Int J Comput Assist Radiol Surg 2021; 16:1957-1968. [PMID: 34357525] [PMCID: PMC8589734] [DOI: 10.1007/s11548-021-02464-4] [Received: 01/13/2021] [Accepted: 07/15/2021]
Abstract
Purpose Ultrasound compounding combines sonographic information captured from different angles into a single image. It is important for multi-view reconstruction, but as of yet there is no consensus on best practices for compounding. Current popular methods inevitably suppress, or altogether leave out, bright or dark regions that are useful, and potentially introduce new artifacts. In this work, we establish a new algorithm to compound the overlapping pixels from different viewpoints in ultrasound. Methods Inspired by image fusion algorithms and ultrasound confidence maps, we leverage Laplacian and Gaussian pyramids to preserve the maximum boundary contrast without overemphasizing noise, speckle, and other artifacts in the compounded image, while taking the direction of the ultrasound probe into account. In addition, we designed an algorithm that detects the useful boundaries in ultrasound images to further improve boundary contrast. Results We evaluate our algorithm by comparing it with previous algorithms both qualitatively and quantitatively, and show that our approach not only preserves both light and dark details, but also somewhat suppresses noise and artifacts rather than amplifying them. We also show that our algorithm can improve the performance of downstream tasks like segmentation. Conclusion Our proposed method, based on confidence, contrast, and both Gaussian and Laplacian pyramids, appears better at preserving contrast at anatomic boundaries while suppressing artifacts than any of the other approaches we tested. This algorithm may have future utility in downstream tasks such as 3D ultrasound volume reconstruction and segmentation.
6
Feasibility of Cochlea High-frequency Ultrasound and Microcomputed Tomography Registration for Cochlear Computer-assisted Surgery: A Testbed. Otol Neurotol 2021; 42:e779-e787. [PMID: 33871251] [DOI: 10.1097/mao.0000000000003091]
Abstract
INTRODUCTION There remains no standard imaging method that allows computer-assisted surgery of the cochlea in real time. However, recent evidence suggests that high-frequency ultrasound (HFUS) could permit real-time visualization of cochlear architecture. Registration with an imaging modality that suffers neither attenuation nor conical deformation could reveal useful anatomical landmarks to surgeons. Our study aimed to assess the feasibility of automated three-dimensional (3D) HFUS/microCT registration, and to evaluate the identification of cochlear structures using 2D/3D HFUS and microCT. METHODS MicroCT and 2D/3D 40 MHz US in B-mode were performed on ex vivo guinea pig cochleae. An automatic rigid registration algorithm was applied to segmented 3D images. This automatic registration was then compared with a reference method using manually annotated landmarks placed by two senior otologists. Inter- and intra-rater reliabilities were evaluated using the intraclass correlation coefficient (ICC), and the mean registration error (RE) was calculated. RESULTS 3D HFUS/microCT automatic registration was successful. Excellent levels of concordance were achieved with regard to intra-rater reliability for both raters with microCT and US images (ICC ranging from 0.98 to 1, p < 0.001) and with regard to inter-rater reliability (ICC ranging from 0.99 to 1, p < 0.001). The mean automated HFUS/microCT RE for both observers was 0.17 ± 0.03 mm [0.10-0.25]. Identification of the basilar membrane, modiolus, scala tympani, and scala vestibuli was possible with 2D/3D HFUS and microCT. CONCLUSIONS HFUS/microCT image registration is feasible, and 2D/3D HFUS and microCT allow the visualization of cochlear structures. Many potential clinical applications are conceivable.
7
Regional Localization of Mouse Brain Slices Based on Unified Modal Transformation. Symmetry (Basel) 2021. [DOI: 10.3390/sym13060929]
Abstract
Brain science research often requires accurate localization and quantitative analysis of neuronal activity in different brain regions. The premise of such analysis is to determine the brain region of each site on a brain slice by referring to the Allen Reference Atlas (ARA), namely the regional localization of the brain slice. Image registration can be used to solve this localization problem. However, conventional multi-modal image registration methods are unsatisfactory because of the complex modality differences between the brain slice and the ARA. Inspired by the idea that people automatically ignore noise and establish correspondence based on key regions, we propose a novel method, the Joint Enhancement of Multimodal Information (JEMI) network, which is based on a symmetric encoder-decoder. In this way, the brain slice and the ARA are converted into segmentation maps with a unified modality, which greatly reduces the difficulty of registration. Furthermore, combined with a diffeomorphic registration algorithm, the existing topological structure is preserved. The results indicate that, compared with existing methods, the proposed method can effectively overcome the influence of non-unified modal images and achieve accurate and rapid localization of the brain slice.
8
Salamanca JJ. A universal, canonical dispersive ordering in metric spaces. J Stat Plan Inference 2021. [DOI: 10.1016/j.jspi.2020.10.005]
9
Chel H, Bora PK, Ramchiary KK. A fast technique for hyper-echoic region separation from brain ultrasound images using patch based thresholding and cubic B-spline based contour smoothing. Ultrasonics 2021; 111:106304. [PMID: 33360770] [DOI: 10.1016/j.ultras.2020.106304] [Received: 02/05/2020] [Revised: 11/14/2020] [Accepted: 11/14/2020]
Abstract
Ultrasound image-guided brain surgery (UGBS) requires an automatic and fast image segmentation method. Level-set and active-contour based algorithms have been found useful for obtaining topology-independent boundaries between different image regions, but slow convergence limits their use in online US image segmentation, and their performance deteriorates on US images because of intensity inhomogeneity. This paper proposes an effective region-driven method for the segmentation of hyper-echoic (HE) regions that suppresses the hypo-echoic and anechoic regions in brain US images. An automatic threshold estimation scheme is developed with a modified Niblack approach. The separation of the hyper-echoic and non-hyper-echoic (NHE) regions is performed by successively applying patch-based intensity thresholding and boundary smoothing. First, a patch-based segmentation roughly separates the two regions; the patch-based approach reduces the effect of intensity heterogeneity within an HE region. An iterative boundary correction step with decreasing patch size further improves the regional topology and refines the boundary regions. To avoid slope and curvature discontinuities and to obtain distinct boundaries between HE and NHE regions, a cubic B-spline model of curve smoothing is applied. The proposed method is 50-100 times faster than other level-set based image segmentation algorithms. The segmentation performance and convergence speed of the proposed method are compared with four competing level-set based algorithms. The computational results show that the proposed segmentation approach outperforms the other level-set based techniques both subjectively and objectively.
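The base rule behind Niblack-style local thresholding, which the paper modifies, is simply T = m + k·s over a sliding window. A minimal sketch follows, with the window size and k as illustrative choices rather than the paper's tuned values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(img, window=15, k=0.2):
    """Local threshold T = m + k*s, where m and s are the mean and standard
    deviation of the intensities in a window centered at each pixel."""
    f = img.astype(float)
    m = uniform_filter(f, window)
    m2 = uniform_filter(f * f, window)
    s = np.sqrt(np.maximum(m2 - m * m, 0.0))     # guard tiny negatives
    return m + k * s

def hyperechoic_mask(img, window=15, k=0.2):
    """Pixels brighter than their local threshold (bright-region separation)."""
    return img > niblack_threshold(img, window, k)
```

Note that perfectly uniform regions are degenerate for plain Niblack (s = 0, so T equals the local value), which is one motivation for modifying the scheme and iterating with shrinking patch sizes as the paper describes.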
Affiliation(s)
- Haradhan Chel
  - Department of Electronics and Communication, Central Institute of Technology Kokrajhar, Assam 783370, India
  - City Clinic and Research Centre, Kokrajhar, Assam, India
- P K Bora
  - Department of EEE, Indian Institute of Technology Guwahati, Assam, India
- K K Ramchiary
  - City Clinic and Research Centre, Kokrajhar, Assam, India
10
OtoPair: Combining Right and Left Eardrum Otoscopy Images to Improve the Accuracy of Automated Image Analysis. Appl Sci (Basel) 2021. [DOI: 10.3390/app11041831]
Abstract
The accurate diagnosis of otitis media (OM) and other middle ear and eardrum abnormalities is difficult, even for experienced otologists. In our earlier studies, we developed computer-aided diagnosis systems to improve diagnostic accuracy. In this study, we investigate a novel approach, called OtoPair, which uses paired eardrum images rather than a single eardrum image to classify them as ‘normal’ or ‘abnormal’. This mimics the way otologists evaluate ears, because they diagnose eardrum abnormalities by examining both ears. Our approach creates a new feature vector, formed from features extracted from a pair of high-resolution otoscope images or images captured by digital video-otoscopes. The feature vector has two parts. The first part consists of lookup-table-based values created using the deep learning techniques reported in our previous OtoMatch content-based image retrieval system. The second part consists of handcrafted features created by recording registration errors between paired eardrums, color-based features such as histograms of the a* and b* components of the L*a*b* color space, and statistical measurements of these color channels. The extracted features are concatenated to form a single feature vector, which is then classified by a tree bagger classifier. A total of 150 pairs (300 single images) of eardrum images, which are either same-category (normal-normal and abnormal-abnormal) or different-category (normal-abnormal and abnormal-normal) pairs, were used to perform several experiments. The proposed approach increases the accuracy from 78.7% (±0.1%) to 85.8% (±0.2%) with three-fold cross-validation. These are promising results with a limited number of eardrum pairs, and they demonstrate the feasibility of using a pair of eardrum images instead of single eardrum images to improve diagnostic accuracy.
11
Deep multispectral image registration network. Comput Med Imaging Graph 2021; 87:101815. [PMID: 33418174] [DOI: 10.1016/j.compmedimag.2020.101815] [Received: 05/31/2020] [Revised: 09/27/2020] [Accepted: 10/30/2020]
Abstract
Multispectral imaging (MSI) of the ocular fundus provides a sequence of narrow-band images that show different depths in the retina and choroid. One challenge in analyzing MSI images comes from image-to-image spatial misalignment, which occurs because the acquisition time of eye MSI images is commonly longer than the natural time scale of the eye's saccadic movement. Alignment is necessary because ophthalmologists usually overlay two of the images to analyze specific features. In this paper, we propose a weakly supervised MSI image registration network, called MSI-R-NET, for multispectral fundus image registration. Compared with other deep-learning-based registration methods, MSI-R-NET utilizes blood vessel segmentation labels to provide spatial correspondence. In addition, we employ a feature equilibrium module to better connect the aggregating layers, and propose a multiresolution auto-context structure adapted to the registration task. In the testing stage, given a new pair of MSI images, the trained model can predict the pixelwise spatial correspondence without labeled blood vessel information. The experimental results demonstrate that the proposed segmentation-driven registration method is highly accurate.
12
De Silva T, Chew EY, Hotaling N, Cukras CA. Deep-learning based multi-modal retinal image registration for the longitudinal analysis of patients with age-related macular degeneration. Biomed Opt Express 2021; 12:619-636. [PMID: 33520392] [PMCID: PMC7818952] [DOI: 10.1364/boe.408573] [Received: 08/28/2020] [Revised: 10/29/2020] [Accepted: 10/30/2020]
Abstract
This work reports a deep-learning based registration algorithm that aligns multi-modal retinal images collected from longitudinal clinical studies to achieve the accuracy and robustness required for analysis of structural changes in large-scale clinical data. Deep-learning networks that mirror the architecture of conventional feature-point-based registration were evaluated with different networks that solved for registration affine parameters, image patch displacements, and patch displacements within the region of overlap. The ground truth images for the deep learning-based approaches were derived from successful conventional feature-based registration. Cross-sectional and longitudinal affine registrations were performed across color fundus photography (CFP), fundus autofluorescence (FAF), and infrared reflectance (IR) image modalities. For mono-modality longitudinal registration, the conventional feature-based registration method achieved mean errors in the range of 39-53 µm (depending on the modality), whereas the deep learning method with region overlap prediction exhibited mean errors in the range of 54-59 µm. For cross-sectional multi-modality registration, the conventional method exhibited gross failures, with large errors in more than 50% of the cases, while the proposed deep-learning method achieved robust performance with no gross failures and mean errors in the range of 66-69 µm. Thus, the deep learning-based method achieved superior overall performance across all modalities. The accuracy and robustness reported in this work provide important advances that will facilitate clinical research and enable a detailed study of the progression of retinal diseases such as age-related macular degeneration.
Affiliation(s)
- Tharindu De Silva
  - National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Emily Y Chew
  - National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Nathan Hotaling
  - National Center for Advancing Translational Science, National Institutes of Health, Bethesda, MD 20892, USA
- Catherine A Cukras
  - National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
13
Gueziri HE, Yan CXB, Collins DL. Open-source software for ultrasound-based guidance in spinal fusion surgery. Ultrasound Med Biol 2020; 46:3353-3368. [PMID: 32907772] [DOI: 10.1016/j.ultrasmedbio.2020.08.005] [Received: 02/11/2020] [Revised: 07/10/2020] [Accepted: 08/05/2020]
Abstract
Spinal instrumentation and surgical manipulations may cause loss of navigation accuracy requiring an efficient re-alignment of the patient anatomy with pre-operative images during surgery. While intra-operative ultrasound (iUS) guidance has shown clear potential to reduce surgery time, compared with clinical computed tomography (CT) guidance, rapid registration aiming to correct for patient misalignment has not been addressed. In this article, we present an open-source platform for pedicle screw navigation using iUS imaging. The alignment method is based on rigid registration of CT to iUS vertebral images and has been designed for fast and fully automatic patient re-alignment in the operating room. Two steps are involved: first, we use the iUS probe's trajectory to achieve an initial coarse registration; then, the registration transform is refined by simultaneously optimizing gradient orientation alignment and mean of iUS intensities passing through the CT-defined posterior surface of the vertebra. We evaluated our approach on a lumbosacral section of a porcine cadaver with seven vertebral levels. We achieved a median target registration error of 1.47 mm (100% success rate, defined by a target registration error <2 mm) when applying the probe's trajectory initial alignment. The approach exhibited high robustness to partial visibility of the vertebra with success rates of 89.86% and 88.57% when missing either the left or right part of the vertebra and robustness to initial misalignments with a success rate of 83.14% for random starts within ±20° rotation and ±20 mm translation. Our graphics processing unit implementation achieves an efficient registration time under 8 s, which makes the approach suitable for clinical application.
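The gradient-orientation part of such a similarity metric can be sketched as a squared-cosine agreement between image gradients, shown in 2D here for brevity; the magnitude threshold and the exact weighting below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gradient_orientation_alignment(fixed, moving, eps=1e-8, mag_thresh=1e-3):
    """Mean squared cosine of the angle between image gradients; 1 means
    perfectly aligned edge orientations. Squaring makes the score invariant
    to contrast polarity, which differs across modalities such as CT and US."""
    gf = np.stack(np.gradient(fixed.astype(float)))
    gm = np.stack(np.gradient(moving.astype(float)))
    nf = np.sqrt((gf ** 2).sum(0))
    nm = np.sqrt((gm ** 2).sum(0))
    mask = (nf > mag_thresh) & (nm > mag_thresh)   # only score where edges exist
    cos = (gf * gm).sum(0) / (nf * nm + eps)
    return float((cos[mask] ** 2).mean())
```

In a registration loop, a metric like this would be evaluated after resampling the moving image under each candidate rigid transform and maximized by the optimizer.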
Affiliation(s)
- Houssem-Eddine Gueziri
  - McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
- Charles X B Yan
  - Joint Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- D Louis Collins
  - McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
14
Salamanca JJ. Sets that maximize probability and a related variational problem. Can J Stat 2020. [DOI: 10.1002/cjs.11578]
Affiliation(s)
- Juan J. Salamanca
  - Departamento de Estadística e I.O. y D.M., Escuela Politécnica de Ingeniería, Universidad de Oviedo
15
Halevy-Politch J, Zaaroor M, Sinai A, Constantinescu M. New US device versus imaging US to assess tumor-in-brain. Chin Neurosurg J 2020; 6:28. [PMID: 32922957] [PMCID: PMC7405364] [DOI: 10.1186/s41016-020-00205-1] [Received: 11/27/2019] [Accepted: 06/24/2020]
Abstract
Background: Applying an ultrasonic imaging system during surgery requires pouring saline, performing the measurement, and acquiring data from its display, which takes time and is highly performer-dependent, i.e., the measurement is subjective in nature. A new ultrasonic device was recently developed that overcomes most of these drawbacks and was successfully applied during tumor-in-brain neurosurgeries. The purpose of this study was to compare the two types of US devices and demonstrate their properties. Methods: The study was performed in the following stages: (i) an ex vivo experiment, in which slices of muscle and brain from a young pig were laid one on top of the other; thicknesses and border depths were measured and compared using the two types of US instruments. (ii) During human clinical neurosurgeries, tumor depth was measured with both devices and compared. (iii) Following the success of stages (i) and (ii), tumor thickness was monitored during resection using the new US device alone. Correlation, Pearson's coefficient, mean, and standard deviation were used for statistical testing. Results: A high correlation was obtained for the distances of tissue borders and for their respective thicknesses. Applying these ultrasonic devices during neurosurgeries, tumor depths were monitored with high similarity (87%), also reflected in Pearson's correlation coefficient (0.44). Thanks to its small footprint, remote measurement, and capability of intraoperative real-time monitoring, the new US device gives access to the tumor's border before complete resection. Conclusions: The new US device provides better accuracy than an ultrasonic imaging system; its data are objective; it enables control of residual tumor thickness during resection; and it is especially useful in restricted areas. These features proved greatly helpful during tumor-in-brain surgery, especially in the final stages of resection.
Affiliation(s)
- Alon Sinai
- Department of Neurosurgery, Rambam HCC, Haifa, Israel

16
Chen Y, He F, Li H, Zhang D, Wu Y. A full migration BBO algorithm with enhanced population quality bounds for multimodal biomedical image registration. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106335] [Citation(s) in RCA: 89] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]

17
Mayer J, Brown R, Thielemans K, Ovtchinnikov E, Pasca E, Atkinson D, Gillman A, Marsden P, Ippoliti M, Makowski M, Schaeffter T, Kolbitsch C. Flexible numerical simulation framework for dynamic PET-MR data. Phys Med Biol 2020; 65:145003. [PMID: 32692725 DOI: 10.1088/1361-6560/ab7eee] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
This paper presents a simulation framework for dynamic PET-MR. The main focus of this framework is to provide motion-resolved MR and PET data and ground truth motion information. This can be used in the optimisation and quantitative evaluation of image registration and in assessing the error propagation due to inaccuracies in motion estimation in complex motion-compensated reconstruction algorithms. Contrast and tracer kinetics can also be simulated and are available as ground truth information. To closely emulate a medical examination, input and output of the simulation are files in standardised open-source raw data formats. This enables the use of existing raw data as a template input and ensures seamless integration of the output into existing reconstruction pipelines. The proposed framework was validated in PET-MR and image registration applications. It was used to simulate a FDG-PET-MR scan with cardiac and respiratory motion. Ground truth motion information could be utilised to optimise parameters for PET and synergistic PET-MR image registration. In addition, a free-breathing dynamic contrast enhancement (DCE) abdominal scan of a patient with hepatic lesions was simulated. In order to correct for breathing motion, a motion-corrected image reconstruction scheme was used and a Tofts model was fit to the DCE data to obtain quantitative DCE-MRI parameters. Utilising the ground truth motion information, the dependency of quantitative DCE-MR images on the accuracy of the motion estimation was evaluated. We demonstrated that respiratory motion had to be available with an average accuracy of at least the spatial resolution of the DCE-MR images in order to ensure an improvement in lesion visualisation and quantification compared to no motion correction. The proposed framework provides a valuable tool with a wide range of scientific PET and MR applications and will be available as part of the open-source project Synergistic Image Reconstruction Framework (SIRF).
Affiliation(s)
- Johannes Mayer
- Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Germany (corresponding author)

18
Yi J, Zhang S, Cao Y, Zhang E, Sun H. Rigid Shape Registration Based on Extended Hamiltonian Learning. Entropy 2020; 22:e22050539. [PMID: 33286311 PMCID: PMC7517035 DOI: 10.3390/e22050539] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/14/2020] [Revised: 04/30/2020] [Accepted: 05/11/2020] [Indexed: 11/16/2022]
Abstract
Shape registration, finding the correct alignment of two sets of data, plays a significant role in computer vision tasks such as object recognition and image analysis. The iterative closest point (ICP) algorithm is one of the best-known and most widely used algorithms in this area. The main purpose of this paper is to combine ICP with fast-convergent extended Hamiltonian learning (EHL), yielding the so-called EHL-ICP algorithm, to perform planar and spatial rigid shape registration. By treating the registration error as the potential of the extended Hamiltonian system, rigid shape registration is modelled as an optimization problem on the special Euclidean group SE(n) (n=2,3). Our method is robust to initial values and parameters. Compared with some state-of-the-art methods, our approach shows better efficiency and accuracy in simulation experiments.
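The baseline this paper builds on, vanilla ICP, alternates nearest-neighbour matching with a closed-form rigid fit. A minimal numpy sketch of standard ICP (not the EHL variant, and brute-force matching only):

```python
import numpy as np

def best_rigid(src, dst):
    """Least-squares rigid fit (Kabsch/SVD): find R, t with dst ≈ src @ R.T + t."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=50):
    """Vanilla ICP: alternate nearest-neighbour matching and rigid fitting,
    accumulating the total transform."""
    cur = src.copy()
    R_tot, t_tot = np.eye(src.shape[1]), np.zeros(src.shape[1])
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small point sets)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

Like all ICP variants, this converges only locally, which is the sensitivity to initial values that the EHL formulation is designed to mitigate.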
Affiliation(s)
- Jin Yi
- Department of Basic Courses, Beijing Union University, Beijing 100081, China
- School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China
- Shiqiang Zhang
- School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China
- Yueqi Cao
- School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China
- Erchuan Zhang
- School of Mathematics and Statistics, University of Western Australia, Crawley WA6009, Australia
- Huafei Sun (corresponding author)
- School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China

19
El Mansouri O, Vidal F, Basarab A, Payoux P, Kouame D, Tourneret JY. Fusion of Magnetic Resonance and Ultrasound Images for Endometriosis Detection. IEEE Trans Image Process 2020; 29:5324-5335. [PMID: 32142435 DOI: 10.1109/tip.2020.2975977] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
This paper introduces a new fusion method for magnetic resonance (MR) and ultrasound (US) images, which aims at combining the advantages of each modality, i.e., good contrast and signal to noise ratio for the MR image and good spatial resolution for the US image. The proposed algorithm is based on two inverse problems, performing a super-resolution of the MR image and a denoising of the US image. A polynomial function is introduced to model the relationships between the gray levels of the two modalities. The resulting inverse problem is solved using a proximal alternating linearized minimization framework. The accuracy and the interest of the fusion algorithm are shown quantitatively and qualitatively via evaluations on synthetic and experimental phantom data.
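The polynomial function linking the gray levels of the two modalities can be illustrated with a simple least-squares fit. This toy version (my own construction with a made-up mapping, not the paper's jointly estimated model) just fits sampled MR intensities to co-located US intensities:

```python
import numpy as np

def fit_intensity_map(mr_vals, us_vals, degree=3):
    """Least-squares polynomial p with us ≈ p(mr), standing in for a
    learned relationship between the two modalities' gray levels."""
    return np.polynomial.Polynomial.fit(mr_vals, us_vals, degree)
```

In the paper the polynomial is estimated jointly with the super-resolved MR and denoised US images inside the inverse problem; fitting it in isolation like this is only the simplest possible reading of that component.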

20
Aggarwal V, Gupta A. Integrating Morphological Edge Detection and Mutual Information for Nonrigid Registration of Medical Images. Curr Med Imaging 2019; 15:292-300. [DOI: 10.2174/1573405614666180103163430] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2017] [Revised: 12/04/2017] [Accepted: 12/21/2017] [Indexed: 11/22/2022]
Abstract
Background: Medical images are widely used within healthcare and medical research. There is increasing interest in precisely correlating information in these images through registration techniques for investigative and therapeutic purposes. This work proposes and evaluates an improved measure function for registration of carotid ultrasound and magnetic resonance (MRI) images taken at different times.
Methods: A morphological edge detection operator was designed to extract vital edge information from the images; this is integrated with Mutual Information (MI) to carry out the registration. The improved performance of the proposed registration measure function is demonstrated using four quality metrics: Correlation Coefficient (CC), Structural Similarity Index (SSIM), Visual Information Fidelity (VIF) and Gradient Magnitude Similarity Deviation (GMSD). Qualitative validation was also performed through visual inspection of the registered image pairs by clinical radiologists.
Results: The experimental results showed that the proposed method outperformed the existing method (based on integrated MI and standard edge detection) for both ultrasound and MR images, increasing CC by about 4.67%, SSIM by 3.21% and VIF by 18.5%, and decreasing GMSD by 37.01%. Compared to the standard MI-based method, the proposed method increased CC by approximately 16.29%, SSIM by 16.13% and VIF by 52.56%, and decreased GMSD by 66.06%.
Conclusion: The proposed method improves registration accuracy when the original images are corrupted by noise, have low intensity values or contain missing data.
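The Mutual Information component of the measure function can be estimated from a joint intensity histogram. A bare-bones sketch (histogram MI only; the paper's morphological edge operator and its integration are not reproduced here):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of mutual information between the intensity
    distributions of two equally sized images (flattened to 1-D)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                       # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)             # marginal of a
    py = pxy.sum(axis=0, keepdims=True)             # marginal of b
    nz = pxy > 0                                    # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

MI is maximal when the two images' intensities are deterministically related and near zero when they are independent, which is what makes it a workable similarity for registering modalities with different contrast.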
Affiliation(s)
- Vivek Aggarwal
- Department of Mechanical Engineering, I. K. Gujral Punjab Technical University, Main Campus, Kapurthala-144603, Punjab, India
- Anupama Gupta
- Department of Computer Science and Engineering, Giani Zail Singh Campus College of Engineering and Technology, Maharaja Ranjit Singh Punjab Technical University, Bathinda-151001, Punjab, India

21
Banerjee J, Sun Y, Klink C, Gahrmann R, Niessen WJ, Moelker A, van Walsum T. Multiple-correlation similarity for block-matching based fast CT to ultrasound registration in liver interventions. Med Image Anal 2019; 53:132-141. [PMID: 30772666 DOI: 10.1016/j.media.2019.02.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2018] [Revised: 01/23/2019] [Accepted: 02/07/2019] [Indexed: 11/24/2022]
Abstract
In this work we present a fast approach to perform registration of computed tomography to ultrasound volumes for image guided intervention applications. The method is based on a combination of block-matching and outlier rejection. The block-matching uses a correlation based multimodal similarity metric, where the intensity and the gradient of the computed tomography images along with the ultrasound volumes are the input images to find correspondences between blocks in the computed tomography and the ultrasound volumes. A variance and octree based feature point-set selection method is used for selecting distinct and evenly spread point locations for block-matching. Geometric consistency and smoothness criteria are imposed in an outlier rejection step to refine the block-matching results. The block-matching results after outlier rejection are used to determine the affine transformation between the computed tomography and the ultrasound volumes. Various experiments are carried out to assess the optimal performance and the influence of parameters on accuracy and computational time of the registration. A leave-one-patient-out cross-validation registration error of 3.6 mm is achieved over 29 datasets, acquired from 17 patients.
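Block-matching with a correlation-based similarity can be sketched as an exhaustive search for the best-correlating offset. This is a toy 2-D version with plain normalized cross-correlation only, none of the paper's gradient channels, point selection, or outlier rejection:

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalised cross-correlation of two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a**2).sum() * (b**2).sum()) + eps))

def match_block(block, search, step=1):
    """Slide `block` over every position in `search` and return the
    offset (row, col) with the highest NCC, plus that score."""
    bh, bw = block.shape
    best, best_off = -2.0, (0, 0)
    for i in range(0, search.shape[0] - bh + 1, step):
        for j in range(0, search.shape[1] - bw + 1, step):
            s = ncc(block, search[i:i + bh, j:j + bw])
            if s > best:
                best, best_off = s, (i, j)
    return best_off, best
```

In a full pipeline, offsets found for many such blocks become the correspondences from which an affine transform is estimated, after discarding geometrically inconsistent matches.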
Affiliation(s)
- Jyotirmoy Banerjee
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Yuanyuan Sun
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Camiel Klink
- Department of Radiology & Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Renske Gahrmann
- Department of Radiology & Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Wiro J Niessen
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands; Quantitative Imaging Group, Faculty of Technical Physics, Delft University of Technology, The Netherlands
- Adriaan Moelker
- Department of Radiology & Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Theo van Walsum
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands

22
Walter WR, Burke CJ, Diallo M, Adler RS. Use of a Simple, Inexpensive Dual-Modality Phantom as a Learning Tool for Magnetic Resonance Imaging-Ultrasound Fusion Techniques. J Ultrasound Med 2018; 37:2083-2089. [PMID: 29446113 DOI: 10.1002/jum.14550] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/26/2017] [Revised: 11/07/2017] [Accepted: 11/09/2017] [Indexed: 06/08/2023]
Abstract
We describe an easily constructed, customizable phantom for magnetic resonance imaging-ultrasound fusion imaging and demonstrate its role as a learning tool to initiate clinical use of this emerging modality. Magnetic resonance imaging-ultrasound fusion can prove unwieldy to integrate into routine practice. We demonstrate real-time fusion with single-sequence magnetic resonance imaging uploaded to the ultrasound console. Phantom training sessions allow radiologists and sonographers to practice fiducial marker selection and improve efficiency with the fusion hardware and software interfaces. Such a tool is useful when the modality is first introduced to a practice and in settings of sporadic use, in which intermittent training may be useful.
Affiliation(s)
- William R Walter
- New York University Langone Orthopedic Hospital, New York, New York, USA
- Ronald S Adler
- Center for Musculoskeletal Care, New York University Langone Medical Center, New York, New York, USA

23
Cao X, Yang J, Gao Y, Wang Q, Shen D. Region-adaptive Deformable Registration of CT/MRI Pelvic Images via Learning-based Image Synthesis. IEEE Trans Image Process 2018; 27:10.1109/TIP.2018.2820424. [PMID: 29994091 PMCID: PMC6165687 DOI: 10.1109/tip.2018.2820424] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Registration of pelvic CT and MRI is highly desired as it can facilitate effective fusion of the two modalities for prostate cancer radiation therapy, i.e., using CT for dose planning and MRI for accurate organ delineation. However, due to the large inter-modality appearance gaps and the high shape/appearance variations of pelvic organs, pelvic CT/MRI registration is highly challenging. In this paper, we propose a region-adaptive deformable registration method for multi-modal pelvic image registration. Specifically, to handle the large appearance gaps, we first perform both CT-to-MRI and MRI-to-CT image synthesis by multi-target regression forest (MT-RF). Then, to use the complementary anatomical information in the two modalities for steering the registration, we select key points automatically from both modalities and use them together for guiding correspondence detection in a region-adaptive fashion. That is, we mainly use CT to establish correspondences for bone regions, and use MRI to establish correspondences for soft tissue regions. The number of key points is increased gradually during the registration, to hierarchically guide the symmetric estimation of the deformation fields. Experiments for both intra-subject and inter-subject deformable registration show improved performance compared with state-of-the-art multi-modal registration methods, demonstrating the potential of our method for routine prostate cancer radiation therapy.

24
Xiao Y, Eikenes L, Reinertsen I, Rivaz H. Nonlinear deformation of tractography in ultrasound-guided low-grade gliomas resection. Int J Comput Assist Radiol Surg 2018; 13:457-467. [DOI: 10.1007/s11548-017-1699-x] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2017] [Accepted: 12/21/2017] [Indexed: 11/24/2022]

25
Cao X, Yang J, Gao Y, Guo Y, Wu G, Shen D. Dual-core steered non-rigid registration for multi-modal images via bi-directional image synthesis. Med Image Anal 2017; 41:18-31. [PMID: 28533050 PMCID: PMC5896773 DOI: 10.1016/j.media.2017.05.004] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2017] [Revised: 05/05/2017] [Accepted: 05/09/2017] [Indexed: 12/20/2022]
Abstract
In prostate cancer radiotherapy, computed tomography (CT) is widely used for dose planning purposes. However, because CT has low soft tissue contrast, it makes manual contouring difficult for major pelvic organs. In contrast, magnetic resonance imaging (MRI) provides high soft tissue contrast, which makes it ideal for accurate manual contouring. Therefore, the contouring accuracy on CT can be significantly improved if the contours in MRI can be mapped to the CT domain by registering MRI with CT of the same subject, which would eventually lead to high treatment efficacy. In this paper, we propose a bi-directional image synthesis based approach for MRI-to-CT pelvic image registration. First, we use patch-wise random forest with an auto-context model to learn the appearance mapping from CT to MRI domain, and then vice versa. Consequently, we can synthesize a pseudo-MRI whose anatomical structures are exactly the same as the CT's but with MRI-like appearance, and a pseudo-CT as well. Then, our MRI-to-CT registration can be steered in a dual manner, by simultaneously estimating two deformation pathways: 1) one from the pseudo-CT to the actual CT and 2) another from the actual MRI to the pseudo-MRI. Next, a dual-core deformation fusion framework is developed to iteratively and effectively combine these two registration pathways by using complementary information from both modalities. Experiments on a dataset with real pelvic CT and MRI have shown improved registration performance of the proposed method compared to conventional registration methods, indicating its high potential for translation to routine radiation therapy.
Affiliation(s)
- Xiaohuan Cao
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, China
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Yanrong Guo
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea

26
Song G, Han J, Zhao Y, Wang Z, Du H. A Review on Medical Image Registration as an Optimization Problem. Curr Med Imaging 2017; 13:274-283. [PMID: 28845149 PMCID: PMC5543570 DOI: 10.2174/1573405612666160920123955] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2016] [Revised: 09/05/2016] [Accepted: 09/06/2016] [Indexed: 11/25/2022]
Abstract
Objective: In the course of clinical treatment, a physician often requires several medical imaging modalities in order to obtain accurate and complete information about a patient. Medical image registration techniques can provide richer diagnostic and treatment information to doctors; this review aims to provide a comprehensive reference for researchers who treat image registration as an optimization problem. Methods: The essence of image registration is establishing the spatial association between two or more different images and recovering the transformation that relates them. The registration process is not fixed; its core purpose is finding the mapping between different images. Result: The major steps of image registration include geometric transformation, similarity measurement, iterative optimization and interpolation. Conclusion: This review organizes related image registration research methods and can serve as a brief reference for researchers on image registration.
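The transform / similarity / optimization loop that this review describes reduces, in its simplest form, to searching transform parameters that optimize a similarity measure. A deliberately minimal sketch: integer translations with sum of squared differences (SSD), exhaustive search standing in for a real optimizer, and circular shifting standing in for interpolation:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences, a basic mono-modal similarity."""
    return float(((a - b) ** 2).sum())

def register_translation(fixed, moving, max_shift=5):
    """Exhaustive search over integer (dy, dx) translations that
    minimise SSD between `fixed` and the shifted `moving` image."""
    best, best_t = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            s = ssd(fixed, shifted)
            if s < best:
                best, best_t = s, (dy, dx)
    return best_t
```

Real registration replaces each placeholder: richer transform models (affine, deformable), multimodal similarities such as mutual information, gradient-based or evolutionary optimizers, and proper sub-pixel interpolation.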
Affiliation(s)
- Guoli Song
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Jianda Han
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Yiwen Zhao
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Zheng Wang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Huibin Du
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; University of Chinese Academy of Sciences, Beijing 100049, China

27
Burgmans MC, den Harder JM, Meershoek P, van den Berg NS, Chan SXJM, van Leeuwen FWB, van Erkel AR. Phantom Study Investigating the Accuracy of Manual and Automatic Image Fusion with the GE Logiq E9: Implications for use in Percutaneous Liver Interventions. Cardiovasc Intervent Radiol 2017; 40:914-923. [PMID: 28204959 PMCID: PMC5409927 DOI: 10.1007/s00270-017-1607-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/14/2016] [Accepted: 02/03/2017] [Indexed: 01/05/2023]
Abstract
Purpose To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. Materials and Methods CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Results Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. Conclusion The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.
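The two accuracy figures contrasted above, the fiducial RMSD reported by the navigation software and the residual displacement of target lesions, are easy to state precisely. A small sketch with my own function names (not the Logiq E9 software's API):

```python
import numpy as np

def fiducial_rmsd(p, q):
    """Root-mean-square deviation between paired fiducial points (N, 3)."""
    return float(np.sqrt(np.mean(np.sum((p - q) ** 2, axis=1))))

def target_displacement(targets_ref, targets_after):
    """Per-target residual displacement after co-registration.  This is
    the clinically relevant error and, as the study shows, a low fiducial
    RMSD does not guarantee that it is small."""
    return np.linalg.norm(targets_ref - targets_after, axis=1)
```

The distinction matters because RMSD measures self-consistency at the registration points, while target displacement measures accuracy where the intervention actually happens.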
Affiliation(s)
- Mark Christiaan Burgmans
- Department of Radiology, Leiden University Medical Centre, Albinusdreef 2, 2300 RC, Leiden, The Netherlands
- J Michiel den Harder
- Department of Radiology, Leiden University Medical Centre, Albinusdreef 2, 2300 RC, Leiden, The Netherlands
- Philippa Meershoek
- Department of Radiology, Leiden University Medical Centre, Albinusdreef 2, 2300 RC, Leiden, The Netherlands; Interventional and Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Nynke S van den Berg
- Interventional and Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Shaun Xavier Ju Min Chan
- Department of Interventional Radiology, Singapore General Hospital, Outram Road, Singapore, 169608, Singapore
- Fijs W B van Leeuwen
- Interventional and Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Arian R van Erkel
- Department of Radiology, Leiden University Medical Centre, Albinusdreef 2, 2300 RC, Leiden, The Netherlands

28
Chen M, Carass A, Jog A, Lee J, Roy S, Prince JL. Cross contrast multi-channel image registration using image synthesis for MR brain images. Med Image Anal 2017; 36:2-14. [PMID: 27816859 PMCID: PMC5239759 DOI: 10.1016/j.media.2016.10.005] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2015] [Revised: 10/13/2016] [Accepted: 10/17/2016] [Indexed: 11/21/2022]
Abstract
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information.
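The framework's key move, replacing an information-theoretic metric with mono-modal measures applied to synthesized proxy images, can be captured in a few lines. A schematic sketch with hypothetical argument names; in the paper the synthesized images come from a trained synthesis model, not from anything shown here:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences, only meaningful within one modality."""
    return float(((a - b) ** 2).sum())

def two_channel_similarity(real_a, real_b, synth_a_from_b, synth_b_from_a):
    """Sum the same-modality SSDs over both channels: real modality-A image
    against the A-like image synthesized from B, and the B-like image
    synthesized from A against the real modality-B image."""
    return ssd(real_a, synth_a_from_b) + ssd(synth_b_from_a, real_b)
```

Because each term compares images of the same modality, simple measures such as SSD or cross-correlation become valid, which is the point the paper makes against mutual-information-only registration.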
Affiliation(s)
- Min Chen
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Aaron Carass
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Amod Jog
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Junghoon Lee
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Radiation Oncology and Molecular Radiation Sciences, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Snehashis Roy
- CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20892, USA
- Jerry L Prince
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA; Radiation Oncology and Molecular Radiation Sciences, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA

29
Burke CJ, Bencardino J, Adler R. The Potential Use of Ultrasound-Magnetic Resonance Imaging Fusion Applications in Musculoskeletal Intervention. J Ultrasound Med 2017; 36:217-224. [PMID: 27914184 DOI: 10.7863/ultra.16.02024] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/09/2016] [Accepted: 04/03/2016] [Indexed: 06/06/2023]
Abstract
We sought to assess the potential use of an application allowing real-time ultrasound spatial registration with previously acquired magnetic resonance imaging in musculoskeletal procedures. The ultrasound fusion application was used at our institution in 2015 to perform a range of outpatient procedures, including piriformis, sacroiliac joint, pudendal and intercostal nerve perineurial injections, barbotage of hamstring-origin calcific tendinopathy, and 2 soft tissue biopsies. The application was used in a total of 7 procedures in 7 patients, all of which were technically successful. Patient ages ranged from 19 to 86 years. Compared with sonography alone, the fusion application was particularly useful in the biopsy of certain soft tissue lesions and in perineurial therapeutic injections.
Affiliation(s)
- Christopher J Burke
- New York University Langone Medical Center, Hospital for Joint Diseases, New York, New York, USA
- Jenny Bencardino
- New York University Langone Medical Center, Hospital for Joint Diseases, New York, New York, USA
- Ronald Adler
- New York University Langone Medical Center, Hospital for Joint Diseases, New York, New York, USA
30. Yang M, Ding H, Kang J, Cong L, Zhu L, Wang G. Local structure orientation descriptor based on intra-image similarity for multimodal registration of liver ultrasound and MR images. Comput Biol Med 2016; 76:69-79. [DOI: 10.1016/j.compbiomed.2016.06.025]
31. Sastry R, Bi WL, Pieper S, Frisken S, Kapur T, Wells W, Golby AJ. Applications of Ultrasound in the Resection of Brain Tumors. J Neuroimaging 2016; 27:5-15. [PMID: 27541694] [DOI: 10.1111/jon.12382]
Abstract
Neurosurgery makes use of preoperative imaging to visualize pathology, inform surgical planning, and evaluate the safety of selected approaches. The utility of preoperative imaging for neuronavigation, however, is diminished by the well-characterized phenomenon of brain shift, in which the brain deforms intraoperatively as a result of craniotomy, swelling, gravity, tumor resection, cerebrospinal fluid (CSF) drainage, and many other factors. As such, there is a need for updated intraoperative information that accurately reflects intraoperative conditions. Since 1982, intraoperative ultrasound has allowed neurosurgeons to craft and update operative plans without ionizing radiation exposure or major workflow interruption. Continued evolution of ultrasound technology since its introduction has resulted in superior imaging quality, smaller probes, and more seamless integration with neuronavigation systems. Furthermore, the introduction of related imaging modalities, such as 3-dimensional ultrasound, contrast-enhanced ultrasound, high-frequency ultrasound, and ultrasound elastography, has dramatically expanded the options available to the neurosurgeon intraoperatively. In the context of these advances, we review the current state, potential, and challenges of intraoperative ultrasound for brain tumor resection. We begin by evaluating these ultrasound technologies and their relative advantages and disadvantages. We then review three specific applications of these ultrasound technologies to brain tumor resection: (1) intraoperative navigation, (2) assessment of extent of resection, and (3) brain shift monitoring and compensation. We conclude by identifying opportunities for future directions in the development of ultrasound technologies.
Affiliation(s)
- Rahul Sastry
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Wenya Linda Bi
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Sarah Frisken
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Tina Kapur
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- William Wells
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Alexandra J Golby
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA; Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
32. Lobachev O, Ulrich C, Steiniger BS, Wilhelmi V, Stachniss V, Guthe M. Feature-based multi-resolution registration of immunostained serial sections. Med Image Anal 2016; 35:288-302. [PMID: 27494805] [DOI: 10.1016/j.media.2016.07.010]
Abstract
The form and exact function of the blood vessel network in some human organs, like the spleen and bone marrow, are still open research questions in medicine. In this paper, we propose a method to register immunohistological stainings of serial sections of spleen and bone marrow specimens to enable the visualization and visual inspection of blood vessels. As these vary greatly in caliber, from mesoscopic (millimeter range) to microscopic (a few micrometers, comparable to a single erythrocyte), we utilize a multi-resolution approach. Our method is fully automatic; it is based on feature detection and sparse matching. We apply a rigid alignment and then a non-rigid deformation, iteratively dealing with increasingly smaller features. Our tool pipeline can already handle series of complete scans at extremely high resolution, up to 620 megapixels. The improvement presented extends the range of represented details down to the smallest capillaries. This paper provides details on the multi-resolution non-rigid registration approach we use. Our application is novel in the way the alignment and subsequent deformations are computed (using features, i.e., "sparse"). The deformations are based on all images in the stack ("global"). We also present volume renderings and a 3D reconstruction of the vascular network in human spleen and bone marrow at a level of detail not possible before. Our registration makes tracking of even the smallest blood vessels easy, thus granting experts a better understanding. A quantitative evaluation of our method and related state-of-the-art approaches with seven different quality measures shows the efficiency of our method. We also provide z-profiles and enlarged volume renderings from three different registrations for visual inspection.
Affiliation(s)
- Oleg Lobachev
- Visual Computing of University Bayreuth, 95440 Bayreuth, Germany.
- Christine Ulrich
- Psychology of Philipps-University Marburg, 35037 Marburg, Germany
- Birte S Steiniger
- Institute of Anatomy and Cell Biology of Philipps-University Marburg, 35037 Marburg, Germany
- Verena Wilhelmi
- Institute of Anatomy and Cell Biology of Philipps-University Marburg, 35037 Marburg, Germany
- Vitus Stachniss
- Restorative Dentistry and Endodontics of Philipps-University Marburg, 35037 Marburg, Germany
- Michael Guthe
- Visual Computing of University Bayreuth, 95440 Bayreuth, Germany
33. The use of ultrasound in intracranial tumor surgery. Acta Neurochir (Wien) 2016; 158:1179-85. [PMID: 27106844] [DOI: 10.1007/s00701-016-2803-7]
Abstract
BACKGROUND As an intraoperative imaging modality, ultrasound is a user-friendly and cost-effective real-time imaging technique. Despite this, it is still not routinely employed for brain tumor surgery. This may be due to the poor image quality in inexperienced hands, and the well-documented learning curve. However, with regular use, the operator issues are addressed, and intraoperative ultrasound can provide valuable real-time information. The aim of this review is to provide an understanding for neurosurgeons of the development and use of ultrasound in intracranial tumor surgery, and possible future advances. METHODS A systematic search of the electronic databases Embase, Medline OvidSP, PubMed, Cochrane, and Google Scholar regarding the use of ultrasound in intracranial tumor surgery was undertaken. RESULTS AND DISCUSSION Intraoperative ultrasound has been shown to be able to accurately account for brain shift and has potential for regular use in brain tumor surgery. Further developments in probe size, resolution, and image reconstruction techniques will ensure that intraoperative ultrasound is more accessible and attractive to the neuro-oncological surgeon. CONCLUSIONS This review has summarized the development of ultrasound and its uses with particular reference to brain tumor surgery, detailing the ongoing challenges in this area.
34. State of the Art of Ultrasound-Based Registration in Computer Assisted Orthopedic Interventions. Computational Radiology for Orthopaedic Interventions 2016. [DOI: 10.1007/978-3-319-23482-3_14]
35. Ghaffari A, Fatemizadeh E. RISM: Single-Modal Image Registration via Rank-Induced Similarity Measure. IEEE Trans Image Process 2015; 24:5567-5580. [PMID: 26390463] [DOI: 10.1109/tip.2015.2479462]
Abstract
The similarity measure is an important building block in image registration. Most traditional intensity-based similarity measures (e.g., sum of squared differences, correlation coefficient, and mutual information) assume a stationary image and pixel-by-pixel independence. These measures ignore the correlation between pixel intensities; hence, accurate image registration cannot be achieved, especially in the presence of spatially varying intensity distortions. Here, we assume that the spatially varying intensity distortion (such as a bias field) is a low-rank matrix. Based on this assumption, we formulate the image registration problem as a nonlinear and low-rank matrix decomposition (NLLRMD). Image registration and correction of the spatially varying intensity distortion are therefore achieved simultaneously. We establish the uniqueness of the NLLRMD and, on this basis, propose the rank of the difference image as a robust similarity measure in the presence of spatially varying intensity distortion. Finally, by incorporating Gaussian noise, we introduce a rank-induced similarity measure based on the singular values of the difference image. This measure produces clinically acceptable registration results on both the simulated and real-world problems examined in this paper, and outperforms other state-of-the-art measures such as the residual complexity approach.
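The central idea — that an aligned pair should differ only by a low-rank distortion, so the rank of the difference image serves as a dissimilarity measure — can be illustrated with a toy sketch (a simplified thresholded effective rank, not the authors' NLLRMD formulation):

```python
import numpy as np

def effective_rank(a, b, tol=1e-6):
    """Effective rank of the difference image: number of singular values above
    a relative noise threshold. A low rank suggests the residual is a smooth
    distortion (e.g., a bias field) rather than misalignment."""
    s = np.linalg.svd(a - b, compute_uv=False)
    return int(np.sum(s > tol * s.max())) if s.max() > 0 else 0

rng = np.random.default_rng(0)
img = rng.random((64, 64))
bias = np.outer(np.linspace(1.0, 2.0, 64), np.linspace(0.5, 1.5, 64))  # rank-1 field

aligned = img + bias                       # differs from img by the bias only
misaligned = np.roll(img, 5, axis=0) + bias

assert effective_rank(aligned, img) == 1     # only the rank-1 bias remains
assert effective_rank(misaligned, img) > 10  # misalignment inflates the rank
```

In the aligned case the residual is exactly the rank-1 bias field, so the effective rank stays at 1; a misaligned pair leaves a residual that no low-rank matrix can absorb.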
36. Mapping and characterizing endometrial implants by registering 2D transvaginal ultrasound to 3D pelvic magnetic resonance images. Comput Med Imaging Graph 2015; 45:11-25. [DOI: 10.1016/j.compmedimag.2015.07.007]
37. Automatic bone detection and soft tissue aware ultrasound-CT registration for computer-aided orthopedic surgery. Int J Comput Assist Radiol Surg 2015; 10:971-9. [PMID: 25895084] [DOI: 10.1007/s11548-015-1208-z]
Abstract
PURPOSE The transfer of preoperative CT data into the tracking system coordinates within an operating room is of high interest for computer-aided orthopedic surgery. In this work, we introduce a solution for intra-operative ultrasound-CT registration of bones. METHODS We have developed methods for fully automatic real-time bone detection in ultrasound images and global automatic registration to CT. The bone detection algorithm uses a novel bone-specific feature descriptor and was thoroughly evaluated on both in-vivo and ex-vivo data. A global optimization strategy aligns the bone surface, followed by a soft tissue aware intensity-based registration to provide higher local registration accuracy. RESULTS We evaluated the system on femur, tibia and fibula anatomy in a cadaver study with human legs, where magnetically tracked bone markers were implanted to yield ground truth information. An overall median system error of 3.7 mm was achieved on 11 datasets. CONCLUSION Global and fully automatic registration of bones acquired with ultrasound to CT is feasible, with bone detection and tracking operating in real time for immediate feedback to the surgeon.
38. Rivaz H, Chen SJS, Collins DL. Automatic deformable MR-ultrasound registration for image-guided neurosurgery. IEEE Trans Med Imaging 2015; 34:366-380. [PMID: 25248177] [DOI: 10.1109/tmi.2014.2354352]
Abstract
In this work, we present a novel algorithm for registration of 3-D volumetric ultrasound (US) and MR using Robust PaTch-based cOrrelation Ratio (RaPTOR). RaPTOR computes local correlation ratio (CR) values on small patches and adds the CR values to form a global cost function. It is therefore invariant to large amounts of spatial intensity inhomogeneity. We also propose a novel outlier suppression technique based on the orientations of the RaPTOR gradients. Our deformation is modeled with free-form cubic B-splines. We analytically derive the derivatives of RaPTOR with respect to the transformation, i.e., the displacement of the B-spline nodes, and optimize RaPTOR using a stochastic gradient descent approach. RaPTOR is validated on MR and tracked US images of neurosurgery. Deformable registration of the US and MR images acquired, respectively, before the operation and after resection is of considerable clinical importance, but challenging due, among other factors, to the large number of missing correspondences between the two images. This work is also novel in that it performs automatic registration of this challenging dataset. To validate the results, we manually locate corresponding anatomical landmarks in the US and MR images of tumor resection in brain surgery. Compared to rigid registration based on the tracking system alone, RaPTOR reduces the mean initial mTRE over 13 patients from 5.9 to 2.9 mm, and the maximum initial TRE from 17.0 to 5.9 mm. Each volumetric registration using RaPTOR takes about 30 s on a single CPU core. An important challenge in the field of medical image analysis is the shortage of publicly available datasets, which could both facilitate the translation of new algorithms to clinical settings and provide a benchmark for comparison. To address this problem, we will make our manually located landmarks available online.
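The patch-wise correlation-ratio cost that RaPTOR builds on can be sketched as follows (an illustrative simplification: non-overlapping patches and a binned correlation ratio, without the B-spline deformation model, outlier suppression, or stochastic gradient optimization described above):

```python
import numpy as np

def correlation_ratio(x, y, bins=16):
    """Correlation ratio of y given x: 1 - E[Var(y | x-bin)] / Var(y)."""
    x, y = x.ravel(), y.ravel()
    total_var = y.var()
    if total_var == 0:
        return 0.0
    span = x.max() - x.min() + 1e-12
    idx = np.clip(((x - x.min()) / span * bins).astype(int), 0, bins - 1)
    within = sum(y[idx == k].size * y[idx == k].var()
                 for k in range(bins) if np.any(idx == k))
    return 1.0 - within / (y.size * total_var)

def patch_cr_cost(fixed, moving, patch=8):
    """Sum of local correlation ratios over non-overlapping patches
    (higher means more similar)."""
    h, w = fixed.shape
    return sum(correlation_ratio(fixed[i:i + patch, j:j + patch],
                                 moving[i:i + patch, j:j + patch])
               for i in range(0, h - patch + 1, patch)
               for j in range(0, w - patch + 1, patch))

rng = np.random.default_rng(1)
mr = rng.random((64, 64))
us = np.exp(mr) + 0.01 * rng.random((64, 64))  # nonlinear but functional mapping
assert patch_cr_cost(mr, us) > patch_cr_cost(mr, np.roll(us, 7, axis=1))
```

Because the correlation ratio only requires a functional (not linear) intensity relationship within each patch, a cost of this form tolerates the nonlinear US/MR intensity mapping and, by working locally, spatial intensity inhomogeneity.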
39. Carvalho DDB, Klein S, Akkus Z, van Dijk AC, Tang H, Selwaness M, Schinkel AFL, Bosch JG, van der Lugt A, Niessen WJ. Joint intensity-and-point based registration of free-hand B-mode ultrasound and MRI of the carotid artery. Med Phys 2014; 41:052904. [PMID: 24784404] [DOI: 10.1118/1.4870383]
Abstract
PURPOSE To introduce a semiautomatic algorithm to perform the registration of free-hand B-Mode ultrasound (US) and magnetic resonance imaging (MRI) of the carotid artery. METHODS The authors' approach combines geometrical features and intensity information. The only user interaction consists of placing three seed points in US and MRI. First, the lumen centerlines are used as landmarks for point based registration. Subsequently, in a joint optimization the distance between centerlines and the dissimilarity of the image intensities is minimized. Evaluation is performed in left and right carotids from six healthy volunteers and five patients with atherosclerosis. For the validation, the authors measure the Dice similarity coefficient (DSC) and the mean surface distance (MSD) between carotid lumen segmentations in US and MRI after registration. The effect of several design parameters on the registration accuracy is investigated by an exhaustive search on a training set of five volunteers and three patients. The optimum configuration is validated on the remaining images of one volunteer and two patients. RESULTS On the training set, the authors achieve an average DSC of 0.74 and a MSD of 0.66 mm on volunteer data. For the patient data, the authors obtain a DSC of 0.77 and a MSD of 0.69 mm. In the independent set composed of patient and volunteer data, the DSC is 0.69 and the MSD is 0.87 mm. The experiments with different design parameters show that nonrigid registration outperforms rigid registration, and that the combination of intensity and point information is superior to approaches that use intensity or points only. CONCLUSIONS The proposed method achieves an accurate registration of US and MRI, and may thus enable multimodal analysis of the carotid plaque.
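The Dice similarity coefficient (DSC) used in the validation is straightforward to compute from binary lumen masks (a generic implementation for illustration, not the authors' code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two 36-pixel square masks overlapping in 24 pixels -> DSC = 48/72 = 2/3.
lumen_us = np.zeros((10, 10), bool)
lumen_us[2:8, 2:8] = True
lumen_mri = np.zeros((10, 10), bool)
lumen_mri[4:10, 2:8] = True
assert abs(dice(lumen_us, lumen_mri) - 2 / 3) < 1e-12
assert dice(lumen_us, lumen_us) == 1.0
```

DSC rewards volume overlap, while the complementary mean surface distance (MSD) penalizes boundary deviation; reporting both, as the authors do, guards against masks that overlap well but have locally displaced borders.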
Affiliation(s)
- Diego D B Carvalho
- Department of Radiology and Department of Medical Informatics, Biomedical Imaging Group Rotterdam, Erasmus MC, Rotterdam 3015 CE, The Netherlands
- Stefan Klein
- Department of Radiology and Department of Medical Informatics, Biomedical Imaging Group Rotterdam, Erasmus MC, Rotterdam 3015 CE, The Netherlands
- Zeynettin Akkus
- Biomedical Engineering, Erasmus MC, Rotterdam 3015 CE, The Netherlands
- Anouk C van Dijk
- Department of Radiology, Erasmus MC, Rotterdam 3015 CE, The Netherlands and Department of Neurology, Erasmus MC, Rotterdam 3015 CE, The Netherlands
- Hui Tang
- Department of Radiology and Department of Medical Informatics, Biomedical Imaging Group Rotterdam, Erasmus MC, Rotterdam 3015 CE, The Netherlands and Imaging Science and Technology, Faculty of Applied Sciences, Delft University of Technology, Delft 2600 AA, The Netherlands
- Mariana Selwaness
- Department of Radiology, Erasmus MC, Rotterdam 3015 CE, The Netherlands
- Arend F L Schinkel
- Department of Internal Medicine, Division of Pharmacology, Vascular and Metabolic Diseases, Erasmus MC, Rotterdam 3015 CE, The Netherlands and Department of Cardiology, Thoraxcenter, Erasmus MC, Rotterdam 3015 CE, The Netherlands
- Johan G Bosch
- Biomedical Engineering, Erasmus MC, Rotterdam 3015 CE, The Netherlands
- Aad van der Lugt
- Department of Radiology, Erasmus MC, Rotterdam 3015 CE, The Netherlands
- Wiro J Niessen
- Department of Radiology and Department of Medical Informatics, Biomedical Imaging Group Rotterdam, Erasmus MC, Rotterdam 3015 CE, The Netherlands and Imaging Science and Technology, Faculty of Applied Sciences, Delft University of Technology, Delft 2600 AA, The Netherlands
40. Deformable registration of preoperative MR, pre-resection ultrasound, and post-resection ultrasound images of neurosurgery. Int J Comput Assist Radiol Surg 2014; 10:1017-28. [PMID: 25373447] [DOI: 10.1007/s11548-014-1099-4]
Abstract
PURPOSE Sites that use ultrasound (US) in image-guided neurosurgery (IGNS) of brain tumors generally have three sets of imaging data: a preoperative magnetic resonance (MR) image, a pre-resection US, and a post-resection US. The MR image is usually acquired days before the surgery, the pre-resection US is obtained after the craniotomy but before the resection, and finally, the post-resection US scan is performed after the resection of the tumor. The craniotomy and tumor resection both cause brain deformation, which significantly reduces the accuracy of the MR-US alignment. METHOD Three unknown transformations exist between the three sets of imaging data: MR to pre-resection US, pre- to post-resection US, and MR to post-resection US. We use two algorithms that we have recently developed to perform the first two registrations (i.e., MR to pre-resection US and pre- to post-resection US). Regarding the third registration (MR to post-resection US), we evaluate three strategies. The first method performs a registration between the MR and pre-resection US, and another registration between the pre- and post-resection US. It then composes the two transformations to register MR and post-resection US; we call this method compositional registration. The second method ignores the pre-resection US and directly registers the MR and post-resection US; we refer to this method as direct registration. The third method is a combination of the first and second: it uses the solution of the compositional registration as an initial solution for the direct registration method. We call this method group-wise registration. RESULTS We use data from 13 patients provided in the MNI BITE database for all of our analysis. Registration of MR and pre-resection US reduces the average of the mean target registration error (mTRE) from 4.1 to 2.4 mm. Registration of pre- and post-resection US reduces the average mTRE from 3.7 to 1.5 mm. Regarding the registration of MR and post-resection US, all three strategies reduce the mTRE. The initial average mTRE is 5.9 mm, which reduces to 3.3 mm with the compositional method, 2.9 mm with the direct technique, and 2.8 mm with the group-wise method. CONCLUSION Deformable registration of MR and pre- and post-resection US images significantly improves their alignment. Among the three methods proposed for registering the MR to post-resection US, the group-wise method gives the lowest TRE values. Since the running time of all registration algorithms is less than 2 min on one core of a CPU, they can be integrated into IGNS systems for interactive use during surgery.
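The compositional strategy chains the two estimated transformations; for rigid or affine transforms in homogeneous coordinates this is simply a matrix product, as in this toy 2D sketch (hypothetical translation values; the paper's actual transforms are deformable):

```python
import numpy as np

def compose(t_ab, t_bc):
    """Transform A->C obtained by applying A->B first, then B->C."""
    return t_bc @ t_ab

# Hypothetical 2D homogeneous transforms (toy translations for illustration).
t_mr_pre = np.array([[1.0, 0.0, 2.0],
                     [0.0, 1.0, -1.0],
                     [0.0, 0.0, 1.0]])    # MR -> pre-resection US
t_pre_post = np.array([[1.0, 0.0, 0.5],
                       [0.0, 1.0, 3.0],
                       [0.0, 0.0, 1.0]])  # pre- -> post-resection US

t_mr_post = compose(t_mr_pre, t_pre_post)  # initialization for MR -> post-US

p = np.array([10.0, 20.0, 1.0])  # a point in MR space (homogeneous)
assert np.allclose(t_mr_post @ p, t_pre_post @ (t_mr_pre @ p))
```

The group-wise method then uses this composed transform only as a starting point and refines it against the post-resection US directly, which is why it outperforms either pure strategy above.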
41. Zhang Z, Liu F, Tsui H, Lau Y, Song X. A multiscale adaptive mask method for rigid intraoperative ultrasound and preoperative CT image registration. Med Phys 2014; 41:102903. [DOI: 10.1118/1.4895824]
42. Reinertsen I, Lindseth F, Askeland C, Iversen DH, Unsgård G. Intra-operative correction of brain-shift. Acta Neurochir (Wien) 2014; 156:1301-10. [PMID: 24696180] [DOI: 10.1007/s00701-014-2052-6]
Abstract
BACKGROUND Brain-shift is a major source of error in neuronavigation systems based on pre-operative images. In this paper, we present intra-operative correction of brain-shift using 3D ultrasound. METHODS The method is based on image registration of vessels extracted from pre-operative MRA and intra-operative power Doppler-based ultrasound and is fully integrated in the neuronavigation software. RESULTS We have performed correction of brain-shift in the operating room during surgery and provided the surgeon with updated information. Here, we present data from seven clinical cases with qualitative and quantitative error measures. CONCLUSION The registration algorithm is fast enough to provide the surgeon with updated information within minutes and accounts for large portions of the experienced shift. Correction of brain-shift can make pre-operative data like fMRI and DTI reliable for a longer period of time and increase the usefulness of the MR data as a supplement to intra-operative 3D ultrasound in terms of overview and interpretation.
43. Zhou W, Zhang L, Xie Y, Liang C. A novel technique for prealignment in multimodality medical image registration. Biomed Res Int 2014; 2014:726852. [PMID: 25162024] [PMCID: PMC4055031] [DOI: 10.1155/2014/726852]
Abstract
An image pair is often aligned initially with a rigid or affine transformation before a deformable registration method is applied in medical image registration. Inappropriate initial registration may compromise the registration speed or impede the convergence of the optimization algorithm. In this work, a novel technique is proposed for prealignment in both monomodality and multimodality image registration, based on statistical correlation of gradient information. A simple and robust algorithm is proposed to determine the rotational difference between two images based on orientation histogram matching, accumulated from the local orientation of each pixel without any feature extraction. Experimental results showed that it was effective in recovering the orientation angle between two unregistered images, with advantages over the existing edge-map-based method in multimodal settings. Applying the orientation detection to the registration of CT/MR, T1/T2 MRI, and monomodality images under rigid and nonrigid deformation improved the chances of finding the global optimum of the registration and reduced the search space of the optimization.
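The rotation-recovery idea — building magnitude-weighted histograms of local gradient orientation and matching them by circular shifting — can be sketched as follows (an illustrative reimplementation under assumed conventions, not the authors' code; the sign of the recovered angle depends on the orientation convention):

```python
import numpy as np

def orientation_histogram(img, bins=36):
    """Gradient-orientation histogram, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi),
                           weights=np.hypot(gx, gy))
    return hist

def estimate_rotation(a, b, bins=36):
    """Rotation of b relative to a (degrees, to bin resolution), found by
    circularly shifting b's histogram until it best matches a's."""
    ha, hb = orientation_histogram(a, bins), orientation_histogram(b, bins)
    scores = [np.dot(ha, np.roll(hb, s)) for s in range(bins)]
    return int(np.argmax(scores)) * 360 // bins

rng = np.random.default_rng(2)
img = rng.random((128, 128))
assert estimate_rotation(img, img) == 0
# A 90-degree array rotation is recovered as 90 or 270, depending on convention.
assert estimate_rotation(img, np.rot90(img)) in (90, 270)
```

Because only histograms are compared, no feature extraction or correspondence search is needed, which is what makes this attractive as a cheap prealignment step before deformable registration.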
Affiliation(s)
- Wu Zhou
- Shenzhen Key Laboratory for Low-Cost Healthcare, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Lijuan Zhang
- Shenzhen Key Laboratory for Low-Cost Healthcare, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yaoqin Xie
- Shenzhen Key Laboratory for Low-Cost Healthcare, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Changhong Liang
- Department of Radiology, Guangdong General Hospital, Guangzhou 510080, China
44. Fuerst B, Wein W, Müller M, Navab N. Automatic ultrasound-MRI registration for neurosurgery using the 2D and 3D LC(2) Metric. Med Image Anal 2014; 18:1312-9. [PMID: 24842859] [DOI: 10.1016/j.media.2014.04.008]
Abstract
To enable image-guided neurosurgery, alignment of pre-interventional magnetic resonance imaging (MRI) and intra-operative ultrasound (US) is commonly required. We present two automatic image registration algorithms using the similarity measure Linear Correlation of Linear Combination (LC(2)) to align either freehand US slices or US volumes with MRI images. Both approaches allow an automatic and robust registration, while the three-dimensional method yields a significantly improved percentage of optimally aligned registrations for randomly chosen clinically relevant initializations. This study presents a detailed description of the methodology and an extensive evaluation showing an accuracy of 2.51 mm, a precision of 0.85 mm, and a capture range of 15 mm (>95% convergence) using 14 clinical neurosurgical cases.
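On a single patch, LC(2) measures how much of the ultrasound intensity variance is explained by the best linear combination of MRI intensity, MRI gradient magnitude, and a constant. A minimal single-patch sketch (the full method aggregates weighted patch values over the image; this is not the authors' implementation):

```python
import numpy as np

def lc2_patch(us, mri):
    """LC(2) on one patch: fraction of US intensity variance explained by the
    best linear combination of MRI intensity, gradient magnitude, and a bias."""
    gy, gx = np.gradient(mri.astype(float))
    feats = np.column_stack([mri.ravel(), np.hypot(gx, gy).ravel(),
                             np.ones(mri.size)])
    target = us.ravel()
    coef, *_ = np.linalg.lstsq(feats, target, rcond=None)
    var = target.var()
    return 1.0 - (target - feats @ coef).var() / var if var > 0 else 0.0

rng = np.random.default_rng(3)
mri = rng.random((64, 64))
gy, gx = np.gradient(mri)
us_sim = 2.0 * mri + 3.0 * np.hypot(gx, gy) + 1.0  # exact linear combination
assert lc2_patch(us_sim, mri) > 0.99               # fully explained
assert lc2_patch(rng.random((64, 64)), mri) < 0.5  # unrelated image
```

Including the gradient-magnitude channel is the key design choice: ultrasound responds to tissue interfaces (echogenic boundaries) as well as tissue type, so neither MRI intensity nor its gradient alone predicts the US appearance well.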
Affiliation(s)
- Bernhard Fuerst
- Computer Aided Medical Procedures (CAMP), Technische Universität München, Boltzmannstraße 3, 85748 Garching b. München, Germany; Computer Aided Medical Procedures (CAMP), Johns Hopkins University, 3400 North Charles Street, Baltimore, Maryland 21218, USA.
- Wolfgang Wein
- ImFusion GmbH, Agnes-Pockels-Bogen 1, 80992 München, Germany.
- Markus Müller
- Computer Aided Medical Procedures (CAMP), Technische Universität München, Boltzmannstraße 3, 85748 Garching b. München, Germany; ImFusion GmbH, Agnes-Pockels-Bogen 1, 80992 München, Germany.
- Nassir Navab
- Computer Aided Medical Procedures (CAMP), Technische Universität München, Boltzmannstraße 3, 85748 Garching b. München, Germany; Computer Aided Medical Procedures (CAMP), Johns Hopkins University, 3400 North Charles Street, Baltimore, Maryland 21218, USA.
45. Samir C, Kurtek S, Srivastava A, Canis M. Elastic shape analysis of cylindrical surfaces for 3D/2D registration in endometrial tissue characterization. IEEE Trans Med Imaging 2014; 33:1035-1043. [PMID: 24770909] [DOI: 10.1109/tmi.2014.2300935]
Abstract
We study the problem of joint registration and deformation analysis of endometrial tissue using 3D magnetic resonance imaging (MRI) and 2D trans-vaginal ultrasound (TVUS) measurements. In addition to the different imaging techniques involved in the two modalities, this problem is complicated due to: 1) different patient pose during MRI and TVUS observations, 2) the 3D nature of MRI and 2D nature of TVUS measurements, 3) the unknown intersecting plane for TVUS in MRI volume, and 4) the potential deformation of endometrial tissue during TVUS measurement process. Focusing on the shape of the tissue, we use expert manual segmentation of its boundaries in the two modalities and apply, with modification, recent developments in shape analysis of parametric surfaces to this problem. First, we extend the 2D TVUS curves to generalized cylindrical surfaces through replication, and then we compare them with MRI surfaces using elastic shape analysis. This shape analysis provides a simultaneous registration (optimal reparameterization) and deformation (geodesic) between any two parametrized surfaces. Specifically, it provides optimal curves on MRI surfaces that match with the original TVUS curves. This framework results in an accurate quantification and localization of the deformable endometrial cells for radiologists, and growth characterization for gynecologists and obstetricians. We present experimental results using semi-synthetic data and real data from patients to illustrate these ideas.
46. Rivaz H, Karimaghaloo Z, Fonov VS, Collins DL. Nonrigid registration of ultrasound and MRI using contextual conditioned mutual information. IEEE Trans Med Imaging 2014; 33:708-725. [PMID: 24595344] [DOI: 10.1109/tmi.2013.2294630]
Abstract
Mutual information (MI) quantifies the information that is shared between two random variables and has been widely used as a similarity metric for multi-modal and uni-modal image registration. A drawback of MI is that it only takes into account the intensity values of corresponding pixels and not of neighborhoods. Therefore, it treats images as "bag of words" and the contextual information is lost. In this work, we present Contextual Conditioned Mutual Information (CoCoMI), which conditions MI estimation on similar structures. Our rationale is that it is more likely for similar structures to undergo similar intensity transformations. The contextual analysis is performed on one of the images offline. Therefore, CoCoMI does not significantly change the registration time. We use CoCoMI as the similarity measure in a regularized cost function with a B-spline deformation field and efficiently optimize the cost function using a stochastic gradient descent method. We show that compared to the state of the art local MI based similarity metrics, CoCoMI does not distort images to enforce erroneous identical intensity transformations for different image structures. We further present the results on nonrigid registration of ultrasound (US) and magnetic resonance (MR) patient data from image-guided neurosurgery trials performed in our institute and publicly available in the BITE dataset. We show that CoCoMI performs significantly better than the state of the art similarity metrics in US to MR registration. It reduces the average mTRE over 13 patients from 4.12 mm to 2.35 mm, and the maximum mTRE from 9.38 mm to 3.22 mm.
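Plain histogram-based mutual information — the baseline that CoCoMI extends by conditioning the estimate on contextual structure — can be computed as follows (a generic sketch, not the CoCoMI estimator itself):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram-based mutual information between two images (in nats)."""
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of x
    py = pxy.sum(axis=0, keepdims=True)  # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(4)
a, b = rng.random((64, 64)), rng.random((64, 64))
# Identical images share far more information than independent ones.
assert mutual_information(a, a) > mutual_information(a, b)
```

Note how the joint histogram is built purely from corresponding pixel pairs, with no notion of neighborhood — exactly the "bag of words" limitation the abstract describes, and the motivation for conditioning MI on similar structures.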
|
47
|
Kuklisova-Murgasova M, Cifor A, Napolitano R, Papageorghiou A, Quaghebeur G, Rutherford MA, Hajnal JV, Noble JA, Schnabel JA. Registration of 3D fetal neurosonography and MRI. Med Image Anal 2013; 17:1137-1150. [PMID: 23969169] [PMCID: PMC3807810] [DOI: 10.1016/j.media.2013.07.004]
Abstract
We propose a method for registration of 3D fetal brain ultrasound with a reconstructed magnetic resonance (MR) fetal brain volume. The method for the first time allows models of the fetal brain built from MR images to be aligned with 3D fetal brain ultrasound, opening possibilities for new, prior-information-based image analysis methods in 3D fetal neurosonography. The reconstructed MR volume is first segmented using a probabilistic atlas, and a pseudo-ultrasound image volume is simulated from the segmentation. This pseudo-ultrasound image is then affinely aligned with clinical ultrasound fetal brain volumes using a robust block-matching approach that can deal with intensity artefacts and missing features in the ultrasound images. A qualitative and quantitative evaluation demonstrates good performance of the method for our application, in comparison with other tested approaches. The intensity average of 27 ultrasound images co-aligned with the pseudo-ultrasound template correlates well with fetal brain anatomy as seen in the reconstructed MR image.
Affiliation(s)
- Maria Kuklisova-Murgasova
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK; Department of Biomedical Engineering, King's College London, UK; Centre for the Developing Brain, King's College London, UK.
|
48
|
Oreshkin BN, Arbel T. Uncertainty driven probabilistic voxel selection for image registration. IEEE Trans Med Imaging 2013; 32:1777-1790. [PMID: 23708789] [DOI: 10.1109/tmi.2013.2264467]
Abstract
This paper presents a novel probabilistic voxel selection strategy for medical image registration in time-sensitive contexts, where the goal is aggressive voxel sampling (e.g., using less than 1% of the total number) while maintaining registration accuracy and a low failure rate. We develop a Bayesian framework in which a voxel sampling probability field (VSPF) is first built from the uncertainty in the transformation parameters. We then describe a practical multi-scale registration algorithm in which, at each optimization iteration, a different voxel subset is sampled according to the VSPF. The approach maximizes accuracy without committing to a particular fixed subset of voxels. The probabilistic sampling scheme is shown to manage the trade-off between the robustness of traditional random voxel selection (by permitting more exploration) and the accuracy of fixed voxel selection (by permitting a greater proportion of informative voxels).
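The key mechanism is drawing a fresh voxel subset each iteration according to the VSPF. A minimal sketch, assuming the VSPF is already given as a non-negative array (the paper derives it from a Bayesian model of transformation-parameter uncertainty, which is not reproduced here):

```python
import numpy as np

def sample_voxels(vspf, n, rng):
    """Draw n distinct voxel indices according to a voxel sampling
    probability field (VSPF); voxels with zero probability are never drawn."""
    p = vspf.ravel() / vspf.sum()
    flat = rng.choice(p.size, size=n, replace=False, p=p)
    return np.unravel_index(flat, vspf.shape)
```

Calling this once per optimization iteration with a fresh `rng` draw yields a different informative subset each time, mirroring the exploration/accuracy trade-off described above.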
|
49
|
Cifor A, Risser L, Chung D, Anderson EM, Schnabel JA. Hybrid feature-based diffeomorphic registration for tumor tracking in 2-D liver ultrasound images. IEEE Trans Med Imaging 2013; 32:1647-1656. [PMID: 23674440] [DOI: 10.1109/tmi.2013.2262055]
Abstract
Real-time ultrasound image acquisition is a pivotal resource in the medical community, in spite of its limited image quality. This poses challenges to image registration methods, particularly those driven by intensity values. We address these difficulties with a novel diffeomorphic registration technique for tumor tracking in series of 2-D liver ultrasound images. Our method has two main characteristics: 1) each voxel is described by three image features: intensity, local phase, and phase congruency; 2) from each image feature we compute a set of forces, either from local information (Demons-type forces) or from spatial correspondences supplied by a block-matching scheme. A family of update deformation fields defined by these forces, capturing the local or regional contribution of each image feature, is then composed to form the final transformation. The method is diffeomorphic, which ensures the invertibility of the deformations. Qualitative and quantitative results on both synthetic and real clinical data show the suitability of our method for the application at hand.
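The Demons-type force mentioned above is, in its classic single-channel form, the Thirion update u = (m − f)∇f / (‖∇f‖² + (m − f)²). A minimal 2-D sketch of that single-channel force (the paper computes such forces per feature channel and composes the resulting update fields, which this sketch does not attempt):

```python
import numpy as np

def demons_force(fixed, moving):
    """Classic Thirion demons update field for one 2-D feature channel."""
    gy, gx = np.gradient(fixed)          # image gradient of the fixed channel
    diff = moving - fixed                # pointwise intensity mismatch
    denom = gx**2 + gy**2 + diff**2
    denom[denom == 0] = 1.0              # flat, matched regions get zero force
    return np.stack([diff * gx / denom, diff * gy / denom])
```

Where the two channels already agree (`moving == fixed`), the mismatch term vanishes and the force is identically zero, so perfectly aligned regions are left untouched.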
Affiliation(s)
- Amalia Cifor
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK.
|
50
|
Sotiras A, Davatzikos C, Paragios N. Deformable medical image registration: a survey. IEEE Trans Med Imaging 2013; 32:1153-1190. [PMID: 23739795] [PMCID: PMC3745275] [DOI: 10.1109/tmi.2013.2265603]
Abstract
Deformable image registration is a fundamental task in medical image processing. Among its most important applications one may cite: 1) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; 2) longitudinal studies, where temporal structural or anatomical changes are investigated; and 3) population modeling and statistical atlases used to study normal anatomical variability. In this paper, we give an overview of deformable registration methods, with emphasis on the most recent advances in the domain and on techniques applied to medical images. To study image registration methods in depth, their main components are identified and studied independently, and the most recent techniques are presented in a systematic fashion. The contribution of this paper is an extensive, systematic account of registration techniques.
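The survey's decomposition of registration into main components (a deformation model, a matching criterion, and an optimization method) can be illustrated with a deliberately tiny instance of each: integer translations as the model, SSD as the criterion, and exhaustive search as the optimizer. This toy sketch is not any method from the survey:

```python
import numpy as np

def register_translation(fixed, moving, max_shift=5):
    """Brute-force integer-translation registration: translation model,
    sum-of-squared-differences criterion, exhaustive-search optimizer."""
    best, best_t = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # apply the candidate transform (circular shift keeps shapes equal)
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = np.sum((fixed - shifted) ** 2)
            if ssd < best:
                best, best_t = ssd, (dy, dx)
    return best_t
```

Deformable methods replace each component with richer choices (e.g., B-spline or diffeomorphic models, MI-based criteria, gradient-based optimizers), but the three-part structure is the same.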
Affiliation(s)
- Aristeidis Sotiras
- Section of Biomedical Image Analysis, Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104 USA
- Christos Davatzikos
- Section of Biomedical Image Analysis, Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104 USA
- Nikos Paragios
- Center for Visual Computing, Department of Applied Mathematics, Ecole Centrale de Paris, Chatenay-Malabry, 92 295 FRANCE, the Equipe Galen, INRIA Saclay - Ile-de-France, Orsay, 91893 FRANCE and the Universite Paris-Est, LIGM (UMR CNRS), Center for Visual Computing, Ecole des Ponts ParisTech, Champs-sur-Marne, 77455 FRANCE
|