1
Huang M, Feng C, Sun D, Cui M, Zhao D. Segmentation of Clinical Target Volume From CT Images for Cervical Cancer Using Deep Learning. Technol Cancer Res Treat 2023; 22:15330338221139164. [PMID: 36601655 PMCID: PMC9829994 DOI: 10.1177/15330338221139164]
Abstract
Introduction: Segmentation of the clinical target volume (CTV) from CT images is critical for cervical cancer brachytherapy, but the task is time-consuming, laborious, and poorly reproducible. In this work, we propose an end-to-end model to segment the CTV for cervical cancer brachytherapy accurately. Methods: An improved M-Net model (Mnet_IM) is proposed to segment the CTV of cervical cancer from CT images. An input branch and an output branch are attached to the bottom layer to address the difficulty of locating the CTV, whose contrast is lower than that of surrounding organs and tissues. A progressive fusion approach then recovers the prediction results layer by layer to enhance the smoothness of the segmentation. A loss function is defined on each of the multiscale outputs to form a deep supervision mechanism, and the numbers of feature map channels directly connected to the inputs are homogenized for each image resolution to reduce feature redundancy and computational burden. Results: Experiments with the proposed model and several representative models on 5438 image slices from 53 cervical cancer patients demonstrate the advantages of the proposed model in segmentation accuracy, measured by average surface distance, 95% Hausdorff distance, surface overlap, surface Dice, and volumetric Dice. Conclusion: The CTV predicted by the proposed Mnet_IM agrees better with the manually labeled ground truth than that of representative state-of-the-art models.
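The overlap metrics listed in this abstract are standard; the volumetric Dice score in particular can be sketched in a few lines (a toy illustration over hypothetical voxel sets, not the authors' evaluation code):

```python
def dice(mask_a, mask_b):
    """Volumetric Dice similarity coefficient between two binary masks.

    Masks are represented as sets of voxel coordinates;
    Dice = 2 * |A intersect B| / (|A| + |B|).
    """
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
ref = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(round(dice(pred, ref), 3))  # 3 shared voxels, 4 + 4 total -> 0.75
```

Surface overlap and surface Dice follow the same pattern but are computed over boundary voxels within a distance tolerance rather than over whole volumes.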
Affiliation(s)
- Mingxu Huang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, Liaoning, China
- Chaolu Feng
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Deyu Sun
- Department of Radiation Oncology Gastrointestinal and Urinary and Musculoskeletal Cancer, Cancer Hospital of China Medical University, Shenyang, Liaoning, China
- Ming Cui
- Department of Radiation Oncology Gastrointestinal and Urinary and Musculoskeletal Cancer, Cancer Hospital of China Medical University, Shenyang, Liaoning, China
- Dazhe Zhao
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China. Correspondence: Dazhe Zhao, Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, Liaoning 110819, China.
2
Han T, Wu J, Luo W, Wang H, Jin Z, Qu L. Review of Generative Adversarial Networks in mono- and cross-modal biomedical image registration. Front Neuroinform 2022; 16:933230. [PMID: 36483313 PMCID: PMC9724825 DOI: 10.3389/fninf.2022.933230]
Abstract
Biomedical image registration refers to aligning corresponding anatomical structures across different images, which is critical to many tasks such as brain atlas building, tumor growth monitoring, and image fusion-based medical diagnosis. However, high-throughput biomedical image registration remains challenging due to inherent variations in intensity, texture, and anatomy resulting from different imaging modalities, different sample preparation methods, or different developmental stages of the imaged subject. Recently, Generative Adversarial Networks (GANs) have attracted increasing interest in both mono- and cross-modal biomedical image registration owing to their ability to eliminate modality variance and their adversarial training strategy. This paper provides a comprehensive survey of GAN-based mono- and cross-modal biomedical image registration methods. According to their implementation strategies, we organize these methods into four categories: modality translation, symmetric learning, adversarial strategies, and joint training. The key concepts, main contributions, and advantages and disadvantages of the different strategies are summarized and discussed. Finally, we analyze the statistics of all cited works from different points of view and identify future trends for GAN-based biomedical image registration studies.
Affiliation(s)
- Tingting Han
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Jun Wu
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Wenting Luo
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Huiming Wang
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Zhe Jin
- School of Artificial Intelligence, Anhui University, Hefei, China
- Lei Qu
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
3
Zabihollahy F, Viswanathan AN, Schmidt EJ, Lee J. Fully automated segmentation of clinical target volume in cervical cancer from magnetic resonance imaging with convolutional neural network. J Appl Clin Med Phys 2022; 23:e13725. [PMID: 35894782 PMCID: PMC9512359 DOI: 10.1002/acm2.13725]
Abstract
PURPOSE Contouring the clinical target volume (CTV) from medical images is an essential step in radiotherapy (RT) planning. Magnetic resonance imaging (MRI) is the standard imaging modality for CTV segmentation in cervical cancer due to its superior soft-tissue contrast. However, delineating the CTV is challenging because it contains microscopic extensions that are not clearly visible even on MR images, resulting in significant contour variability among radiation oncologists depending on their knowledge and experience. In this study, we propose a fully automated deep learning-based method to segment the CTV from MR images. METHODS Our method begins with bladder segmentation, from which the CTV position is estimated in the axial view. The superior-inferior CTV span is then detected using an Attention U-Net. A CTV-specific region of interest (ROI) is determined, and three-dimensional (3-D) blocks are extracted from the ROI volume. Finally, a CTV segmentation map is computed from the extracted 3-D blocks using a 3-D U-Net. RESULTS We developed and evaluated our method on 213 MRI scans from 125 patients (183 for training, 30 for testing). It achieved a Dice similarity coefficient (mean ± SD) of 0.85 ± 0.03 and a 95th-percentile Hausdorff distance of 3.70 ± 0.35 mm on test cases, significantly outperforming other state-of-the-art methods (p < 0.05). Our method also produces an uncertainty map alongside the CTV segmentation, obtained with the Monte Carlo dropout technique, to draw physicians' attention to regions of high uncertainty where careful review and manual correction may be needed. CONCLUSIONS Experimental results show that the developed method is accurate, fast, and reproducible for contouring the CTV from MRI, demonstrating its potential to relieve radiation oncologists of the burden of tedious contouring in RT planning for cervical cancer.
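The 95th-percentile Hausdorff distance reported above can be sketched as follows (a minimal illustration over hypothetical point sets, assuming a nearest-rank percentile of the pooled symmetric surface distances; published implementations differ in these conventions):

```python
import math

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two point sets.

    For each point, the distance to the closest point of the other set is
    collected; the 95th percentile (nearest rank) of the pooled distances
    is returned.
    """
    def nearest(p, pts):
        return min(math.dist(p, q) for q in pts)

    d = ([nearest(p, points_b) for p in points_a]
         + [nearest(q, points_a) for q in points_b])
    d.sort()
    k = max(0, math.ceil(0.95 * len(d)) - 1)  # nearest-rank percentile index
    return d[k]

print(hd95([(0, 0), (1, 0)], [(0, 0), (1, 1)]))  # 1.0
```

Taking a percentile instead of the maximum makes the metric robust to a handful of outlier surface points, which is why it is the usual choice for contour evaluation.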
Affiliation(s)
- Fatemeh Zabihollahy
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Akila N. Viswanathan
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Ehud J. Schmidt
- Division of Cardiology, Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Junghoon Lee
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
4
Ha IY, Heinrich MP. Modality-agnostic self-supervised deep feature learning and fast instance optimisation for multimodal fusion in ultrasound-guided interventions. Comput Methods Programs Biomed 2021; 211:106374. [PMID: 34601186 DOI: 10.1016/j.cmpb.2021.106374]
Abstract
BACKGROUND AND OBJECTIVE Fast and robust alignment of pre-operative MRI planning scans to intra-operative ultrasound is an important aspect of automatically supporting image-guided interventions. Thus far, learning-based approaches have failed to reconcile the intertwined objectives of fast inference and robustness to unexpectedly large motion and misalignment. In this work, we propose a novel method that decouples deep feature learning and the computation of long-ranging local displacement probability maps from fast and robust global transformation prediction. METHODS We first train a convolutional neural network (CNN) to extract modality-agnostic features, with sub-second inference times for both 3D volumes. Using sparsity-based network weight pruning, the model complexity and computation times can be substantially reduced. Based on these features, a large discretized search range of 3D motion vectors is explored to compute a probabilistic displacement map for each control point. These 3D probability maps are fed into our newly proposed, computationally efficient instance optimisation, which robustly estimates the most likely globally linear transformation that best reflects the local displacement beliefs, subject to outlier rejection. RESULTS Our experimental validation demonstrates state-of-the-art accuracy on the challenging CuRIOUS dataset, with an average target registration error of 2.50 mm, a model size of only 1.2 MByte, and run times of approximately 3 seconds for a full 3D multimodal registration. CONCLUSION We show that instance optimisation yields a significant improvement in accuracy and robustness, and that our fast self-supervised deep learning model achieves state-of-the-art accuracy on a challenging registration task in only 3 seconds.
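Sparsity-based weight pruning, used above to shrink the model, is commonly implemented by zeroing the smallest-magnitude weights; a minimal sketch of that idea (a toy illustration on a flat weight list, not the authors' code, and ignoring the retraining that usually follows pruning):

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.

    A toy version of magnitude-based pruning: weights whose absolute value
    falls at or below the chosen threshold are set to zero (so weights tied
    with the threshold are also removed).
    """
    flat = sorted(abs(w) for w in weights)
    k = int(sparsity * len(flat))               # number of weights to remove
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
print(prune_by_magnitude(w, 0.5))  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```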
Affiliation(s)
- In Young Ha
- Institute of Medical Informatics, University of Luebeck, Ratzeburger Allee 160, 23564 Luebeck, Germany
- Mattias P Heinrich
- Institute of Medical Informatics, University of Luebeck, Ratzeburger Allee 160, 23564 Luebeck, Germany
5

6
Liu X, Gao K, Liu B, Pan C, Liang K, Yan L, Ma J, He F, Zhang S, Pan S, Yu Y. Advances in Deep Learning-Based Medical Image Analysis. Health Data Sci 2021; 2021:8786793. [PMID: 38487506 PMCID: PMC10880179 DOI: 10.34133/2021/8786793]
Abstract
Importance: With the booming growth of artificial intelligence (AI), and especially the recent advances in deep learning, applying deep learning-based methods to medical image analysis has become an active research area in both industry and academia. This paper reviews the recent progress of deep learning research in medical image analysis and clinical applications, discusses open problems in the field, and suggests possible solutions and future directions. Highlights: The paper reviews the advancement of convolutional neural network-based techniques in clinical applications spanning four major human body systems: the nervous system, the cardiovascular system, the digestive system, and the skeletal system. Overall, according to the best available evidence, deep learning models perform well in medical image analysis, but algorithms derived from small-scale medical datasets still impede clinical applicability. Future directions include federated learning, benchmark dataset collection, and the use of domain knowledge as priors. Conclusion: Recent deep learning technologies have achieved great success in medical image analysis with high accuracy, efficiency, stability, and scalability. Technological advances that alleviate the demand for high-quality large-scale datasets are a likely future development in this area.
Affiliation(s)
- Bo Liu
- DeepWise AI Lab, Beijing, China
- Siyuan Pan
- Shanghai Jiaotong University, Shanghai, China
- Yizhou Yu
- DeepWise AI Lab, Beijing, China
- The University of Hong Kong, Hong Kong
7
Liebert A, Tkotz K, Herrler J, Linz P, Mennecke A, German A, Liebig P, Gumbrecht R, Schmidt M, Doerfler A, Uder M, Zaiss M, Nagel AM. Whole-brain quantitative CEST MRI at 7T using parallel transmission methods and B1+ correction. Magn Reson Med 2021; 86:346-362. [PMID: 33634505 DOI: 10.1002/mrm.28745]
Abstract
PURPOSE To enable whole-brain quantitative CEST MRI at ultra-high magnetic field strengths (B0 ≥ 7T) within short acquisition times. METHODS Multiple interleaved mode saturation (MIMOSA) was combined with fast online-customized (FOCUS) parallel transmission (pTx) excitation pulses and B1+ correction to achieve homogeneous whole-brain coverage. Examinations of 13 volunteers were performed on a 7T MRI system with 3 different types of pulse sequences: (1) saturation in circular polarized (CP) mode and CP-mode readout, (2) MIMOSA and CP readout, and (3) MIMOSA and FOCUS readout. For comparison, the inverse magnetization transfer ratio metric was calculated for the relayed nuclear Overhauser effect and amide proton transfer. To investigate the number of acquisitions required for a good B1+ correction, 4 volunteers were measured with 6 different B1 amplitudes. Finally, time-point repeatability was investigated in 6 volunteers. RESULTS The MIMOSA FOCUS sequence with B1+ correction, using either single or multiple points, reduced inhomogeneity of the CEST contrasts around the occipital lobe and cerebellum. The most stable inter-subject coefficient of variation was achieved with the MIMOSA FOCUS sequence. Time-point repeatability of MIMOSA FOCUS with single-point B1+ correction showed a maximum coefficient of variation below 8% over 3 measurements in a single volunteer. CONCLUSION A combination of MIMOSA FOCUS with single-point B1+ correction can be used to obtain quantitative CEST measurements at ultra-high magnetic field strengths. Compared with previous B1+ correction methods, acquisition time can be reduced because the additional scans required for B1+ correction can be omitted.
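The coefficient of variation used above as the repeatability metric is simply the standard deviation normalised by the mean; a minimal sketch with hypothetical measurement values (the use of the sample standard deviation is an assumption, as the abstract does not specify the estimator):

```python
import math

def coefficient_of_variation(values):
    """Coefficient of variation: standard deviation divided by the mean.

    The sample standard deviation (n - 1 denominator) is assumed, as is
    common for repeatability analyses over a small number of repeats.
    """
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return sd / mean

# Three hypothetical repeated CEST contrast measurements
print(round(coefficient_of_variation([1.00, 1.05, 0.95]), 3))  # 0.05
```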
Affiliation(s)
- Andrzej Liebert
- Institute of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Katharina Tkotz
- Institute of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Jürgen Herrler
- Department of Neuroradiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Peter Linz
- Institute of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Department of Nephrology and Hypertension, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Angelika Mennecke
- Department of Neuroradiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Alex German
- Department of Neuroradiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Manuel Schmidt
- Department of Neuroradiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Arnd Doerfler
- Department of Neuroradiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Michael Uder
- Institute of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Moritz Zaiss
- Department of Neuroradiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; High-Field Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany
- Armin M Nagel
- Institute of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Institute of Medical Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Division of Medical Physics in Radiology, German Cancer Research Centre (DKFZ), Heidelberg, Germany
8
Bae JP, Yoon S, Vania M, Lee D. Spatiotemporal Free-Form Registration Method Assisted by a Minimum Spanning Tree During Discontinuous Transformations. J Digit Imaging 2021; 34:190-203. [PMID: 33483863 DOI: 10.1007/s10278-020-00409-y]
Abstract
The sliding motion along the boundaries of discontinuous regions has been actively studied in the B-spline free-form deformation framework. This study focuses on sliding motion for a velocity field-based 3D+t registration. The discontinuity of the tangent direction guides the deformation of the object region, and separate control of the two regions provides better registration accuracy. Sliding motion under the velocity field-based transformation is handled with the α-Rényi entropy estimator using a minimum spanning tree (MST) topology. Moreover, a new method for changing the topology of the MST is proposed. The topology change is performed as follows: inserting random noise, constructing the MST, and removing the random noise while preserving the local connection consistency of the MST. This random noise process (RNP) prevents the α-Rényi entropy-based registration from degrading under sliding motion, because the RNP creates a small disturbance around special locations. Experiments were performed on two publicly available datasets: the DIR-Lab dataset, which consists of 4D pulmonary computed tomography (CT) images, and a benchmarking framework dataset for cardiac 3D ultrasound. For the 4D pulmonary CT images, the RNP significantly improved on the original MST with sliding motion (p < 0.05). For the cardiac 3D ultrasound dataset, only a discontinuity-based registration showed activity of the RNP, whereas the single MST without sliding motion did not show any improvement. These experiments demonstrate the effectiveness of the RNP for sliding motion.
Affiliation(s)
- Jang Pyo Bae
- Center for Healthcare Robotics, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, 02792, Korea
- Siyeop Yoon
- Center for Healthcare Robotics, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, 02792, Korea; Division of Bio-medical Science & Technology, KIST School, Korea University of Science and Technology, 02792, Seoul, Korea
- Malinda Vania
- Center for Healthcare Robotics, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, 02792, Korea; Division of Bio-medical Science & Technology, KIST School, Korea University of Science and Technology, 02792, Seoul, Korea
- Deukhee Lee
- Center for Healthcare Robotics, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, 02792, Korea; Division of Bio-medical Science & Technology, KIST School, Korea University of Science and Technology, 02792, Seoul, Korea
9
Multi-channel Image Registration of Cardiac MR Using Supervised Feature Learning with Convolutional Encoder-Decoder Network. Biomedical Image Registration 2020. [PMCID: PMC7279923 DOI: 10.1007/978-3-030-50120-4_10]
Abstract
It is difficult to register images involving large deformations and intensity inhomogeneity. In this paper, a new multi-channel registration algorithm using modified multi-feature mutual information (α-MI) based on a minimal spanning tree (MST) is presented. First, instead of relying on handcrafted features, a convolutional encoder-decoder network is employed to learn latent feature representations from cardiac MR images. Second, forward computation and backward propagation are performed in a supervised fashion to make the learned features more discriminative. Finally, local features containing appearance information are extracted and integrated into α-MI to achieve multi-channel registration. The proposed method has been evaluated on cardiac cine-MRI data from 100 patients. The experimental results show that features learned by the deep network are more effective than handcrafted features in guiding intra-subject registration of cardiac MR images.
10
White IM, Scurr E, Wetscherek A, Brown G, Sohaib A, Nill S, Oelfke U, Dearnaley D, Lalondrelle S, Bhide S. Realizing the potential of magnetic resonance image guided radiotherapy in gynaecological and rectal cancer. Br J Radiol 2019; 92:20180670. [PMID: 30933550 PMCID: PMC6592079 DOI: 10.1259/bjr.20180670]
Abstract
CT-based radiotherapy workflow is limited by poor soft tissue definition in the pelvis and reliance on rigid registration methods. Current image-guided radiotherapy and adaptive radiotherapy models therefore have limited ability to improve clinical outcomes. The advent of MRI-guided radiotherapy solutions provides the opportunity to overcome these limitations with the potential to deliver online real-time MRI-based plan adaptation on a daily basis, a true "plan of the day." This review describes the application of MRI guided radiotherapy in two pelvic tumour sites likely to benefit from this approach.
Affiliation(s)
- Ingrid M White
- Institute of Cancer Research and Royal Marsden National Health Service Foundation Trust, Sutton, Surrey, UK
- Erica Scurr
- Institute of Cancer Research and Royal Marsden National Health Service Foundation Trust, Sutton, Surrey, UK
- Andreas Wetscherek
- Institute of Cancer Research and Royal Marsden National Health Service Foundation Trust, Sutton, Surrey, UK
- Gina Brown
- Institute of Cancer Research and Royal Marsden National Health Service Foundation Trust, Sutton, Surrey, UK
- Aslam Sohaib
- Institute of Cancer Research and Royal Marsden National Health Service Foundation Trust, Sutton, Surrey, UK
- Simeon Nill
- Institute of Cancer Research and Royal Marsden National Health Service Foundation Trust, Sutton, Surrey, UK
- Uwe Oelfke
- Institute of Cancer Research and Royal Marsden National Health Service Foundation Trust, Sutton, Surrey, UK
- David Dearnaley
- Institute of Cancer Research and Royal Marsden National Health Service Foundation Trust, Sutton, Surrey, UK
- Susan Lalondrelle
- Institute of Cancer Research and Royal Marsden National Health Service Foundation Trust, Sutton, Surrey, UK
- Shreerang Bhide
- Institute of Cancer Research and Royal Marsden National Health Service Foundation Trust, Sutton, Surrey, UK
11
Image synthesis-based multi-modal image registration framework by using deep fully convolutional networks. Med Biol Eng Comput 2018; 57:1037-1048. [PMID: 30523534 DOI: 10.1007/s11517-018-1924-y]
Abstract
Multi-modal image registration has significant value in clinical diagnosis, treatment planning, and image-guided surgery. Since different modalities exhibit different characteristics, finding a fast and accurate correspondence between images of different modalities remains a challenge. In this paper, we propose an image synthesis-based multi-modal registration framework. Image synthesis is performed by a ten-layer fully convolutional network (FCN) combining convolutional layers with batch normalization (BN) and rectified linear units (ReLU), which can be trained to learn an end-to-end mapping from one modality to the other. After cross-modality image synthesis, multi-modal registration is transformed into mono-modal registration, which can be solved by methods with lower computational complexity, such as the sum of squared differences (SSD). We tested our method on T1-weighted vs T2-weighted, T1-weighted vs PD, and T2-weighted vs PD image registrations with BrainWeb phantom data and IXI real patient data. The results show that our framework achieves higher registration accuracy than state-of-the-art multi-modal image registration methods such as local mutual information (LMI) and α-mutual information (α-MI). The average registration errors of our method on the IXI real patient data were 1.19, 2.23, and 1.57, compared with 1.53, 2.60, and 2.36 for LMI and 1.34, 2.39, and 1.76 for α-MI in T2-weighted vs PD, T1-weighted vs PD, and T1-weighted vs T2-weighted registration, respectively. The deep FCN model captures the complex nonlinear relationship between modalities and automatically discovers complex structural representations through a large number of trainable mappings and parameters, enabling accurate image synthesis. Combined with mono-modal registration methods such as SSD, the framework achieves fast and robust multi-modal medical image registration. Graphical abstract: The workflow of the proposed multi-modal image registration framework.
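The mono-modal SSD matching that such a framework falls back on after synthesis can be illustrated with a toy exhaustive translation search (hypothetical 2D images and circular shifts; real registrations optimise continuous, richer transformations instead):

```python
def ssd(a, b):
    """Sum of squared differences between two equal-sized 2D images."""
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb))

def best_shift(fixed, moving, max_shift=2):
    """Exhaustively search integer (dy, dx) shifts and return the SSD minimiser."""
    h, w = len(fixed), len(fixed[0])

    def shifted(img, dy, dx):
        # Circularly shift the image by (dy, dx).
        return [[img[(y - dy) % h][(x - dx) % w] for x in range(w)]
                for y in range(h)]

    candidates = [(dy, dx)
                  for dy in range(-max_shift, max_shift + 1)
                  for dx in range(-max_shift, max_shift + 1)]
    return min(candidates, key=lambda s: ssd(fixed, shifted(moving, *s)))

# A bright pixel at (1, 2) in the fixed image and at (0, 1) in the moving one:
fixed = [[0] * 4 for _ in range(4)]
fixed[1][2] = 1
moving = [[0] * 4 for _ in range(4)]
moving[0][1] = 1
print(best_shift(fixed, moving))  # (1, 1)
```

SSD is only meaningful when the two images share an intensity mapping, which is exactly what the synthesis step is meant to guarantee.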
12
Guyader JM, Huizinga W, Fortunati V, Poot DHJ, Veenland JF, Paulides MM, Niessen WJ, Klein S. Groupwise Multichannel Image Registration. IEEE J Biomed Health Inform 2018; 23:1171-1180. [PMID: 29994230 DOI: 10.1109/jbhi.2018.2844361]
Abstract
Multichannel image registration is an important challenge in medical image analysis. Multichannel images result from modalities such as dual-energy CT or multispectral microscopy; in addition, multichannel feature images can be derived from acquired images, for instance by applying multiscale feature banks to the original images to be registered. Multichannel registration techniques have been proposed, but most are applicable to only two multichannel images at a time. In the present study, we formulate multichannel registration as a groupwise image registration problem. In this way, we derive a method that registers two or more multichannel images in a fully symmetric manner (i.e., all images play the same role in the registration procedure) and therefore has transitive consistency by definition. The method is applicable to any number of multichannel images and any number of channels per image, and it allows correlation between any pair of images, not just corresponding channels, to be taken into account. In addition, it is fully modular in terms of dissimilarity measure, transformation model, regularisation method, and optimisation strategy. For two multimodal datasets, we computed feature images from the initially acquired images and applied the proposed registration technique to the newly created sets of multichannel images. MIND descriptors were used as feature images, and we chose total correlation as the groupwise dissimilarity measure. Results show that groupwise multichannel image registration is a competitive alternative to the pairwise multichannel scheme in terms of registration accuracy and insensitivity to the choice of registration reference space.
13
Glodeck D, Hesser J, Zheng L. Potential of metric homotopy between intensity and geometry information for multi-modal 3D registration. Z Med Phys 2018; 28:325-334. [PMID: 29439849 DOI: 10.1016/j.zemedi.2018.01.004]
Abstract
This paper presents a novel strategy that increases robustness with respect to local optima when using mutual information (MI) in multi-modal image registration. This is realized by integrating additional geometry information into the cost function. The main innovation is a generalization of multi-metric registration approaches by means of a metric homotopy; in particular, we realize a method that automatically determines the parameters of the homotopy. To make the cost function independent of the choice of optimizer, the weighting is defined as a function of one of the metrics rather than of the optimizer steps, and a differentiable cost function is developed. Compared with the commonly used technique of performing intensity-based registration at multiple resolutions, the proposed method is three times faster with unchanged accuracy. It is also shown that, in the presence of large landmark errors, the proposed method outperforms, in accuracy, an approach in which the two similarity functionals are applied one after the other. The method is evaluated on 3D multi-modal human brain datasets from the Retrospective Image Registration Evaluation (RIRE) project. The evaluation is performed using the RIRE project's evaluation website so that the registration results of the proposed method are easily comparable to those of other methods; the presented results are therefore also available online on the RIRE project page.
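The MI term whose local optima this kind of strategy guards against is typically estimated from a joint intensity histogram; a minimal sketch of that estimator (toy bin count and intensities assumed to lie in [0, 1), not the paper's implementation):

```python
import math
from collections import Counter

def mutual_information(x, y, bins=4):
    """Histogram-based mutual information (in nats) between two sequences.

    Intensities are assumed to lie in [0, 1); they are quantised into `bins`
    equal bins and MI is computed from the joint and marginal histograms.
    """
    def q(v):
        return min(int(v * bins), bins - 1)

    pairs = [(q(a), q(b)) for a, b in zip(x, y)]
    n = len(pairs)
    pxy = Counter(pairs)               # joint histogram
    px = Counter(a for a, _ in pairs)  # marginal histogram of x
    py = Counter(b for _, b in pairs)  # marginal histogram of y
    # sum p(a,b) * log( p(a,b) / (p(a) * p(b)) )
    return sum((c / n) * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

x = [0.1, 0.1, 0.6, 0.9]
print(round(mutual_information(x, x), 4))  # 1.0397 (the entropy of x)
```

The discrete histogram makes the cost piecewise constant in the transformation parameters, which is one source of the local optima that motivate combining MI with geometric information.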
Affiliation(s)
- Daniel Glodeck
- Experimental Radiation Oncology, Department of Radiation Oncology, University Medical Center Mannheim, Heidelberg University, Germany.
- Jürgen Hesser
- Experimental Radiation Oncology, Department of Radiation Oncology, University Medical Center Mannheim, Heidelberg University, Germany; Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Germany.
- Lei Zheng
- Experimental Radiation Oncology, Department of Radiation Oncology, University Medical Center Mannheim, Heidelberg University, Germany.
14.
Patera A, Carl S, Stampanoni M, Derome D, Carmeliet J. A non-rigid registration method for the analysis of local deformations in the wood cell wall. ACTA ACUST UNITED AC 2018; 4:1. [PMID: 29399437 PMCID: PMC5778174 DOI: 10.1186/s40679-018-0050-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2017] [Accepted: 01/05/2018] [Indexed: 11/13/2022]
Abstract
This paper concerns the problem of registering images of wood cellular structure. Given the large variability of wood geometry and the important changes in cellular organization due to moisture sorption, an affine image registration technique is not sufficient to describe the overall hygro-mechanical behaviour of wood at micrometre scales. Additionally, freely available tools for non-rigid image registration are not suitable for quantifying the structural deformations of complex hierarchical materials such as wood, leading to errors due to misalignment. In this paper, we adapt an existing non-rigid registration model based on B-spline functions to our case study. The modified algorithm combines feature recognition within specific regions locally distributed in the material with an optimization problem. Results show that the method is able to quantify local deformations induced by moisture changes in tomographic images of the wood cell wall with high accuracy. The local deformations provide important new insights for characterizing the swelling behaviour of wood at the cell wall level.
Affiliation(s)
- Alessandra Patera
- Swiss Light Source, Paul Scherrer Institute, Villigen, Switzerland; Centre d'Imagerie BioMedicale, Ecole Polytechnique Federale de Lausanne, 1015 Lausanne, Switzerland
- Stephan Carl
- EMPA, Swiss Federal Laboratories for Materials Science and Technology, Laboratory for Multiscale Studies in Building Physics, Überlandstrasse 129, 8600 Dübendorf, Switzerland
- Marco Stampanoni
- Swiss Light Source, Paul Scherrer Institute, Villigen, Switzerland; ETH Zurich, Institute for Biomedical Engineering, Gloriastrasse 35, 8092 Zurich, Switzerland
- Dominique Derome
- EMPA, Swiss Federal Laboratories for Materials Science and Technology, Laboratory for Multiscale Studies in Building Physics, Überlandstrasse 129, 8600 Dübendorf, Switzerland
- Jan Carmeliet
- EMPA, Swiss Federal Laboratories for Materials Science and Technology, Laboratory for Multiscale Studies in Building Physics, Überlandstrasse 129, 8600 Dübendorf, Switzerland; ETH Zurich, Chair of Building Physics, Stefano-Franscini-Platz 1, Zürich Hönggerberg, 8093 Zurich, Switzerland
15.
Alam F, Rahman SU, Ullah S, Gulati K. Medical image registration in image guided surgery: Issues, challenges and research opportunities. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2017.10.001] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
16.
Liu X, Tang Z, Wang M, Song Z. Deformable multi-modal registration using 3D-FAST conditioned mutual information. Comput Assist Surg (Abingdon) 2017; 22:295-304. [DOI: 10.1080/24699322.2017.1389408] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022] Open
Affiliation(s)
- Xueli Liu
- Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Zhixian Tang
- Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Manning Wang
- Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Zhijian Song
- Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
17.
Li L, Pahwa S, Penzias G, Rusu M, Gollamudi J, Viswanath S, Madabhushi A. Co-Registration of ex vivo Surgical Histopathology and in vivo T2 weighted MRI of the Prostate via multi-scale spectral embedding representation. Sci Rep 2017; 7:8717. [PMID: 28821786 PMCID: PMC5562695 DOI: 10.1038/s41598-017-08969-w] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2017] [Accepted: 07/20/2017] [Indexed: 01/22/2023] Open
Abstract
Multi-modal image co-registration via optimizing mutual information (MI) is based on the assumption that intensity distributions of multi-modal images follow a consistent relationship. However, images with a substantial difference in appearance violate this assumption, thus MI directly based on image intensity alone may be inadequate to drive similarity based co-registration. To address this issue, we introduce a novel approach for multi-modal co-registration called Multi-scale Spectral Embedding Registration (MSERg). MSERg involves the construction of multi-scale spectral embedding (SE) representations from multimodal images via texture feature extraction, scale selection, independent component analysis (ICA) and SE to create orthogonal representations that decrease the dissimilarity between the fixed and moving images to facilitate better co-registration. To validate the MSERg method, we aligned 45 pairs of in vivo prostate MRI and corresponding ex vivo histopathology images. The dataset was split into a learning set and a testing set. In the learning set, length scales of 5 × 5, 7 × 7 and 17 × 17 were selected. In the independent testing set, we compared MSERg with intensity-based registration, multi-attribute combined mutual information (MACMI) registration and scale-invariant feature transform (SIFT) flow registration. Our results suggest that multi-scale SE representations generated by MSERg are found to be more appropriate for radiology-pathology co-registration.
Affiliation(s)
- Lin Li
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, 44106, United States of America.
- Shivani Pahwa
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, 44106, United States of America
- Gregory Penzias
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, 44106, United States of America
- Mirabela Rusu
- GE Global Research, Niskayuna, New York, 12309, United States of America
- Jay Gollamudi
- University Hospitals, Cleveland, Ohio, 44106, United States of America
- Satish Viswanath
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, 44106, United States of America
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, 44106, United States of America.
18.
Song G, Han J, Zhao Y, Wang Z, Du H. A Review on Medical Image Registration as an Optimization Problem. Curr Med Imaging 2017; 13:274-283. [PMID: 28845149 PMCID: PMC5543570 DOI: 10.2174/1573405612666160920123955] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2016] [Revised: 09/05/2016] [Accepted: 09/06/2016] [Indexed: 11/25/2022]
Abstract
Objective: In the course of clinical treatment, physicians require several imaging modalities in order to obtain accurate and complete information about a patient. Medical image registration techniques can provide richer diagnostic and treatment information to doctors, and this review aims to provide a comprehensive reference for researchers who treat image registration as an optimization problem. Methods: The essence of image registration is establishing a spatial association between two or more images and recovering the transformation that relates them. The process is not fixed; its core purpose is finding the mapping between different images. Result: The major steps of image registration include geometric transformation, image combination, similarity measurement, iterative optimization, and interpolation. Conclusion: This review organizes related image registration research methods and provides a brief reference for researchers on image registration.
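The optimization view of registration summarized in this abstract (transformation model, similarity measure, iterative optimization, interpolation) can be illustrated with a deliberately tiny example: recovering an integer 1D translation by exhaustively minimizing a sum-of-squared-differences cost. This is a hypothetical toy sketch, not code from the review, and real registration replaces each ingredient with richer choices (deformable transforms, mutual information, gradient-based optimizers):

```python
import numpy as np

def register_translation_1d(fixed, moving, max_shift=10):
    """Toy intensity-based registration: search integer shifts of
    `moving` and keep the one minimizing the SSD similarity cost."""
    best_shift, best_cost = 0, np.inf
    for t in range(-max_shift, max_shift + 1):
        cost = np.sum((fixed - np.roll(moving, t)) ** 2)
        if cost < best_cost:
            best_shift, best_cost = t, cost
    return best_shift
```

The exhaustive search stands in for the iterative optimizer; in practice, interpolation is needed because optimal shifts are rarely integer.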
Affiliation(s)
- Guoli Song
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Jianda Han
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Yiwen Zhao
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Zheng Wang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Huibin Du
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; University of Chinese Academy of Sciences, Beijing 100049, China
19.
Lu X, Yang R, Xie Q, Ou S, Zha Y, Wang D. Nonrigid registration with corresponding points constraint for automatic segmentation of cardiac DSCT images. Biomed Eng Online 2017; 16:39. [PMID: 28351368 PMCID: PMC5370472 DOI: 10.1186/s12938-017-0323-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2016] [Accepted: 02/10/2017] [Indexed: 12/01/2022] Open
Abstract
Background: Dual-source computed tomography (DSCT) is a very effective modality for the diagnosis and treatment of heart disease, and the quantitative information in spatiotemporal DSCT images can be important for the evaluation of cardiac function. To avoid the shortcomings of manual delineation, it is imperative to develop an automatic segmentation technique for 4D cardiac images. Methods: In this paper, we implement a heart segmentation-propagation framework based on nonrigid registration. Corresponding points of anatomical substructures are extracted using an extension of the n-dimensional scale-invariant feature transform method and used as a constraint term in nonrigid registration with free-form deformations, in order to restrain the large variations and boundary ambiguity between subjects. Results: We validate our method on 15 patients at ten time phases. Atlases are constructed from a training dataset of ten patients. On the remaining data, the median overlap improves significantly compared to original mutual information, in particular from 0.4703 to 0.5015 (p = 5.0 × 10⁻⁴) for the left ventricle myocardium and from 0.6307 to 0.6519 (p = 6.0 × 10⁻⁴) for the right atrium. Conclusions: The proposed method outperforms standard mutual information based on intensity only. Segmentation errors were significantly reduced at the left ventricle myocardium and the right atrium, and the mean surface distance using our framework is around 1.73 mm for the whole heart.
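The overlap scores reported in abstracts like this one are volumetric overlap measures, of which the Dice coefficient is the most common. A minimal version for binary segmentation masks (an illustrative sketch of the standard definition, not the paper's evaluation code):

```python
import numpy as np

def dice_overlap(seg_a, seg_b):
    """Volumetric Dice coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A Dice of 1 means identical masks; a Dice of 0 means no shared voxels.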
Affiliation(s)
- Xuesong Lu
- College of Biomedical Engineering, South-Central University for Nationalities, Wuhan, 430074, People's Republic of China
- Rongqian Yang
- School of Materials Science and Engineering, South China University of Technology, Guangzhou, 510006, People's Republic of China.
- Qinlan Xie
- College of Biomedical Engineering, South-Central University for Nationalities, Wuhan, 430074, People's Republic of China
- Shanxing Ou
- Radiology Department, Guangzhou General Hospital of Guangzhou Military Area Command, Guangzhou, 510010, People's Republic of China
- Yunfei Zha
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, 430060, People's Republic of China
- Defeng Wang
- Research Center for Medical Image Computing, Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China; Shenzhen Research Institute, The Chinese University of Hong Kong, Shenzhen, China.
20.
Shenoy R, Rose K. Deformable Registration of Biomedical Images Using 2D Hidden Markov Models. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2016; 25:4631-4640. [PMID: 27448351 DOI: 10.1109/tip.2016.2592702] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Robust registration of unimodal and multimodal images is a key task in biomedical image analysis, and is often utilized as an initial step on which subsequent analysis techniques critically depend. We propose a novel probabilistic framework, based on a variant of the 2D hidden Markov model, namely, the turbo hidden Markov model, to capture the deformation between pairs of images. The hidden Markov model is tailored to capture spatial transformations across images via state transitions, and modality-specific data costs via emission probabilities. The method is derived for the unimodal setting (where simpler matching metrics may be used) as well as the multimodal setting, where different modalities may provide very different representations for a given class of objects, necessitating the use of advanced similarity measures. We utilize a rich model with hundreds of model parameters to describe the deformation relationships across such modalities. We also introduce a local edge-adaptive constraint to allow for varying degrees of smoothness between object boundaries and homogeneous regions. The parameters of the described method are estimated in a principled manner from training data via maximum likelihood learning, and the deformation is subsequently estimated using an efficient dynamic programming algorithm. Experimental results demonstrate the improved performance of the proposed approach over the state-of-the-art deformable registration techniques, on both unimodal and multimodal biomedical data sets.
21.
Alam F, Rahman SU, Khusro S, Ullah S, Khalil A. Evaluation of Medical Image Registration Techniques Based on Nature and Domain of the Transformation. J Med Imaging Radiat Sci 2016; 47:178-193. [PMID: 31047182 DOI: 10.1016/j.jmir.2015.12.081] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2015] [Revised: 12/14/2015] [Accepted: 12/15/2015] [Indexed: 11/29/2022]
Abstract
A great deal of research has been done during the past 20 years in the area of medical image registration for obtaining detailed, important, and complementary information from two or more images and aligning them into a single, more informative image. The nature of the transformation and the domain of the transformation are two important criteria for categorizing medical image registration techniques that deal with the motion of objects in images. This article presents a detailed survey of the registration techniques that belong to both categories, with detailed elaboration of their features, issues, and challenges. Investigating similarity and dissimilarity measures and performance evaluation is the main objective of this work. This article also provides reference knowledge in a compact form for researchers and clinicians looking for the proper registration technique for a particular application.
Affiliation(s)
- Fakhre Alam
- Department of Computer Science & IT, University of Malakand, Khyber Pakhtunkhwa, Pakistan.
- Sami Ur Rahman
- Department of Computer Science & IT, University of Malakand, Khyber Pakhtunkhwa, Pakistan
- Shah Khusro
- Department of Computer Science, University of Peshawar, Peshawar, Pakistan
- Sehat Ullah
- Department of Computer Science & IT, University of Malakand, Khyber Pakhtunkhwa, Pakistan
- Adnan Khalil
- Department of Computer Science & IT, University of Malakand, Khyber Pakhtunkhwa, Pakistan
22.
Zha Y, Lu X, Wang L, Yang R, Ou S, Xing D, Wang D. Nonrigid Registration Regularized by Shape Information: Application to Atlas Construction of Cardiac CT Images. PLoS One 2015; 10:e0130730. [PMID: 26111054 PMCID: PMC4482436 DOI: 10.1371/journal.pone.0130730] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2014] [Accepted: 05/22/2015] [Indexed: 12/01/2022] Open
Abstract
Cardiac atlases play an important role in the computer-aided diagnosis of cardiovascular diseases; in particular, they need to deal with large and highly variable image datasets. In this paper, we propose a new nonrigid registration algorithm incorporating shape information to produce comprehensive atlases. First, multiscale gradient orientation features of the images are combined to construct a multifeature mutual information measure. Additionally, the shape information of multiple objects in the images is incorporated into the registration cost function. We demonstrate the merits of the new registration algorithm on 3D data sets of 15 patients. The experimental results show that the new registration algorithm outperforms the conventional intensity-based registration method, and the obtained atlas represents the cardiac structures more accurately.
Affiliation(s)
- Yunfei Zha
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan 430060, P. R. China
- Xuesong Lu
- College of Biomedical Engineering, South-Central University for Nationalities, Wuhan 430074, P. R. China
- Li Wang
- Department of Infection Control, Renmin Hospital of Wuhan University, Wuhan 430060, P. R. China
- Rongqian Yang
- School of Materials Science and Engineering, South China University of Technology, Guangzhou 510006, P. R. China
- Shanxing Ou
- Radiology Department, Guangzhou General Hospital of Guangzhou Military Area Command, Guangzhou 510010, P. R. China
- Dong Xing
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan 430060, P. R. China
- Defeng Wang
- Research Center for Medical Image Computing, Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- Shenzhen Research Institute, The Chinese University of Hong Kong, Shenzhen, China
23.
A review of segmentation and deformable registration methods applied to adaptive cervical cancer radiation therapy treatment planning. Artif Intell Med 2015; 64:75-87. [DOI: 10.1016/j.artmed.2015.04.006] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2014] [Revised: 04/16/2015] [Accepted: 04/26/2015] [Indexed: 01/18/2023]
25.
Haack S, Kallehauge JF, Jespersen SN, Lindegaard JC, Tanderup K, Pedersen EM. Correction of diffusion-weighted magnetic resonance imaging for brachytherapy of locally advanced cervical cancer. Acta Oncol 2014; 53:1073-8. [PMID: 25017378 DOI: 10.3109/0284186x.2014.938831] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
BACKGROUND Geometrical distortion is a major obstacle for the use of echo planar diffusion-weighted magnetic resonance imaging (DW-MRI) in planning of radiotherapy. This study compares geometrical distortion correction methods of DW-MRI at time of brachytherapy (BT) in locally advanced cervical cancer patients. MATERIAL AND METHODS In total 21 examinations comprising DW-MRI, dual gradient echo (GRE) for B₀ field map calculation and T2-weighted (T2W) fat-saturated MRI of eight patients with locally advanced cervical cancer were acquired during BT with a plastic tandem and ring applicator in situ. The ability of B0 field map correction (B₀M) and deformable image registration (DIR) to correct DW-MRI geometric image distortion was compared to the non-corrected DW-MRI including evaluation of apparent diffusion coefficient (ADC) for the gross tumor volume (GTV). RESULTS Geometrical distortion correction decreased tandem displacement from 3.3 ± 0.9 mm (non-corrected) to 2.9 ± 1.0 mm (B₀M) and 1.9 ± 0.6 mm (DIR), increased mean normalized cross-correlation from 0.69 ± 0.1 (non- corrected) to 0.70 ± 0.10 (B₀M) and 0.77 ± 0.1 (DIR), and increased the Jaccard similarity coefficient from 0.72 ± 0.1 (non-corrected) to 0.73 ± 0.06 (B₀M) and 0.77 ± 0.1 (DIR). For all parameters only DIR corrections were significant (p < 0.05). ADC of the GTV did not change significantly with either correction method. CONCLUSION DIR significantly improved geometrical accuracy of DW-MRI, with remaining residual uncertainties of less than 2 mm, while no significant improvement was seen using B₀ field map correction.
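The evaluation metrics used in this study, normalized cross-correlation and the Jaccard similarity coefficient, are straightforward to compute. Below is a sketch of the textbook definitions (illustrative only, not the study's actual analysis pipeline):

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| for binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def normalized_cross_correlation(img_a, img_b):
    """Zero-mean normalized cross-correlation of two images, in [-1, 1]."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

NCC is invariant to linear intensity changes, which is why it suits comparisons of the same anatomy across corrected and uncorrected acquisitions.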
Affiliation(s)
- Søren Haack
- Department of Clinical Engineering, Aarhus University Hospital , Aarhus , Denmark
26.
Robinson EC, Jbabdi S, Glasser MF, Andersson J, Burgess GC, Harms MP, Smith SM, Van Essen DC, Jenkinson M. MSM: a new flexible framework for Multimodal Surface Matching. Neuroimage 2014; 100:414-26. [PMID: 24939340 DOI: 10.1016/j.neuroimage.2014.05.069] [Citation(s) in RCA: 372] [Impact Index Per Article: 37.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2013] [Revised: 05/19/2014] [Accepted: 05/27/2014] [Indexed: 10/25/2022] Open
Abstract
Surface-based cortical registration methods that are driven by geometrical features, such as folding, provide sub-optimal alignment of many functional areas due to variable correlation between cortical folding patterns and function. This has led to the proposal of new registration methods using features derived from functional and diffusion imaging. However, as yet there is no consensus over the best set of features for optimal alignment of brain function. In this paper we demonstrate the utility of a new Multimodal Surface Matching (MSM) algorithm capable of driving alignment using a wide variety of descriptors of brain architecture, function and connectivity. The versatility of the framework originates from adapting the discrete Markov Random Field (MRF) registration method to surface alignment. This has the benefit of being very flexible in the choice of a similarity measure and relatively insensitive to local minima. The method offers significant flexibility in the choice of feature set, and we demonstrate the advantages of this by performing registrations using univariate descriptors of surface curvature and myelination, multivariate feature sets derived from resting fMRI, and multimodal descriptors of surface curvature and myelination. We compare the results with two state of the art surface registration methods that use geometric features: FreeSurfer and Spherical Demons. In the future, the MSM technique will allow explorations into the best combinations of features and alignment strategies for inter-subject alignment of cortical functional areas for a wide range of neuroimaging data sets.
Affiliation(s)
- Emma C Robinson
- FMRIB centre, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, OX3 9DU, UK
- Saad Jbabdi
- FMRIB centre, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, OX3 9DU, UK
- Matthew F Glasser
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St Louis, MO, USA
- Jesper Andersson
- FMRIB centre, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, OX3 9DU, UK
- Gregory C Burgess
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St Louis, MO, USA
- Michael P Harms
- Department of Psychiatry, Washington University School of Medicine, St Louis, MO, USA
- Stephen M Smith
- FMRIB centre, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, OX3 9DU, UK
- David C Van Essen
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St Louis, MO, USA
- Mark Jenkinson
- FMRIB centre, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, OX3 9DU, UK.
27.
Oh S, Jaffray D, Cho YB. A novel method to quantify and compare anatomical shape: application in cervix cancer radiotherapy. Phys Med Biol 2014; 59:2687-704. [PMID: 24786841 DOI: 10.1088/0031-9155/59/11/2687] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Adaptive radiation therapy (ART) has been proposed to restore dosimetric deficiencies during treatment delivery. In this paper, we developed a technique of Geometric reLocation for analyzing anatomical OBjects' Evolution (GLOBE) for numerical modeling of tumor evolution under radiation therapy and characterized geometric changes of the target using GLOBE. A total of 174 clinical target volumes (CTVs) obtained from 32 cervical cancer patients were analyzed. GLOBE consists of three main steps: (1) deforming a 3D surface object to a sphere by a parametric active contour (PAC); (2) sampling the deformed PAC on the 642 nodes of an icosahedral geodesic dome to form a reference frame; and (3) unfolding the 3D data to a 2D plane for convenient visualization and analysis. The performance was evaluated with respect to (1) convergence of deformation (iteration number and computation time) and (2) accuracy of deformation (residual deformation). Based on deformation vectors from the planning CTV to weekly CTVs, target-specific (TS) margins were calculated on each sampled node of GLOBE, and the systematic (Σ) and random (σ) variations of the vectors were computed. Population-based anisotropic (PBA) margins were generated using van Herk's margin recipe. GLOBE successfully modeled 152 CTVs from 28 patients. Fast convergence was observed for most cases (137/152), with an iteration number of 65 ± 74 (average ± STD) and a computation time of 13.7 ± 18.6 min. Residual deformation of the PAC was 0.9 ± 0.7 mm, and more than 97% was less than 3 mm. Margin analysis showed the random nature of the TS margin; as a consequence, PBA margins perform similarly to isotropic (ISO) margins. For example, the PBA margin for 90% patient coverage at the 95% dose level is close to a 13 mm ISO margin in terms of target coverage and OAR sparing. GLOBE enables a systematic analysis of tumor motion and deformation in patients with cervix cancer during radiation therapy and numerical modeling of the PBA margin at 642 locations on the CTV surface.
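Van Herk's margin recipe, referenced in this abstract, is commonly quoted in its simplified form M = 2.5Σ + 0.7σ, which aims to deliver at least 95% of the prescribed dose to the CTV for 90% of patients (the 0.7σ term approximates 1.64(σ′ − σₚ) for a typical penumbra width). A one-line sketch of the simplified form, assuming all standard deviations are in millimetres:

```python
def van_herk_margin(sigma_systematic, sigma_random):
    """Simplified van Herk CTV-to-PTV margin (mm):
    M = 2.5 * (systematic SD) + 0.7 * (random SD)."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random
```

For example, Σ = 4.0 mm and σ = 3.0 mm give a margin of about 12.1 mm, in the same range as the 13 mm ISO margin discussed above.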
Affiliation(s)
- Seungjong Oh
- Radiation Medicine Program, Princess Margaret Cancer Center, University Health Network, Canada
28.
Chen X, Egger J. Development of an open source software module for enhanced visualization during MR-guided interstitial gynecologic brachytherapy. SPRINGERPLUS 2014; 3:167. [PMID: 24790816 PMCID: PMC4004789 DOI: 10.1186/2193-1801-3-167] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/11/2013] [Accepted: 01/02/2014] [Indexed: 11/17/2022]
Abstract
In 2010, gynecologic malignancies were the fourth leading cause of death in U.S. women, and for patients with extensive primary or recurrent disease, treatment with interstitial brachytherapy may be an option. However, brachytherapy requires precise insertion of hollow catheters with introducers into the tumor in order to eradicate the cancer. In this study, a software solution to assist interstitial gynecologic brachytherapy has been investigated and realized as a dedicated module for 3D Slicer, a free open-source software platform for translational biomedical research. The developed research module allows on-time processing of intra-operative magnetic resonance imaging (iMRI) data over a direct DICOM connection to an MR scanner. This is followed by a multi-stage registration of CAD models of the brachytherapy devices (template, obturator) to the patient's MR images, enabling the virtual placement of interstitial needles to assist the physician during the intervention.
Affiliation(s)
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical engineering, Shanghai Jiao Tong University, Dong Chuan Road 800, Shanghai, Post Code: 200240 China
- Jan Egger
- Department of Medicine, University Hospital of Giessen and Marburg (UKGM), Baldingerstraße, Marburg, 35043 Germany
29.
Rivaz H, Karimaghaloo Z, Fonov VS, Collins DL. Nonrigid registration of ultrasound and MRI using contextual conditioned mutual information. IEEE TRANSACTIONS ON MEDICAL IMAGING 2014; 33:708-725. [PMID: 24595344 DOI: 10.1109/tmi.2013.2294630] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Mutual information (MI) quantifies the information that is shared between two random variables and has been widely used as a similarity metric for multi-modal and uni-modal image registration. A drawback of MI is that it only takes into account the intensity values of corresponding pixels and not of neighborhoods. Therefore, it treats images as "bag of words" and the contextual information is lost. In this work, we present Contextual Conditioned Mutual Information (CoCoMI), which conditions MI estimation on similar structures. Our rationale is that it is more likely for similar structures to undergo similar intensity transformations. The contextual analysis is performed on one of the images offline. Therefore, CoCoMI does not significantly change the registration time. We use CoCoMI as the similarity measure in a regularized cost function with a B-spline deformation field and efficiently optimize the cost function using a stochastic gradient descent method. We show that compared to the state of the art local MI based similarity metrics, CoCoMI does not distort images to enforce erroneous identical intensity transformations for different image structures. We further present the results on nonrigid registration of ultrasound (US) and magnetic resonance (MR) patient data from image-guided neurosurgery trials performed in our institute and publicly available in the BITE dataset. We show that CoCoMI performs significantly better than the state of the art similarity metrics in US to MR registration. It reduces the average mTRE over 13 patients from 4.12 mm to 2.35 mm, and the maximum mTRE from 9.38 mm to 3.22 mm.
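The plain mutual information measure that CoCoMI builds on can be estimated from a joint intensity histogram as I(A;B) = H(A) + H(B) − H(A,B). Below is a bare-bones sketch of that baseline (illustrative only, without the contextual conditioning the paper introduces):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information I(A;B) = H(A) + H(B) - H(A,B) in nats,
    estimated from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = joint / joint.sum()
    pa = p.sum(axis=1)  # marginal of A
    pb = p.sum(axis=0)  # marginal of B
    def h(q):
        q = q[q > 0]
        return -np.sum(q * np.log(q))
    return h(pa) + h(pb) - h(p.ravel())
```

As the abstract notes, this estimate sees only corresponding intensity pairs ("bag of words"); CoCoMI's contribution is to condition the estimate on similar local structures.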
|
30
|
Self-similarity weighted mutual information: A new nonrigid image registration metric. Med Image Anal 2014; 18:343-58. [DOI: 10.1016/j.media.2013.12.003] [Citation(s) in RCA: 79] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2013] [Revised: 10/07/2013] [Accepted: 12/07/2013] [Indexed: 11/19/2022]
|
31
|
Bagci U, Foster B, Miller-Jaster K, Luna B, Dey B, Bishai WR, Jonsson CB, Jain S, Mollura DJ. A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging. EJNMMI Res 2013; 3:55. [PMID: 23879987 PMCID: PMC3734217 DOI: 10.1186/2191-219x-3-55] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2013] [Accepted: 07/06/2013] [Indexed: 12/19/2022] Open
Abstract
Background Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterizations is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline to our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers’ understanding of infectious diseases. Methods We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB) for a total of 92, 44, and 24 scans for the respective studies with half of the scans from CT and the other half from PET. 
Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists' delineations (ground truths). Metabolic and gross volumes of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics outputted from the proposed framework. Results Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to state-of-the-art segmentation algorithms. For evaluation of the segmentation results, the Dice similarity coefficient (DSC) as an overlap measure and the Hausdorff distance as a shape dissimilarity measure were used. Significant correlations regarding the estimated lesion volumes were obtained in both CT and PET images with respect to the ground truths (R² = 0.8922, p < 0.01 and R² = 0.8664, p < 0.01, respectively).
The segmentation accuracy (DSC (%)) was 93.4±4.5% for normal lung CT scans and 86.0±7.1% for pathological lung CT scans. Experiments showed excellent agreement (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from quantitative analysis of each animal, we also qualitatively showed how metabolic volumes changed over time by examining serial PET/CT scans. Evaluation of the registration processes was based on precisely defined anatomical landmark points identified by expert clinicians. Average errors of 2.66, 3.93, and 2.52 mm were found in the rabbit, ferret, and mouse data, respectively (all within the resolution limits). Quantitative results obtained from the proposed methodology were visually related to the progress and severity of the pulmonary infections as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and appeared multi-focal in nature, and we observed similar patterns in the CT images as well. Consolidation and ground glass opacity were the main abnormal imaging patterns and consistently appeared in all CT images. We also found that the gross and metabolic lesion volume percentages follow the same trend as the SUV-based evaluation in the longitudinal analysis. Conclusions We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computer-aided lesion detection and quantification of pulmonary infections in small animal models are efficient and accurate compared to the clinical standard of manual and semi-automated approaches.
Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings that ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy.
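The evaluation above relies on the Dice similarity coefficient and the Hausdorff distance, both of which have compact definitions that can be sketched directly. This is a brute-force illustration on invented toy masks, suitable only for small inputs (practical pipelines use KD-tree or distance-transform implementations):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between point sets of shape (N,2), (M,2)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two overlapping 6x6 squares, offset by one pixel in each direction
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), bool); b[3:9, 3:9] = True
dsc = dice(a, b)
hd = hausdorff(np.argwhere(a).astype(float), np.argwhere(b).astype(float))
```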
Affiliation(s)
- Ulas Bagci
- Center for Infectious Disease Imaging, National Institutes of Health, Bethesda, MD 20892, USA.
|
32
|
Sotiras A, Davatzikos C, Paragios N. Deformable medical image registration: a survey. IEEE TRANSACTIONS ON MEDICAL IMAGING 2013; 32:1153-90. [PMID: 23739795 PMCID: PMC3745275 DOI: 10.1109/tmi.2013.2265603] [Citation(s) in RCA: 558] [Impact Index Per Article: 50.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: 1) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; 2) longitudinal studies, where temporal structural or anatomical changes are investigated; and 3) population modeling and statistical atlases used to study normal anatomical variability. In this paper, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently. The most recent techniques are presented in a systematic fashion. The contribution of this paper is to provide an extensive account of registration techniques in a systematic manner.
Affiliation(s)
- Aristeidis Sotiras
- Section of Biomedical Image Analysis, Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104 USA
- Christos Davatzikos
- Section of Biomedical Image Analysis, Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104 USA
- Nikos Paragios
- Center for Visual Computing, Department of Applied Mathematics, Ecole Centrale de Paris, Chatenay-Malabry, 92 295 FRANCE, the Equipe Galen, INRIA Saclay - Ile-de-France, Orsay, 91893 FRANCE and the Universite Paris-Est, LIGM (UMR CNRS), Center for Visual Computing, Ecole des Ponts ParisTech, Champs-sur-Marne, 77455 FRANCE
|
33
|
Towards realtime multimodal fusion for image-guided interventions using self-similarities. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2013; 16:187-94. [PMID: 24505665 DOI: 10.1007/978-3-642-40811-3_24] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
Abstract
Image-guided interventions often rely on deformable multimodal registration to align pre-treatment and intra-operative scans. There are a number of requirements for automated image registration for this task, such as a robust similarity metric for scans of different modalities with different noise distributions and contrast, an efficient optimisation of the cost function to enable fast registration for this time-sensitive application, and an insensitive choice of registration parameters to avoid delays in practical clinical use. In this work, we build upon the concept of structural image representation for multi-modal similarity. Discriminative descriptors are densely extracted for the multi-modal scans based on the "self-similarity context". An efficient quantised representation is derived that enables very fast computation of point-wise distances between descriptors. A symmetric multi-scale discrete optimisation with diffusion regularisation is used to find smooth transformations. The method is evaluated for the registration of 3D ultrasound and MRI brain scans for neurosurgery and demonstrates a significantly reduced registration error (on average 2.1 mm) compared to commonly used similarity metrics and computation times of less than 30 seconds per 3D registration.
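The core idea of a self-similarity descriptor, comparing a patch with its own neighbouring patches rather than with the other modality, can be illustrated with a deliberately simplified sketch. This is our toy reading of the idea (four axial offsets, a single scale, invented names), not the paper's exact "self-similarity context" construction or its quantised form:

```python
import numpy as np

def self_similarity_descriptor(img, p):
    """Toy self-similarity descriptor at pixel p = (row, col): exp(-SSD)
    between the centre 3x3 patch and its 4 axial neighbour patches."""
    r = 1  # patch radius
    y, x = p

    def patch(cy, cx):
        return img[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)

    centre = patch(y, x)
    offsets = [(-2, 0), (2, 0), (0, -2), (0, 2)]   # up, down, left, right
    ssd = np.array([((patch(y + dy, x + dx) - centre) ** 2).sum()
                    for dy, dx in offsets])
    desc = np.exp(-ssd / (ssd.mean() + 1e-12))     # crude noise normalisation
    return desc / desc.max()

img = np.zeros((9, 9)); img[:, 5:] = 1.0           # vertical intensity edge
d = self_similarity_descriptor(img, (4, 4))        # point just left of the edge
```

Because the descriptor only encodes how a patch relates to its own surroundings, it is comparable across modalities with very different intensity mappings, which is what makes such representations attractive for multimodal fusion.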
|
34
|
Multimodal surface matching: fast and generalisable cortical registration using discrete optimisation. INFORMATION PROCESSING IN MEDICAL IMAGING : PROCEEDINGS OF THE ... CONFERENCE 2013; 23:475-86. [PMID: 24683992 DOI: 10.1007/978-3-642-38868-2_40] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
Group neuroimaging studies of the cerebral cortex benefit from accurate, surface-based, cross-subject alignment for investigating brain architecture, function and connectivity. There is an increasing amount of high quality data available. However, establishing how different modalities correlate across groups remains an open research question. One reason for this is that the current methods for registration, based on cortical folding, provide sub-optimal alignment of some functional subregions of the brain. A more flexible framework is needed that will allow robust alignment of multiple modalities. We adapt the Fast Primal-Dual (Fast-PD) approach for discrete Markov Random Field (MRF) optimisation to spherical registration by reframing the deformation labels as a discrete set of rotations and propose a novel regularisation term, derived from the geodesic distance between rotation matrices. This formulation allows significant flexibility in the choice of similarity metric. To this end we propose a new multivariate cost function based on the discretisation of a graph-based mutual information measure. Results are presented for alignment driven by scalar metrics of curvature and myelination, and multivariate features derived from functional task performance. These experiments demonstrate the potential of this approach for improving the integration of complementary brain data sets in the future.
|
35
|
Li D, Li H, Wan H, Chen J, Gong G, Wang H, Wang L, Yin Y. Multiscale registration of medical images based on edge preserving scale space with application in image-guided radiation therapy. Phys Med Biol 2012; 57:5187-204. [DOI: 10.1088/0031-9155/57/16/5187] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
36
|
Lu C, Chelikani S, Jaffray DA, Milosevic MF, Staib LH, Duncan JS. Simultaneous nonrigid registration, segmentation, and tumor detection in MRI guided cervical cancer radiation therapy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2012; 31:1213-27. [PMID: 22328178 PMCID: PMC3889159 DOI: 10.1109/tmi.2012.2186976] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
External beam radiation therapy (EBRT) for the treatment of cancer enables accurate placement of radiation dose on the cancerous region. However, the deformation of soft tissue during the course of treatment, such as in cervical cancer, presents significant challenges for the delineation of the target volume and other structures of interest. Furthermore, the presence and regression of pathologies such as tumors may violate registration constraints and cause registration errors. In this paper, automatic segmentation, nonrigid registration and tumor detection in cervical magnetic resonance (MR) data are addressed simultaneously using a unified Bayesian framework. The proposed novel method can generate a tumor probability map while progressively identifying the boundary of an organ of interest based on the achieved nonrigid transformation. The method is able to handle the challenges of significant tumor regression and its effect on surrounding tissues. The new method was compared to various existing algorithms on a set of 36 MR images from six patients, each with six T2-weighted cervical MR images. The results show that the proposed approach achieves an accuracy comparable to manual segmentation and significantly outperforms the existing registration algorithms. In addition, the tumor detection result generated by the proposed method shows high agreement with manual delineation by a qualified clinician.
Affiliation(s)
- Chao Lu
- Department of Electrical Engineering, School of Engineering and Applied Science, Yale University, New Haven, CT 06520, USA.
|
37
|
Abstract
This paper presents a review of automated image registration methodologies that have been used in the medical field. The aim of this paper is to be an introduction to the field, provide knowledge on the work that has been developed and to be a suitable reference for those who are looking for registration methods for a specific application. The registration methodologies under review are classified into intensity or feature based. The main steps of these methodologies, the common geometric transformations, the similarity measures and accuracy assessment techniques are introduced and described.
Affiliation(s)
- Francisco P M Oliveira
- Instituto de Engenharia Mecânica e Gestão Industrial, Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, 4200-465, Porto, Portugal
|
38
|
|
39
|
Iglesias JE, Sabuncu MR, Van Leemput K. A Generative Model for Probabilistic Label Fusion of Multimodal Data. ACTA ACUST UNITED AC 2012; 7509:115-133. [PMID: 25685856 DOI: 10.1007/978-3-642-33530-3_10] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/12/2023]
Abstract
The maturity of registration methods, in combination with the increasing processing power of computers, has made multi-atlas segmentation methods practical. The problem of merging the deformed label maps from the atlases is known as label fusion. Even though label fusion has been well studied for intramodality scenarios, it remains relatively unexplored when the nature of the target data is multimodal or when its modality is different from that of the atlases. In this paper, we review the literature on label fusion methods and also present an extension of our previously published algorithm to the general case in which the target data are multimodal. The method is based on a generative model that exploits the consistency of voxel intensities within the target scan based on the current estimate of the segmentation. Using brain MRI scans acquired with a multiecho FLASH sequence, we compare the method with majority voting, statistical-atlas-based segmentation, the popular package FreeSurfer and an adaptive local multi-atlas segmentation method. The results show that our approach produces highly accurate segmentations (Dice 86.3% across 22 brain structures of interest), outperforming the competing methods.
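Majority voting, the simplest of the label-fusion baselines this paper compares against (not its generative model), fuses the deformed atlas label maps by a per-voxel vote. A minimal sketch with invented toy label maps:

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse K registered atlas label maps (each H x W of integer labels)
    by a per-voxel majority vote; ties resolve to the lowest label index."""
    stack = np.stack(label_maps)                       # (K, H, W)
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == l).sum(axis=0)         # (n_labels, H, W)
                      for l in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy 2x2 atlas label maps warped to the target space
a1 = np.array([[0, 1], [2, 2]])
a2 = np.array([[0, 1], [1, 2]])
a3 = np.array([[0, 0], [2, 2]])
fused = majority_vote([a1, a2, a3])
```

The weakness motivating the paper's approach is visible here: the vote ignores the target scan's intensities entirely, whereas the generative model conditions the fusion on them.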
Affiliation(s)
- Koen Van Leemput
- Departments of Information and Computer Science and of Biomedical Engineering and Computational Science, Aalto University, Finland
|
40
|
Abstract
Nonrigid image registration methods based on the optimization of information-theoretic measures provide versatile solutions for robustly aligning mono-modal data with nonlinear variations and multi-modal data in radiology. Whereas mutual information and its variations arise as a first choice, generalized information measures offer relevant alternatives in specific clinical contexts. Their usual application setting is the alignment of image pairs by statistically matching scalar random variables (generally, greylevel distributions), handled via their probability densities. In this paper, we address the issue of estimating and optimizing generalized information measures over high-dimensional state spaces to derive multi-feature statistical nonrigid registration models. Specifically, we introduce novel consistent and asymptotically unbiased k-nearest neighbor estimators of alpha-informations, and study their variational optimization over finite- and infinite-dimensional smooth transform spaces. The resulting theoretical framework provides a well-posed and computationally efficient alternative to entropic graph techniques. Its performance is assessed on two cardiological applications: measuring myocardial deformations in tagged MRI, and compensating cardio-thoracic motions in perfusion MRI.
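The k-nearest-neighbor estimator family referred to above is easiest to see in its classical Shannon-entropy member, the Kozachenko-Leonenko estimator; the alpha-information estimators the paper introduces generalise the same construction. A self-contained 1-D sketch (generic, not the paper's estimator; a naive O(n·k) neighbour search is used for clarity):

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def digamma_int(n):
    """psi(n) for integer n >= 1, via psi(n) = -gamma + H_{n-1}."""
    return -EULER_GAMMA + sum(1.0 / i for i in range(1, n))

def knn_entropy_1d(x, k=3):
    """Kozachenko-Leonenko kNN estimate of differential entropy (nats)
    for 1-D samples: psi(n) - psi(k) + log(2) + mean(log eps_i),
    where eps_i is the distance from sample i to its k-th neighbour
    and log(2) is the log-volume of the 1-D unit ball."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    eps = np.empty(n)
    for i in range(n):                          # in sorted 1-D data the k-th
        lo, hi = max(0, i - k), min(n - 1, i + k)   # NN lies within +/- k slots
        d = np.abs(x[lo:hi + 1] - x[i])
        eps[i] = np.sort(d)[k]                  # d's smallest entry (0) is self
    return digamma_int(n) - digamma_int(k) + np.log(2.0) + np.mean(np.log(eps))

rng = np.random.default_rng(1)
h = knn_entropy_1d(rng.normal(size=2000))       # true value: 0.5*log(2*pi*e)
```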
|
41
|
Kotsas P, Dodd T. Rigid registration of medical images using 1D and 2D binary projections. J Digit Imaging 2011; 24:913-25. [PMID: 21086018 PMCID: PMC3180551 DOI: 10.1007/s10278-010-9352-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022] Open
Abstract
Image registration is a necessary procedure in everyday clinical practice. Several techniques for rigid and non-rigid registration have been developed and tested, and the state of the art is evolving from the research setting toward clinically useful tools that incorporate image registration techniques. In this paper, we develop a novel rigid medical image registration technique based on binary projections. The technique is tested and compared to standard mutual information (MI) methods. Results show that the method is significantly more accurate and robust than MI methods, with accuracy well below 0.5° and 0.5 mm. The method introduces two further improvements over MI methods: (1) for 2D registration using 1D binary projections, minimal interpolation is required; and (2) for 3D registration using 2D binary projections, the method converges to stable final positions, independent of the initial misregistration.
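The notion of a 1D binary projection can be illustrated with a simplified sketch: threshold the image, then record for each row and column whether it contains any foreground. This is our reading of the construction with invented names; the paper's actual projections and the dissimilarity measure driving its optimisation may differ:

```python
import numpy as np

def binary_projections_1d(img, thresh=0.5):
    """1-D binary projections of a thresholded image: entry r (resp. c)
    is 1 if row r (resp. column c) contains any above-threshold pixel."""
    mask = img > thresh
    return mask.any(axis=1).astype(int), mask.any(axis=0).astype(int)

img = np.zeros((6, 8))
img[2:4, 3:6] = 1.0                 # a small bright rectangle
rows, cols = binary_projections_1d(img)
```

Because a rigid shift of the image shifts these profiles by the same amount, comparing projections of a reference and a transformed image gives a cheap 1-D signal for estimating the transformation parameters.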
Affiliation(s)
- Panayiotis Kotsas
- Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, UK.
|
42
|
Chappelow J, Bloch BN, Rofsky N, Genega E, Lenkinski R, DeWolf W, Madabhushi A. Elastic registration of multimodal prostate MRI and histology via multiattribute combined mutual information. Med Phys 2011; 38:2005-18. [PMID: 21626933 DOI: 10.1118/1.3560879] [Citation(s) in RCA: 81] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE By performing registration of preoperative multiprotocol in vivo magnetic resonance (MR) images of the prostate with corresponding whole-mount histology (WMH) sections from postoperative radical prostatectomy specimens, an accurate estimate of the spatial extent of prostate cancer (CaP) on in vivo MR imaging (MRI) can be retrospectively established. This could allow for definition of quantitative image-based disease signatures and lead to development of classifiers for disease detection on multiprotocol in vivo MRI. Automated registration of MR and WMH images of the prostate is complicated by dissimilar image intensities, acquisition artifacts, and nonlinear shape differences. METHODS The authors present a method for automated elastic registration of multiprotocol in vivo MRI and WMH sections of the prostate. The method, multiattribute combined mutual information (MACMI), leverages all available multiprotocol image data to drive image registration using a multivariate formulation of mutual information. RESULTS Elastic registration using the multivariate MI formulation is demonstrated for 150 corresponding sets of prostate images from 25 patient studies with T2-weighted and dynamic contrast-enhanced MRI and 85 image sets from 15 studies with an additional functional apparent diffusion coefficient MRI series. Qualitative results of MACMI evaluation via visual inspection suggest that an accurate delineation of CaP extent on MRI is obtained. Results of quantitative evaluation on 150 clinical and 20 synthetic image sets indicate improved registration accuracy using MACMI compared to conventional pairwise mutual information-based approaches.
CONCLUSIONS The authors' approach to the registration of in vivo multiprotocol MRI and ex vivo WMH of the prostate using MACMI is unique, in that (1) information from all available image protocols is utilized to drive the registration with histology, (2) no additional, intermediate ex vivo radiology or gross histology images need be obtained in addition to the routinely acquired in vivo MRI series, and (3) no corresponding anatomical landmarks are required to be identified manually or automatically on the images.
Affiliation(s)
- Jonathan Chappelow
- Department of Biomedical Engineering, Rutgers University, Piscataway, New Jersey 08854, USA
|
43
|
Wang Q, Wu G, Yap PT, Shen D. Attribute vector guided groupwise registration. Neuroimage 2010; 50:1485-96. [PMID: 20097291 DOI: 10.1016/j.neuroimage.2010.01.040] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2009] [Revised: 12/12/2009] [Accepted: 01/11/2010] [Indexed: 11/16/2022] Open
Abstract
Groupwise registration has been recently introduced to simultaneously register a group of images by avoiding the selection of a particular template. To achieve this, several methods have been proposed to take advantage of information-theoretic entropy measures based on image intensity. However, simplistic utilization of voxelwise image intensity is not sufficient to establish reliable correspondences, since it lacks important contextual information. Therefore, we explore the notion of attribute vector as the voxel signature, instead of image intensity, to guide the correspondence detection in groupwise registration. In particular, for each voxel, the attribute vector is computed from its multi-scale neighborhoods, in order to capture the geometric information at different scales. The probability density function (PDF) of each element in the attribute vector is then estimated from the local neighborhood, providing a statistical summary of the underlying anatomical structure in that local pattern. Eventually, with the help of Jensen-Shannon (JS) divergence, a group of subjects can be aligned simultaneously by minimizing the sum of JS divergences across the image domain and all attributes. We have employed our groupwise registration algorithm on both real (NIREP NA0 data set) and simulated data (12 pairs of normal control and simulated atrophic data set). The experimental results demonstrate that our method yields better registration accuracy, compared with a popular groupwise registration method.
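The groupwise criterion above is a sum of Jensen-Shannon divergences over attribute distributions. The JS divergence itself has a short definitional sketch; this is the generic uniform-weight form, not the paper's full attribute-vector machinery:

```python
import numpy as np

def js_divergence(ps):
    """Jensen-Shannon divergence of a set of discrete PDFs (rows of ps)
    with uniform weights: H(mean of PDFs) - mean of H(PDF).
    Zero iff all distributions agree; bounded above by log(K)."""
    ps = np.asarray(ps, dtype=float)

    def H(p):
        p = p[p > 0]                    # 0 * log(0) treated as 0
        return -np.sum(p * np.log(p))

    m = ps.mean(axis=0)
    return H(m) - np.mean([H(p) for p in ps])

identical = js_divergence([[0.5, 0.5], [0.5, 0.5]])   # perfectly aligned
disjoint = js_divergence([[1.0, 0.0], [0.0, 1.0]])    # maximally misaligned
```

Minimising such a divergence across subjects pulls all the per-subject distributions toward their common mean, which is why it serves as a template-free alignment criterion.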
Affiliation(s)
- Qian Wang
- Department of Computer Science, University of North Carolina at Chapel Hill, NC 27599, USA.
|
44
|
Klein S, Staring M, Murphy K, Viergever MA, Pluim JPW. elastix: a toolbox for intensity-based medical image registration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2010; 29:196-205. [PMID: 19923044 DOI: 10.1109/tmi.2009.2035616] [Citation(s) in RCA: 2357] [Impact Index Per Article: 168.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Medical image registration is an important task in medical image processing. It refers to the process of aligning data sets, possibly from different modalities (e.g., magnetic resonance and computed tomography), different time points (e.g., follow-up scans), and/or different subjects (in case of population studies). A large number of methods for image registration are described in the literature. Unfortunately, there is not one method that works for all applications. We have therefore developed elastix, a publicly available computer program for intensity-based medical image registration. The software consists of a collection of algorithms that are commonly used to solve medical image registration problems. The modular design of elastix allows the user to quickly configure, test, and compare different registration methods for a specific application. The command-line interface enables automated processing of large numbers of data sets, by means of scripting. The usage of elastix for comparing different registration methods is illustrated with three example experiments, in which individual components of the registration method are varied.
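Since elastix is configured entirely through plain-text parameter files, a minimal example conveys how the modular components mentioned above (metric, transform, optimizer, sampler) are selected. The component names below are standard elastix identifiers, but the numeric values and file names are illustrative placeholders, not recommended settings; consult the elastix manual for complete, validated files.

```text
// Invoke as: elastix -f fixed.mhd -m moving.mhd -p bspline_mi.txt -out outdir
(Registration "MultiResolutionRegistration")
(FixedImagePyramid "FixedRecursiveImagePyramid")
(MovingImagePyramid "MovingRecursiveImagePyramid")
(Metric "AdvancedMattesMutualInformation")
(Transform "BSplineTransform")
(Optimizer "AdaptiveStochasticGradientDescent")
(Interpolator "BSplineInterpolator")
(ResampleInterpolator "FinalBSplineInterpolator")
(Resampler "DefaultResampler")
(NumberOfResolutions 4)
(MaximumNumberOfIterations 500)
(ImageSampler "RandomCoordinate")
(NumberOfSpatialSamples 2048)
(NewSamplesEveryIteration "true")
(FinalGridSpacingInPhysicalUnits 16.0)
```

Swapping a single line, e.g. the Metric or Transform, reconfigures the registration, which is the modularity the abstract emphasizes.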
Affiliation(s)
- Stefan Klein
- University Medical Center Utrecht, Image Sciences Institute, 3508 GA Utrecht, The Netherlands.
|
45
|
van der Put RW, Kerkhof EM, Raaymakers BW, Jürgenliemk-Schulz IM, Lagendijk JJW. Contour propagation in MRI-guided radiotherapy treatment of cervical cancer: the accuracy of rigid, non-rigid and semi-automatic registrations. Phys Med Biol 2009; 54:7135-50. [DOI: 10.1088/0031-9155/54/23/007] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
46
|
Nonrigid Registration of Brain Tumor Resection MR Images Based on Joint Saliency Map and Keypoint Clustering. SENSORS 2009; 9:10270-90. [PMID: 22303173 PMCID: PMC3267221 DOI: 10.3390/s91210270] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/23/2009] [Revised: 12/01/2009] [Accepted: 12/09/2009] [Indexed: 11/25/2022]
Abstract
This paper proposes a novel global-to-local nonrigid brain MR image registration method to compensate for brain shift and the unmatchable outliers caused by tumor resection. The mutual information between corresponding salient structures, which are enhanced by the joint saliency map (JSM), is maximized to achieve a global rigid registration of the two images. Detected and clustered at paired contiguous matching areas in the globally registered images, the paired pools of DoG keypoints, in combination with the JSM, provide a useful cluster-to-cluster correspondence to guide local control-point correspondence detection and outlier keypoint rejection. Lastly, a quasi-inverse-consistent deformation is smoothly approximated to locally register the brain images by mapping the clustered control points with compactly supported radial basis functions. The 2D implementation of the method can model the brain shift in brain tumor resection MR images, though the theory holds for the 3D case.
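The final step maps clustered control points through compactly supported radial basis functions. A generic sketch with the Wendland C2 kernel, a common choice for CSRBF warps though not necessarily the authors' exact formulation, shows how each control point's influence vanishes outside its support radius:

```python
import numpy as np

def wendland_c2(r):
    """Wendland C2 compactly supported RBF: (1-r)^4 (4r+1) for r < 1, else 0."""
    r = np.asarray(r, dtype=float)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def rbf_displacement(x, centres, weights, support):
    """Displacement at query points x (N,2) induced by control points
    centres (M,2) with per-centre weight vectors weights (M,2)."""
    r = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=-1) / support
    return wendland_c2(r) @ weights          # (N, M) @ (M, 2) -> (N, 2)

centres = np.array([[0.0, 0.0]])
weights = np.array([[1.0, 0.0]])             # push +x near the control point
disp_near = rbf_displacement(np.array([[0.0, 0.0]]), centres, weights, 2.0)
disp_far = rbf_displacement(np.array([[5.0, 0.0]]), centres, weights, 2.0)
```

The compact support is what keeps the deformation local: points farther than the support radius from every control point are left exactly fixed, which matters when the resection region must deform while healthy tissue stays put.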
|