1. Walluscheck S, Canalini L, Strohm H, Diekmann S, Klein J, Heldmann S. MR-CT multi-atlas registration guided by fully automated brain structure segmentation with CNNs. Int J Comput Assist Radiol Surg 2023;18:483-491. [PMID: 36334164] [PMCID: PMC9939492] [DOI: 10.1007/s11548-022-02786-x]
Abstract
PURPOSE Computed tomography (CT) is widely used to identify anomalies in brain tissues because their localization is important for diagnosis and therapy planning. Due to the insufficient soft-tissue contrast of CT, dividing the brain into anatomically meaningful regions is challenging and is commonly done with magnetic resonance imaging (MRI). METHODS We propose a multi-atlas registration approach to propagate anatomical information from a standard MRI brain atlas to CT scans. This translation enables detailed automated reporting of brain CT exams. We use masks of the lateral ventricles and the brain volume of the CT images as auxiliary input to guide the registration process. After first testing the registration with manually segmented structures, we verify that convolutional neural networks (CNNs) are a reliable solution for automatically segmenting the structures that guide the registration. RESULTS The registration method obtains mean Dice values of 0.92 and 0.99 for brain ventricles and parenchyma on 22 healthy test cases when guided by manually segmented structures. When guided by automatically segmented structures, the mean Dice values are 0.87 and 0.98, respectively. CONCLUSION Our registration approach is a fully automated solution for registering MRI atlas images to CT scans and thus obtaining detailed anatomical information. The proposed CNN segmentation method can be used to obtain the masks of the ventricles and brain volume that guide the registration.
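For illustration, here is a minimal NumPy sketch of the two ingredients this abstract relies on: Dice overlap as the agreement measure, and fusion of structure labels propagated from several atlases. The majority-vote fusion rule and the toy arrays are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks, as used to score ventricle/parenchyma agreement."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def majority_vote(propagated_labels):
    """Fuse label maps propagated from several atlases by a voxelwise majority vote."""
    stack = np.stack(propagated_labels)                  # (n_atlases, *vol_shape)
    n_classes = int(stack.max()) + 1
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# toy example: three 2x2 "atlases" voting per voxel
atlases = [np.array([[0, 1], [1, 2]]), np.array([[0, 1], [2, 2]]), np.array([[0, 0], [1, 2]])]
print(majority_vote(atlases))                            # [[0 1] [1 2]]
```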
Affiliation(s)
- Sina Walluscheck, Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Luca Canalini, Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Hannah Strohm, Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Susanne Diekmann, Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Jan Klein, Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Stefan Heldmann, Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
2. Li J, Qu Z, Yang Y, Zhang F, Li M, Hu S. TCGAN: a transformer-enhanced GAN for PET synthetic CT. Biomed Opt Express 2022;13:6003-6018. [PMID: 36733758] [PMCID: PMC9872870] [DOI: 10.1364/boe.467683]
Abstract
Multimodal medical images can be used in a multifaceted approach to resolve a wide range of medical diagnostic problems. However, such images are generally difficult to obtain due to various limitations, such as the cost of acquisition and patient safety. Medical image synthesis is therefore used in various tasks to obtain better results. Recently, several studies have applied generative adversarial networks to missing-modality image synthesis, making good progress. In this study, we propose a generator based on a combination of a transformer network and a convolutional neural network (CNN). The proposed method combines the advantages of transformers and CNNs to better preserve fine detail. The network is designed for positron emission tomography (PET) to computed tomography (CT) synthesis, which can be used for PET attenuation correction. We also experimented on two datasets for magnetic resonance T1- to T2-weighted image synthesis. Based on qualitative and quantitative analyses, our proposed method outperforms the existing methods.
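A minimal PyTorch sketch of the general idea (a CNN encoder for local features, a transformer bottleneck for global context, a CNN decoder) follows; this is not the authors' TCGAN, and all layer sizes are invented for illustration.

```python
import torch
import torch.nn as nn

class HybridGenerator(nn.Module):
    """Minimal CNN encoder -> transformer bottleneck -> CNN decoder (2D, single channel)."""
    def __init__(self, ch=64, depth=2, heads=4):
        super().__init__()
        self.enc = nn.Sequential(                       # downsample 4x, extract local features
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        layer = nn.TransformerEncoderLayer(d_model=ch, nhead=heads, batch_first=True)
        self.bottleneck = nn.TransformerEncoder(layer, num_layers=depth)  # global context
        self.dec = nn.Sequential(                       # upsample back to input resolution
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1))

    def forward(self, x):
        f = self.enc(x)                                 # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)           # (B, H*W/16, C) sequence of "patch" tokens
        f = self.bottleneck(tokens).transpose(1, 2).reshape(b, c, h, w)
        return self.dec(f)

pet = torch.randn(1, 1, 64, 64)                         # stand-in PET slice
print(HybridGenerator()(pet).shape)                     # torch.Size([1, 1, 64, 64])
```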
Affiliation(s)
- Jitao Li, College of Information Science and Engineering, Linyi University, Linyi, 276000, China; College of Chemistry and Chemical Engineering, Linyi University, Linyi, 276000, China (these authors contributed equally)
- Zongjin Qu, College of Chemistry and Chemical Engineering, Linyi University, Linyi, 276000, China (these authors contributed equally)
- Yue Yang, College of Information Science and Engineering, Linyi University, Linyi, 276000, China
- Fuchun Zhang, College of Information Science and Engineering, Linyi University, Linyi, 276000, China
- Meng Li, College of Information Science and Engineering, Linyi University, Linyi, 276000, China
- Shunbo Hu, College of Information Science and Engineering, Linyi University, Linyi, 276000, China
3. Heo JU, Zhou F, Jones R, Zheng J, Song X, Qian P, Baydoun A, Traughber MS, Kuo JW, Helo RA, Thompson C, Avril N, DeVincent D, Hunt H, Gupta A, Faraji N, Kharouta MZ, Kardan A, Bitonte D, Langmack CB, Nelson A, Kruzer A, Yao M, Dorth J, Nakayama J, Waggoner SE, Biswas T, Harris E, Sandstrom S, Traughber BJ, Muzic RF. Abdominopelvic MR to CT registration using a synthetic CT intermediate. J Appl Clin Med Phys 2022;23:e13731. [PMID: 35920116] [PMCID: PMC9512351] [DOI: 10.1002/acm2.13731]
Abstract
Accurate coregistration of computed tomography (CT) and magnetic resonance (MR) imaging can provide clinically relevant and complementary information and can facilitate multiple clinical tasks, including surgical and radiation treatment planning and generating a virtual positron emission tomography (PET)/MR for sites that do not have a PET/MR system available. Despite the long-standing interest in multimodality coregistration, a robust, routine clinical solution remains an unmet need. Part of the challenge may be the use of mutual information (MI) maximization and local phase difference (LPD) as similarity metrics, which offer limited robustness and efficiency and are difficult to optimize. Accordingly, we propose registering MR to CT by mapping the MR to a synthetic CT intermediate (sCT) and using it in an sCT-CT deformable image registration (DIR) that minimizes the sum of squared differences. The resulting deformation field of the sCT-CT DIR is then applied to the MRI to register it with the CT. Twenty-five sets of abdominopelvic imaging data are used for evaluation. The proposed method is compared with standard MI- and LPD-based methods and with the multimodality DIR of a state-of-the-art, commercially available, FDA-cleared clinical software package. The results are compared using global similarity metrics, the modified Hausdorff distance, and the Dice similarity index on six structures. Further, four physicians visually assessed and scored the registered images for registration accuracy. As evident from both the quantitative and qualitative evaluations, the proposed method achieved registration accuracy superior to the LPD- and MI-based methods and can refine the results of the commercial package's DIR when using its output as a starting point. On this basis, we conclude that the proposed registration method is more robust, accurate, and efficient than the MI- and LPD-based methods.
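As a toy sketch of the core mechanics: the SSD objective is mono-modal and trivially computable once the sCT stands in for the MR, and the resulting displacement field can be reused to warp the MR into CT space. The displacement-field convention below (voxel units, one channel per axis) is an assumption, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ssd(a, b):
    """Sum of squared differences: the mono-modality metric enabled by the sCT intermediate."""
    return float(((a - b) ** 2).sum())

def warp(moving, disp):
    """Apply a dense displacement field (voxel units, shape (3, *vol)) to a 3D volume.
    The field estimated from the sCT->CT registration can be reused to warp the MR."""
    grid = np.indices(moving.shape).astype(float)       # identity sampling grid
    coords = grid + disp                                # deformed sampling positions
    return map_coordinates(moving, coords, order=1, mode="nearest")

vol = np.random.rand(8, 8, 8)
disp = np.zeros((3, 8, 8, 8)); disp[0] = 0.5            # shift half a voxel along axis 0
warped = warp(vol, disp)
print(ssd(vol, warped))
```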
Affiliation(s)
- Jin Uk Heo, Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Feifei Zhou, Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA
- Robert Jones, Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Jiamin Zheng, School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu, China
- Xin Song, School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu, China
- Pengjiang Qian, School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu, China
- Atallah Baydoun, Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA; Department of Internal Medicine, Louis Stokes Cleveland VA Medical Center, Cleveland, Ohio, USA
- Melanie S Traughber, Department of Radiation Oncology, Penn State University, Hershey, Pennsylvania, USA
- Jung-Wen Kuo, Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA
- Rose Al Helo, Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Cheryl Thompson, Department of Public Health Sciences, Penn State College of Medicine, Hershey, Pennsylvania, USA
- Norbert Avril, Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Daniel DeVincent, Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Harold Hunt, Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Amit Gupta, Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Navid Faraji, Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Michael Z Kharouta, Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Arash Kardan, Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- David Bitonte, Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Christian B Langmack, Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Min Yao, Department of Radiation Oncology, Penn State University, Hershey, Pennsylvania, USA
- Jennifer Dorth, Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA; Department of Radiation Oncology, Case Western Reserve University, Cleveland, Ohio, USA
- John Nakayama, Department of Obstetrics and Gynecology, Allegheny Health Network, Pittsburgh, Pennsylvania, USA
- Steven E Waggoner, Department of Obstetrics and Gynecology, Cleveland Clinic, Cleveland, Ohio, USA
- Tithi Biswas, Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA; Department of Radiation Oncology, Case Western Reserve University, Cleveland, Ohio, USA
- Eleanor Harris, Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA; Department of Radiation Oncology, Case Western Reserve University, Cleveland, Ohio, USA
- Susan Sandstrom, Department of Radiation Oncology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
- Bryan J Traughber, Department of Radiation Oncology, Penn State University, Hershey, Pennsylvania, USA
- Raymond F Muzic, Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA; Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
4. Hoffmann M, Billot B, Greve DN, Iglesias JE, Fischl B, Dalca AV. SynthMorph: Learning Contrast-Invariant Registration Without Acquired Images. IEEE Trans Med Imaging 2022;41:543-558. [PMID: 34587005] [PMCID: PMC8891043] [DOI: 10.1109/tmi.2021.3116879]
Abstract
We introduce a strategy for learning image registration without acquired imaging data, producing powerful networks agnostic to contrast introduced by magnetic resonance imaging (MRI). While classical registration methods accurately estimate the spatial correspondence between images, they solve an optimization problem for every new image pair. Learning-based techniques are fast at test time but limited to registering images with contrasts and geometric content similar to those seen during training. We propose to remove this dependency on training data by leveraging a generative strategy for diverse synthetic label maps and images that exposes networks to a wide range of variability, forcing them to learn more invariant features. This approach results in powerful networks that accurately generalize to a broad array of MRI contrasts. We present extensive experiments with a focus on 3D neuroimaging, showing that this strategy enables robust and accurate registration of arbitrary MRI contrasts even if the target contrast is not seen by the networks during training. We demonstrate registration accuracy surpassing the state of the art both within and across contrasts, using a single model. Critically, training on arbitrary shapes synthesized from noise distributions results in competitive performance, removing the dependency on acquired data of any kind. Additionally, since anatomical label maps are often available for the anatomy of interest, we show that synthesizing images from these dramatically boosts performance, while still avoiding the need for real intensity images. Our code is available at https://w3id.org/synthmorph.
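The generative strategy can be caricatured in a few lines of NumPy/SciPy: smooth random noise into a label map, then render the same label map with two different random contrasts, which yields perfectly aligned training pairs. Shapes, smoothing scales, and the intensity model below are illustrative assumptions, not the paper's generative model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def random_label_map(shape=(64, 64), n_labels=8, smooth=6.0):
    """Draw one smoothed noise channel per label; the argmax yields random smooth 'shapes'."""
    noise = rng.normal(size=(n_labels, *shape))
    return gaussian_filter(noise, sigma=(0, smooth, smooth)).argmax(axis=0)

def labels_to_image(labels, noise_std=0.05):
    """Assign each label a random mean intensity, then corrupt with noise -> a random 'contrast'."""
    means = rng.uniform(0, 1, size=labels.max() + 1)
    img = means[labels] + rng.normal(0, noise_std, size=labels.shape)
    return gaussian_filter(img, sigma=0.5)              # mild blur mimics partial volume

labels = random_label_map()
moving, fixed = labels_to_image(labels), labels_to_image(labels)  # same shapes, two contrasts
```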
5. Wu J, Zhou S, Yang Q, Zhang Y, Tang X. Multi-modality Large Deformation Diffeomorphic Metric Mapping Driven by Single-modality Images. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:2610-2613. [PMID: 34891788] [DOI: 10.1109/embc46164.2021.9630617]
Abstract
Multi-modality magnetic resonance image (MRI) registration is an essential step in various MRI analysis tasks. However, it is challenging to have all required modalities available in clinical practice, which limits the application of multi-modality registration. This paper tackles this problem by proposing a novel unsupervised deep learning-based multi-modality large deformation diffeomorphic metric mapping (LDDMM) framework that can perform multi-modality registration using only single-modality MRIs. Specifically, an unsupervised image-to-image translation model is trained and used to synthesize the missing-modality MRIs from the available ones. Multi-modality LDDMM is then performed in a multi-channel manner. Experimental results obtained on a publicly accessible dataset confirm the superior performance of the proposed approach. Clinical relevance: this work provides a tool for multi-modality MRI registration with solely single-modality images, which addresses the very common issue of missing modalities in clinical practice.
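A toy sketch of the multi-channel idea: once the missing modality has been synthesized, a single deformation is scored against all channels jointly. The use of SSD and equal channel weights here are assumptions; the paper's LDDMM matching term is more involved.

```python
import numpy as np

def multichannel_ssd(fixed, warped, weights=None):
    """Multi-channel matching term: each channel is one modality (acquired or synthesized),
    and the same deformation is scored against all channels jointly."""
    weights = weights or [1.0 / len(fixed)] * len(fixed)
    return float(sum(w * ((f - m) ** 2).mean() for w, f, m in zip(weights, fixed, warped)))

t1_f, t1_m = np.random.rand(16, 16), np.random.rand(16, 16)   # acquired T1 pair
t2_f, t2_m = 0.8 * t1_f, 0.8 * t1_m                           # stand-in "synthesized" T2 channel
print(multichannel_ssd([t1_f, t2_f], [t1_m, t2_m]))
```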
6. Wu J, Zhou S. A Disentangled Representations based Unsupervised Deformable Framework for Cross-modality Image Registration. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:3531-3534. [PMID: 34892001] [DOI: 10.1109/embc46164.2021.9630778]
Abstract
Cross-modality magnetic resonance image (MRI) registration is a fundamental step in various MRI analysis tasks. However, it remains challenging due to the domain shift between different modalities. In this paper, we propose a fully unsupervised deformable framework for cross-modality image registration through image disentangling. Specifically, MRIs of both modalities are decomposed into a shared domain-invariant content space and domain-specific style spaces via a multi-modal unsupervised image-to-image translation approach. An unsupervised deformable network is then built on the assumption that the intrinsic information in the content space is preserved across modalities. In addition, we propose a novel loss function consisting of two terms, one defined in the original image space and the other in the content space. Validation experiments were performed on two datasets. Compared with two conventional state-of-the-art cross-modality registration methods, the proposed framework shows superior registration performance. Clinical relevance: this work can serve as an auxiliary tool for cross-modality registration in clinical practice.
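A minimal PyTorch sketch of the two-term loss (one term in image space, one in the shared content space) follows; the MSE form of each term and the weighting are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def registration_loss(warped_img, fixed_img, warped_content, fixed_content, lam=0.5):
    """Two-term matching loss: one term in the original image space and one in the
    shared, domain-invariant content space produced by the disentangling encoder."""
    image_term = F.mse_loss(warped_img, fixed_img)
    content_term = F.mse_loss(warped_content, fixed_content)
    return image_term + lam * content_term

wi, fi = torch.rand(1, 1, 32, 32), torch.rand(1, 1, 32, 32)
wc, fc = torch.rand(1, 8, 32, 32), torch.rand(1, 8, 32, 32)   # stand-in content features
print(registration_loss(wi, fi, wc, fc))
```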
7. Szalkowski G, Nie D, Zhu T, Yap PT, Lian J. Synthetic digital reconstructed radiographs for MR-only robotic stereotactic radiation therapy: A proof of concept. Comput Biol Med 2021;138:104917. [PMID: 34688037] [DOI: 10.1016/j.compbiomed.2021.104917]
Abstract
PURPOSE To create synthetic CTs and digitally reconstructed radiographs (DRRs) from MR images that allow fiducial visualization and accurate dose calculation for MR-only radiosurgery. METHODS We developed a machine learning model to create synthetic CTs from pelvic MRs for prostate treatments. This model has previously been shown to generate synthetic CTs with accuracy on par with or better than alternative methods, such as atlas-based registration. Our dataset consisted of 11 paired CT and conventional MR (T2) images used for previous CyberKnife (Accuray, Inc.) radiotherapy treatments. The MR images were pre-processed to mimic the appearance of fiducial-enhancing images. Two models were trained for each parameter case, using a subset of the available image pairs, with the remaining images set aside for testing and validation, to identify the optimal patch size and number of image pairs used for training. Four models were then trained using the identified parameters and used to generate synthetic CTs, which in turn were used to generate DRRs at angles of 45° and 315°, as would be used for a CyberKnife treatment. The synthetic CTs and DRRs were compared with the ground-truth images visually and using the mean squared error (MSE) and peak signal-to-noise ratio (PSNR) to evaluate their similarity. RESULTS The synthetic CTs, as well as the DRRs generated from them, visualized the fiducial markers in the prostate similarly to their true counterparts. No significant difference in fiducial localization was found for either the CTs or the DRRs. Across the 8 DRRs analyzed, the mean MSE between the normalized true and synthetic DRRs was 0.66 ± 0.42% and the mean PSNR for this region was 22.9 ± 3.7 dB. For the full CTs, the mean absolute error (MAE) was 72.9 ± 88.1 HU and the mean PSNR was 31.2 ± 2.2 dB. CONCLUSIONS Our machine learning-based method provides a proof of concept of a way to generate synthetic CTs and DRRs for accurate dose calculation and fiducial localization for use in radiation treatment of the prostate.
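For intuition, a DRR can be approximated as a parallel-beam projection: rotate the volume to the beam angle and integrate along the ray axis. The sketch below, including the crude HU-to-attenuation mapping, is a simplification; CyberKnife DRRs use a divergent-beam geometry.

```python
import numpy as np
from scipy.ndimage import rotate

def drr(ct_hu, angle_deg):
    """Parallel-beam DRR: rotate the volume about the cranio-caudal axis, then
    integrate attenuation (approximated here from HU) along the ray axis."""
    mu = np.clip(ct_hu + 1000.0, 0, None) / 1000.0      # crude HU -> relative attenuation
    rotated = rotate(mu, angle_deg, axes=(1, 2), reshape=False, order=1)
    return rotated.sum(axis=1)                          # line integrals -> 2D projection

def psnr(ref, test):
    mse = ((ref - test) ** 2).mean()
    return float(10 * np.log10(ref.max() ** 2 / mse)) if mse else float("inf")

ct = np.random.uniform(-1000, 1500, size=(32, 32, 32))  # stand-in CT volume in HU
print(drr(ct, 45.0).shape, psnr(drr(ct, 45.0), drr(ct, 45.1)))
```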
Affiliation(s)
- Gregory Szalkowski, Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC, USA
- Dong Nie, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
- Tong Zhu, Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC, USA
- Pew-Thian Yap, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC, USA
- Jun Lian, Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC, USA
8. Lan H, Toga AW, Sepehrband F. Three-dimensional self-attention conditional GAN with spectral normalization for multimodal neuroimaging synthesis. Magn Reson Med 2021;86:1718-1733. [PMID: 33961321] [DOI: 10.1002/mrm.28819]
Abstract
PURPOSE To develop a new 3D generative adversarial network (GAN) designed and optimized for the application of multimodal 3D neuroimaging synthesis. METHODS We present a 3D conditional GAN that uses spectral normalization and feature matching to stabilize the training process and ensure optimization convergence (called SC-GAN). A self-attention module was also added to model the relationships between widely separated image voxels. The performance of the network was evaluated on the ADNI-3 dataset, in which the proposed network was used to predict PET images, fractional anisotropy, and mean diffusivity maps from multimodal MRI. SC-GAN was then applied to a multidimensional diffusion MRI experiment for a super-resolution application. Experimental results were evaluated by normalized RMS error, peak SNR, and structural similarity. RESULTS Overall, SC-GAN outperformed other state-of-the-art GANs, including 3D conditional GAN, in all three tasks across all evaluation metrics. The prediction error of SC-GAN was 18%, 24%, and 29% lower than that of 2D conditional GAN for the fractional anisotropy, PET, and mean diffusivity tasks, respectively. The ablation experiment showed that the major contributors to the improved performance of SC-GAN are the adversarial learning and the self-attention module, followed by the spectral normalization module. In the super-resolution multidimensional diffusion experiment, SC-GAN provided superior prediction compared with 3D U-Net and 3D conditional GAN. CONCLUSION In this work, an efficient end-to-end framework for multimodal 3D medical image synthesis (SC-GAN) is presented. The source code is available at https://github.com/Haoyulance/SC-GAN.
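A 2D sketch of the two stabilizing ingredients named above, spectral normalization and self-attention, in PyTorch (the paper's network is 3D; the 2D reduction and layer sizes are for brevity only):

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention with spectrally normalized 1x1 convolutions,
    modeling relationships between widely separated pixels/voxels."""
    def __init__(self, ch):
        super().__init__()
        self.q = spectral_norm(nn.Conv2d(ch, ch // 8, 1))
        self.k = spectral_norm(nn.Conv2d(ch, ch // 8, 1))
        self.v = spectral_norm(nn.Conv2d(ch, ch, 1))
        self.gamma = nn.Parameter(torch.zeros(1))        # starts as an identity mapping

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)         # (B, HW, C/8)
        k = self.k(x).flatten(2)                         # (B, C/8, HW)
        attn = torch.softmax(q @ k, dim=-1)              # (B, HW, HW) pairwise affinities
        v = self.v(x).flatten(2)                         # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return self.gamma * out + x                      # residual connection

print(SelfAttention2d(32)(torch.randn(1, 32, 16, 16)).shape)
```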
Affiliation(s)
- Haoyu Lan, Laboratory of NeuroImaging, USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Arthur W Toga, Laboratory of NeuroImaging, USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, California, USA; Alzheimer's Disease Research Center, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Farshid Sepehrband, Laboratory of NeuroImaging, USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, California, USA; Alzheimer's Disease Research Center, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
9. Generation of annotated multimodal ground truth datasets for abdominal medical image registration. Int J Comput Assist Radiol Surg 2021;16:1277-1285. [PMID: 33934313] [PMCID: PMC8295129] [DOI: 10.1007/s11548-021-02372-7]
Abstract
PURPOSE Sparsity of annotated data is a major limitation in medical image processing tasks such as registration. Registered multimodal image data are essential for the diagnosis of medical conditions and the success of interventional medical procedures. To overcome the shortage of data, we present a method that allows the generation of annotated multimodal 4D datasets. METHODS We use a CycleGAN network architecture to generate multimodal synthetic data from the 4D extended cardiac-torso (XCAT) phantom and real patient data. Organ masks are provided by the XCAT phantom; therefore, the generated dataset can serve as ground truth for image segmentation and registration. Realistic simulation of respiration and heartbeat is possible within the XCAT framework. To underline the usability as a registration ground truth, a proof of principle registration is performed. RESULTS Compared to real patient data, the synthetic data showed good agreement regarding the image voxel intensity distribution and the noise characteristics. The generated T1-weighted magnetic resonance imaging, computed tomography (CT), and cone beam CT images are inherently co-registered. Thus, the synthetic dataset allowed us to optimize registration parameters of a multimodal non-rigid registration, utilizing liver organ masks for evaluation. CONCLUSION Our proposed framework provides not only annotated but also multimodal synthetic data which can serve as a ground truth for various tasks in medical imaging processing. We demonstrated the applicability of synthetic data for the development of multimodal medical image registration algorithms.
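The property that makes the generated data usable as ground truth, namely that the synthetic image stays co-registered with the phantom's organ masks, is encouraged by CycleGAN's cycle-consistency term, sketched below in PyTorch with stand-in generators.

```python
import torch
import torch.nn.functional as F

def cycle_loss(real_a, real_b, g_ab, g_ba, lam=10.0):
    """CycleGAN-style cycle consistency: translating XCAT -> 'real' -> XCAT (and back)
    should reproduce the input, which discourages the generator from moving anatomy
    and thus keeps the synthetic data aligned with the phantom labels."""
    rec_a = g_ba(g_ab(real_a))                           # A -> B -> A
    rec_b = g_ab(g_ba(real_b))                           # B -> A -> B
    return lam * (F.l1_loss(rec_a, real_a) + F.l1_loss(rec_b, real_b))

g_ab = g_ba = torch.nn.Conv2d(1, 1, 3, padding=1)        # stand-in generators
x = torch.rand(1, 1, 32, 32)
print(cycle_loss(x, x, g_ab, g_ba))
```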
10. Bourbonne V, Jaouen V, Hognon C, Boussion N, Lucia F, Pradier O, Bert J, Visvikis D, Schick U. Dosimetric Validation of a GAN-Based Pseudo-CT Generation for MRI-Only Stereotactic Brain Radiotherapy. Cancers (Basel) 2021;13:1082. [PMID: 33802499] [PMCID: PMC7959466] [DOI: 10.3390/cancers13051082]
Abstract
PURPOSE Stereotactic radiotherapy (SRT) has become widely accepted as a treatment of choice for patients with a small number of brain metastases of acceptable size, allowing for better target dose conformity, high local control rates, and better sparing of organs at risk. An MRI-only workflow could reduce the risk of misalignment between magnetic resonance imaging (MRI) brain studies and computed tomography (CT) scanning for SRT planning, while shortening planning delays. Given the absence of calibrated electron density in MRI, we aimed to assess the equivalence of synthetic CTs generated by a generative adversarial network (GAN) for planning in the brain SRT setting. METHODS All patients with available MRIs treated with intracranial SRT for brain metastases from 2014 to 2018 in our institution were included. After co-registration between the diagnostic MRI and the planning CT, a synthetic CT was generated using a 2D GAN (2D U-Net). Using the initial treatment plan (Pinnacle v9.10, Philips Healthcare), dosimetric comparison was performed using the main dose-volume histogram (DVH) endpoints with respect to ICRU Report 91 guidelines (Dmax, Dmean, D2%, D50%, D98%), as well as local and global gamma analysis with 1%/1 mm, 2%/1 mm, and 2%/2 mm criteria and a 10% threshold relative to the maximum dose. A t-test was used for comparison between the two cohorts (initial and synthetic dose maps). RESULTS 184 patients were included, with 290 treated brain metastases. The mean number of treated lesions per patient was 1 (range 1-6) and the median planning target volume (PTV) was 6.44 cc (range 0.12-45.41). Local and global gamma passing rates (2%/2 mm) were 99.1% (95% CI 98.1-99.4) and 99.7% (95% CI 99.6-99.7), respectively. DVHs were comparable, with no statistically significant differences in the ICRU 91 endpoints. CONCLUSIONS Our study is the first to compare GAN-generated CT scans from diagnostic brain MRIs with the initial CT scans for the planning of brain stereotactic radiotherapy. We found high similarity between the planning CT and the synthetic CT for both the organs at risk and the target volumes. Prospective validation is under investigation at our institution.
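For readers unfamiliar with gamma analysis, a brute-force 2D version is sketched below; clinical tools use optimized 3D implementations with sub-voxel interpolation, so this is a didactic approximation only.

```python
import numpy as np

def gamma_pass_rate(ref, ev, dd=0.02, dta_mm=2.0, spacing_mm=1.0, threshold=0.10):
    """Brute-force 2D global gamma analysis (e.g., 2%/2 mm). For each reference point,
    search evaluated doses within the DTA radius for the minimum gamma value."""
    norm = dd * ref.max()                               # global dose-difference criterion
    r = int(np.ceil(dta_mm / spacing_mm))
    mask = ref > threshold * ref.max()                  # 10% low-dose threshold
    pad = np.pad(ev, r, mode="edge")
    gamma2 = np.full(ref.shape, np.inf)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            dist2 = ((dy ** 2 + dx ** 2) * spacing_mm ** 2) / dta_mm ** 2
            if dist2 > 1.0:                             # outside the DTA search radius
                continue
            shifted = pad[r + dy : r + dy + ref.shape[0], r + dx : r + dx + ref.shape[1]]
            g2 = ((shifted - ref) / norm) ** 2 + dist2
            gamma2 = np.minimum(gamma2, g2)
    return float((gamma2[mask] <= 1.0).mean())          # fraction of points with gamma <= 1

dose = np.random.rand(64, 64)
print(gamma_pass_rate(dose, dose * 1.01))               # ~1.0 for a 1% global scaling
```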
Affiliation(s)
- Vincent Bourbonne, Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France; Laboratoire de Traitement de l'Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Vincent Jaouen, Laboratoire de Traitement de l'Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France; Institut Mines-Télécom Atlantique, 29200 Brest, France
- Clément Hognon, Laboratoire de Traitement de l'Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Nicolas Boussion, Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France; Laboratoire de Traitement de l'Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- François Lucia, Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France; Laboratoire de Traitement de l'Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Olivier Pradier, Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France; Laboratoire de Traitement de l'Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Julien Bert, Laboratoire de Traitement de l'Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Dimitris Visvikis, Laboratoire de Traitement de l'Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Ulrike Schick, Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France; Laboratoire de Traitement de l'Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
11. Bermudez C, Remedios SW, Ramadass K, McHugo M, Heckers S, Huo Y, Landman BA. Generalizing deep whole-brain segmentation for post-contrast MRI with transfer learning. J Med Imaging (Bellingham) 2020;7:064004. [PMID: 33381612] [PMCID: PMC7757519] [DOI: 10.1117/1.jmi.7.6.064004]
Abstract
Purpose: Generalizability is an important problem in deep neural networks, especially with the variability of data acquisition in clinical magnetic resonance imaging (MRI). Recently, the spatially localized atlas network tiles (SLANT) method has been shown to effectively segment whole-brain, non-contrast T1w MRI into 132 volumetric labels. Transfer learning (TL) is a commonly used domain adaptation tool for updating neural network weights to account for local factors, yet it risks degrading performance on the original validation/test cohorts. Approach: We explore TL using unlabeled clinical data to address these concerns in the context of adapting SLANT to scanning protocol variations. We optimize whole-brain segmentation on heterogeneous clinical data by leveraging 480 unlabeled pairs of clinically acquired T1w MRI with and without intravenous contrast. We use labels generated on the pre-contrast image to train on the post-contrast image in a five-fold cross-validation framework. We further validated on a withheld test set of 29 paired scans from a different acquisition domain. Results: Using TL, we improve reproducibility across imaging pairs, measured by the reproducibility Dice coefficient (rDSC) between the pre- and post-contrast images. We show an increase over the original SLANT algorithm (rDSC 0.82 versus 0.72) and the FreeSurfer v6.0.1 segmentation pipeline (rDSC = 0.53). We demonstrate the impact of this work by decreasing the root-mean-squared error of volumetric estimates of the hippocampus between paired images of the same subject by 67%. Conclusion: This work demonstrates a pipeline that uses unlabeled clinical data to help algorithms optimized for research data generalize toward heterogeneous clinical acquisitions.
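The reproducibility metric is straightforward to compute once both segmentations exist; a NumPy sketch (label conventions assumed) follows.

```python
import numpy as np

def rdsc(seg_pre, seg_post, n_labels):
    """Reproducibility Dice: mean Dice across labels between segmentations of the
    pre- and post-contrast images of the same subject (after alignment)."""
    scores = []
    for lab in range(1, n_labels):                      # skip background label 0
        a, b = seg_pre == lab, seg_post == lab
        denom = a.sum() + b.sum()
        if denom:
            scores.append(2.0 * np.logical_and(a, b).sum() / denom)
    return float(np.mean(scores))

pre = np.random.randint(0, 4, size=(16, 16, 16))
post = pre.copy(); post[0] = 0                          # perturb one slice
print(rdsc(pre, post, n_labels=4))
```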
Affiliation(s)
- Camilo Bermudez, Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Samuel W. Remedios, Henry Jackson Foundation, Center for Neuroscience and Regenerative Medicine, Bethesda, Maryland, United States
- Karthik Ramadass, Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee, United States
- Maureen McHugo, Vanderbilt University Medical Center, Department of Psychiatry and Behavioral Sciences, Nashville, Tennessee, United States
- Stephan Heckers, Vanderbilt University Medical Center, Department of Psychiatry and Behavioral Sciences, Nashville, Tennessee, United States
- Yuankai Huo, Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Bennett A. Landman, Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States; Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee, United States; Vanderbilt University Medical Center, Department of Psychiatry and Behavioral Sciences, Nashville, Tennessee, United States
12. Keikhosravi A, Li B, Liu Y, Conklin MW, Loeffler AG, Eliceiri KW. Non-disruptive collagen characterization in clinical histopathology using cross-modality image synthesis. Commun Biol 2020;3:414. [PMID: 32737412] [PMCID: PMC7395097] [DOI: 10.1038/s42003-020-01151-5]
Abstract
The importance of fibrillar collagen topology and organization in disease progression and prognostication in different types of cancer has been characterized extensively in many research studies. These explorations have used either specialized imaging approaches, such as specific stains (e.g., picrosirius red), or advanced and costly imaging modalities (e.g., second harmonic generation (SHG) imaging) that are not currently in the clinical workflow. To facilitate the analysis of stromal biomarkers in clinical workflows, it would be ideal to have technical approaches that can characterize fibrillar collagen on the standard H&E-stained slides produced during routine diagnostic work. Here, we present a machine learning-based stromal collagen image synthesis algorithm that can be incorporated into the existing H&E-based histopathology workflow. Specifically, this solution applies a convolutional neural network (CNN) directly to clinically standard H&E bright-field images to extract information about collagen fiber arrangement and alignment, without requiring additional specialized stains, imaging systems, or equipment.
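Deployment of such a model on large bright-field images is typically tile-based; the sketch below shows the pattern with a stand-in one-layer "network", not the authors' trained CNN, and a non-overlapping tiling chosen for brevity.

```python
import torch
import torch.nn as nn

def tile_inference(model, slide, tile=256):
    """Run a synthesis CNN tile-by-tile over a large RGB bright-field image, producing a
    single-channel collagen map of the same spatial size."""
    _, h, w = slide.shape
    out = torch.zeros(1, h, w)
    with torch.no_grad():
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                patch = slide[:, y:y + tile, x:x + tile].unsqueeze(0)
                out[:, y:y + tile, x:x + tile] = model(patch).squeeze(0)
    return out

model = nn.Conv2d(3, 1, 3, padding=1)                   # stand-in for the trained H&E->collagen CNN
he = torch.rand(3, 512, 512)                            # stand-in H&E region (RGB)
print(tile_inference(model, he).shape)                  # torch.Size([1, 512, 512])
```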
Affiliation(s)
- Adib Keikhosravi, Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA; Laboratory for Optical and Computational Instrumentation, University of Wisconsin-Madison, Madison, WI, USA
- Bin Li, Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA; Laboratory for Optical and Computational Instrumentation, University of Wisconsin-Madison, Madison, WI, USA; Morgridge Institute for Research, Madison, WI, USA
- Yuming Liu, Laboratory for Optical and Computational Instrumentation, University of Wisconsin-Madison, Madison, WI, USA
- Matthew W Conklin, Department of Cell and Regenerative Biology, University of Wisconsin-Madison, Madison, WI, USA
- Agnes G Loeffler, Department of Pathology, MetroHealth Medical Center, Cleveland, OH, USA
- Kevin W Eliceiri, Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA; Laboratory for Optical and Computational Instrumentation, University of Wisconsin-Madison, Madison, WI, USA; Morgridge Institute for Research, Madison, WI, USA; Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, USA
13. Dubost F, de Bruijne M, Nardin M, Dalca AV, Donahue KL, Giese AK, Etherton MR, Wu O, de Groot M, Niessen W, Vernooij M, Rost NS, Schirmer MD. Multi-atlas image registration of clinical data with automated quality assessment using ventricle segmentation. Med Image Anal 2020;63:101698. [PMID: 32339896] [PMCID: PMC7275913] [DOI: 10.1016/j.media.2020.101698]
Abstract
Registration is a core component of many imaging pipelines. In the case of clinical scans, with lower resolution and sometimes substantial motion artifacts, registration can produce poor results. Visual assessment of registration quality in large clinical datasets is inefficient. In this work, we propose to automatically assess the quality of registration to an atlas in clinical FLAIR MRI scans of the brain. The method consists of automatically segmenting the ventricles of a given scan using a neural network and comparing the segmentation with the atlas ventricles propagated to image space. We used the proposed method to improve clinical image registration to a general atlas by computing multiple registrations - one directly to the general atlas and others via different age-specific atlases - and then selecting the registration that yielded the highest ventricle overlap. Finally, as an example application of the complete pipeline, a voxelwise map of white matter hyperintensity burden was computed using only the scans with registration quality above a predefined threshold. The methods were evaluated on a single-site dataset of more than 1000 scans, as well as on a multi-center dataset comprising 142 clinical scans from 12 sites. The automated ventricle segmentation reached a Dice coefficient with manual annotations of 0.89 in the single-site dataset and 0.83 in the multi-center dataset. Registration via age-specific atlases improved ventricle overlap compared with direct registration to the general atlas (Dice similarity coefficient increase of up to 0.15). Experiments also showed that selecting scans with the registration quality assessment method improved the quality of the average white matter hyperintensity burden maps compared with using all scans. In this work, we demonstrated the utility of an automated tool for assessing image registration quality in clinical scans. This quality assessment step could ultimately assist in the translation of automated neuroimaging pipelines to the clinic.
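A compact sketch of the selection logic (segment the ventricles, score each candidate registration by overlap, keep the best or reject the scan) is given below; the 0.7 threshold and the array shapes are invented for illustration.

```python
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def select_registration(cnn_ventricles, propagated_atlas_ventricles, min_dice=0.7):
    """Score each candidate registration (direct, or via an age-specific atlas) by the
    overlap between CNN-segmented ventricles and the atlas ventricles propagated to
    image space; keep the best one, or reject the scan if all fall below threshold."""
    scores = [dice(cnn_ventricles, m) for m in propagated_atlas_ventricles]
    best = int(np.argmax(scores))
    return (best, scores[best]) if scores[best] >= min_dice else (None, scores[best])

seg = np.zeros((32, 32, 32), bool); seg[10:20, 10:20, 10:20] = True
cand = [np.roll(seg, s, axis=0) for s in (0, 3, 8)]     # three candidate propagations
print(select_registration(seg, cand))                   # (0, 1.0)
```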
Affiliation(s)
- Florian Dubost, J. Philip Kistler Stroke Research Center, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, USA; Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Marleen de Bruijne, Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Marco Nardin, J. Philip Kistler Stroke Research Center, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Adrian V Dalca, Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Cambridge, USA
- Kathleen L Donahue, J. Philip Kistler Stroke Research Center, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Anne-Katrin Giese, J. Philip Kistler Stroke Research Center, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Mark R Etherton, J. Philip Kistler Stroke Research Center, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Ona Wu, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Marius de Groot, Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands; Department of Epidemiology, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Wiro Niessen, Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands; Department of Imaging Physics, Faculty of Applied Science, TU Delft, Delft, The Netherlands
- Meike Vernooij, Department of Radiology and Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands; Department of Epidemiology, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Natalia S Rost, J. Philip Kistler Stroke Research Center, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Markus D Schirmer, J. Philip Kistler Stroke Research Center, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, USA; Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Cambridge, USA; Department of Population Health Sciences, German Centre for Neurodegenerative Diseases (DZNE), Germany
14. McKenzie EM, Santhanam A, Ruan D, O'Connor D, Cao M, Sheng K. Multimodality image registration in the head-and-neck using a deep learning-derived synthetic CT as a bridge. Med Phys 2020;47:1094-1104. [PMID: 31853975] [PMCID: PMC7067662] [DOI: 10.1002/mp.13976]
Abstract
PURPOSE To develop and demonstrate the efficacy of a novel head-and-neck multimodality image registration technique using deep learning-based cross-modality synthesis. METHODS AND MATERIALS Twenty-five head-and-neck patients received magnetic resonance (MR) and computed tomography (CT) scans (CTaligned) on the same day with the same immobilization. Five-fold cross-validation was used with all of the MR-CT pairs to train a neural network to generate synthetic CTs from MR images. Twenty-four of the 25 patients also had a separate CT without immobilization (CTnon-aligned), which was used for testing. Each CTnon-aligned was deformed to the synthetic CT and compared with CTnon-aligned registered to MR. The same registrations were performed from MR to CTnon-aligned and from synthetic CT to CTnon-aligned. All registrations used B-splines to model the deformation and mutual information as the objective. Results were evaluated using the 95% Hausdorff distance between spinal cord contours, landmark error, inverse consistency, and the Jacobian determinant of the estimated deformation fields. RESULTS When large initial rigid misalignment is present, registering CT to the MRI-derived synthetic CT aligns the cord better than a direct registration. The average landmark error decreased from 9.8 ± 3.1 mm for MR→CTnon-aligned to 6.0 ± 2.1 mm for CTsynth→CTnon-aligned deformable registrations. In the CT-to-MR direction, the landmark error decreased from 10.0 ± 4.3 mm for CTnon-aligned→MR to 6.6 ± 2.0 mm for CTnon-aligned→CTsynth deformable registrations. The Jacobian determinant had an average value of 0.98. The proposed method also demonstrated improved inverse consistency over the direct method. CONCLUSIONS We showed that using a deep learning-derived synthetic CT in lieu of the MR for MR→CT and CT→MR deformable registration offers superior results compared with direct multimodal registration.
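A rough SimpleITK equivalent of the described setup (B-spline transform, mutual information objective) might look like the following; the mesh size, sampling, and optimizer settings are guesses for illustration, not the paper's configuration.

```python
import SimpleITK as sitk

def bspline_mi_registration(fixed_path, moving_path):
    """B-spline deformable registration with a Mattes mutual information objective,
    approximating the registration setup described above."""
    fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)

    tx = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])   # control-point mesh (assumed)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
    reg.SetInitialTransform(tx, inPlace=True)
    reg.SetInterpolator(sitk.sitkLinear)

    final_tx = reg.Execute(fixed, moving)
    return sitk.Resample(moving, fixed, final_tx, sitk.sitkLinear, 0.0)

# usage (file names are placeholders):
# aligned = bspline_mi_registration("ct_nonaligned.nii.gz", "ct_synth.nii.gz")
```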
Affiliation(s)
- Elizabeth M McKenzie, Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90024, USA
- Anand Santhanam, Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90024, USA
- Dan Ruan, Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90024, USA
- Daniel O'Connor, Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90024, USA
- Minsong Cao, Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90024, USA
- Ke Sheng, Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90024, USA
15. Schilling KG, Blaber J, Huo Y, Newton A, Hansen C, Nath V, Shafer AT, Williams O, Resnick SM, Rogers B, Anderson AW, Landman BA. Synthesized b0 for diffusion distortion correction (Synb0-DisCo). Magn Reson Imaging 2019;64:62-70. [PMID: 31075422] [DOI: 10.1016/j.mri.2019.05.008]
Abstract
Diffusion magnetic resonance images typically suffer from spatial distortions due to susceptibility-induced off-resonance fields, which may affect the geometric fidelity of the reconstructed volume and cause mismatches with anatomical images. State-of-the-art susceptibility correction (for example, FSL's TOPUP algorithm) typically requires data acquired twice with reversed phase-encoding directions, referred to as blip-up blip-down acquisitions, in order to estimate an undistorted volume. Unfortunately, not all imaging protocols include a blip-up blip-down acquisition, and these cannot take advantage of state-of-the-art susceptibility and motion correction capabilities. In this study, we aim to enable TOPUP-like processing with historical and/or limited diffusion imaging data that include only a structural image and a single-blip diffusion image. We utilize deep learning to synthesize an undistorted non-diffusion-weighted image from the structural image, and use the undistorted synthetic image as an anatomical target for distortion correction. We evaluate the efficacy of this approach (named Synb0-DisCo) and show that our distortion correction process results in better matching of the geometry of undistorted anatomical images, reduces variation in diffusion modeling, and is practically equivalent to having both blip-up and blip-down non-diffusion-weighted images.
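In practice, pipelines built on this idea pair the acquired b0 with the synthetic b0 and hand both to topup, marking the synthetic volume as distortion-free via a zero readout time. A hedged nibabel sketch follows; the file names and the 0.062 s readout time are placeholders, and the exact Synb0-DisCo pipeline may differ.

```python
import numpy as np
import nibabel as nib

# Stack the acquired (distorted) b0 with the synthesized undistorted b0 so they can be
# fed to FSL's topup as if they were a phase-reversed pair (file names are placeholders).
b0 = nib.load("b0_distorted.nii.gz")
syn = nib.load("b0_synthetic.nii.gz")
pair = np.stack([b0.get_fdata(), syn.get_fdata()], axis=-1)
nib.save(nib.Nifti1Image(pair, b0.affine, b0.header), "b0_pair.nii.gz")

# acqparams.txt: phase-encode vector + total readout time, one row per volume.
np.savetxt("acqparams.txt", [[0, 1, 0, 0.062],   # acquired b0 (readout time from protocol)
                             [0, 1, 0, 0.0]],    # synthetic b0: treated as undistorted
           fmt="%g")
# then e.g.: topup --imain=b0_pair --datain=acqparams.txt --config=b02b0.cnf --out=topup_results
```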
Affiliation(s)
- Kurt G Schilling, Vanderbilt University Institute of Imaging Science, Vanderbilt University, Nashville, TN, United States of America
- Justin Blaber, Electrical Engineering & Computer Science, Vanderbilt University, Nashville, TN, United States of America
- Yuankai Huo, Department of Electrical Engineering, Vanderbilt University, Nashville, TN, United States of America
- Allen Newton, Vanderbilt University Institute of Imaging Science, Vanderbilt University, Nashville, TN, United States of America; Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, United States of America
- Colin Hansen, Department of Electrical Engineering, Vanderbilt University, Nashville, TN, United States of America
- Vishwesh Nath, Electrical Engineering & Computer Science, Vanderbilt University, Nashville, TN, United States of America
- Andrea T Shafer, Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD, United States of America
- Owen Williams, Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD, United States of America
- Susan M Resnick, Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD, United States of America
- Baxter Rogers, Vanderbilt University Institute of Imaging Science, Vanderbilt University, Nashville, TN, United States of America
- Adam W Anderson, Vanderbilt University Institute of Imaging Science, Vanderbilt University, Nashville, TN, United States of America
- Bennett A Landman, Vanderbilt University Institute of Imaging Science, Vanderbilt University, Nashville, TN, United States of America; Department of Electrical Engineering, Vanderbilt University, Nashville, TN, United States of America; Electrical Engineering & Computer Science, Vanderbilt University, Nashville, TN, United States of America
16. Reinhold JC, Dewey BE, Carass A, Prince JL. Evaluating the Impact of Intensity Normalization on MR Image Synthesis. Proc SPIE Int Soc Opt Eng 2019;10949. [PMID: 31551645] [DOI: 10.1117/12.2513089]
Abstract
Image synthesis learns a transformation from the intensity features of an input image to yield a different tissue contrast in the output image. This process has been shown to have application in many medical image analysis tasks, including imputation, registration, and segmentation. To carry out synthesis, the intensities of the input images are typically scaled (i.e., normalized), both in training to learn the transformation and in testing when applying the transformation, but it is not presently known what type of input scaling is optimal. In this paper, we consider seven different intensity normalization algorithms and three different synthesis methods to evaluate the impact of normalization. Our experiments demonstrate that intensity normalization as a preprocessing step improves the synthesis results across all investigated synthesis algorithms. Furthermore, we show evidence that suggests intensity normalization is vital for successful deep learning-based MR image synthesis.
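Two of the simplest normalization variants studied in this space can be written in a few lines; the exact algorithms compared in the paper differ, so treat these as representative examples only.

```python
import numpy as np

def zscore(img, mask=None):
    """Z-score normalization: zero mean, unit variance over the (brain) mask."""
    vals = img[mask] if mask is not None else img
    return (img - vals.mean()) / vals.std()

def percentile_rescale(img, p_lo=1.0, p_hi=99.0):
    """Robust min-max scaling to [0, 1] between two intensity percentiles."""
    lo, hi = np.percentile(img, [p_lo, p_hi])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

mr = np.random.gamma(2.0, 50.0, size=(32, 32, 32))      # stand-in MR intensities
print(zscore(mr).std(), percentile_rescale(mr).max())   # ~1.0 and 1.0
```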
Affiliation(s)
- Jacob C Reinhold, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Blake E Dewey, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; F.M. Kirby Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD 21205, USA
- Aaron Carass, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Jerry L Prince, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
17. Birgani F, Chegeni N, Birgani MT, Fatehi D, Akbarizadeh G, Tahmasbi M. Introduction of a simple algorithm to create synthetic-computed tomography of the head from magnetic resonance imaging. J Med Signals Sens 2019;9:123-129. [PMID: 31316906] [PMCID: PMC6601231] [DOI: 10.4103/jmss.jmss_26_18]
Abstract
Background: Recently, magnetic resonance imaging (MRI)-based radiotherapy has become a popular field of study for treatment planning purposes. In this study, a simple algorithm was introduced to create synthetic computed tomography (sCT) images of the head from MRI. Methods: A simple atlas-based method was proposed to create sCT images from paired T1/T2-weighted MRI and bone/brain-window CT. The dataset included 10 patients with glioblastoma multiforme and 10 patients with other brain tumors. To generate an sCT image, each MR image in the dataset was first registered to the target MR; the resulting transformation was then applied to the corresponding CT to create a set of deformed CTs. The deformed CTs were then fused to generate a single sCT image. The sCT images were compared with the real CT images using geometric measures (mean absolute error [MAE] and Dice similarity coefficient of bone [DSCbone]) and a Hounsfield unit gamma index (ΓHU) with 100 HU/2 mm criteria. Results: The evaluations carried out with MAE, DSCbone, and ΓHU showed good agreement between the synthetic and real CT images. The results were in the range of 78-93 HU for MAE and 0.80-0.89 for DSCbone. The ΓHU also showed that approximately 91%-93% of pixels fulfilled the 100 HU/2 mm criteria for brain tumors. Conclusion: This method showed that the MR sequence (T1w or T2w) should be selected depending on the type of tumor. In addition, the brain-window synthetic CTs agree better with the real CT than the bone-window sCT images.
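A sketch of the fusion and MAE evaluation steps in NumPy; the voxelwise-median fusion rule is an assumption, since the abstract does not state how the deformed CTs are combined.

```python
import numpy as np

def fuse_deformed_cts(deformed_cts):
    """Fuse the CTs deformed into the target space into one sCT via a voxelwise median,
    which is robust to individual misregistrations (fusion rule is an assumption)."""
    return np.median(np.stack(deformed_cts), axis=0)

def mae_hu(sct, ct):
    """Mean absolute error in HU between synthetic and real CT."""
    return float(np.abs(sct - ct).mean())

cts = [np.random.normal(40, 10, (16, 16, 16)) for _ in range(10)]  # stand-in deformed CTs (HU)
sct = fuse_deformed_cts(cts)
print(mae_hu(sct, cts[0]))
```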
18. Image synthesis-based multi-modal image registration framework by using deep fully convolutional networks. Med Biol Eng Comput 2018;57:1037-1048. [PMID: 30523534] [DOI: 10.1007/s11517-018-1924-y]
Abstract
Multi-modal image registration has significant meaning in clinical diagnosis, treatment planning, and image-guided surgery. Since different modalities exhibit different characteristics, finding a fast and accurate correspondence between images of different modalities is still a challenge. In this paper, we propose an image synthesis-based multi-modal registration framework. Image synthesis is performed by a ten-layer fully convolutional network (FCN) composed of 10 convolutional layers combined with batch normalization (BN) and rectified linear units (ReLU), which can be trained to learn an end-to-end mapping from one modality to the other. After cross-modality image synthesis, multi-modal registration can be transformed into mono-modal registration, which can be solved by methods with lower computational complexity, such as the sum of squared differences (SSD). We tested our method on T1-weighted vs T2-weighted, T1-weighted vs PD, and T2-weighted vs PD image registrations with BrainWeb phantom data and IXI real patient data. The results show that our framework achieves higher registration accuracy than state-of-the-art multi-modal image registration methods, such as local mutual information (LMI) and α-mutual information (α-MI). The average registration errors of our method in the experiments with IXI real patient data were 1.19, 2.23, and 1.57, compared with 1.53, 2.60, and 2.36 for LMI and 1.34, 2.39, and 1.76 for α-MI in T2-weighted vs PD, T1-weighted vs PD, and T1-weighted vs T2-weighted registration, respectively. The deep FCN model can capture the complex nonlinear relationship between different modalities, discover complex structural representations automatically through a large number of trainable mappings and parameters, and perform accurate image synthesis. Combined with a mono-modal registration method (SSD), the framework achieves fast and robust multi-modal medical image registration. Graphical abstract: the workflow of the proposed multi-modal image registration framework.
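The synthesis network as described (10 convolutional layers with BN and ReLU, trained end-to-end) is easy to write down in PyTorch; the kernel sizes and channel widths below are assumptions, as the abstract does not specify them.

```python
import torch
import torch.nn as nn

def make_fcn(n_layers=10, ch=64):
    """Plain fully convolutional synthesis network: Conv + BN + ReLU blocks, with a
    final convolution mapping back to one channel (e.g., T1-weighted -> T2-weighted)."""
    layers = [nn.Conv2d(1, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True)]
    for _ in range(n_layers - 2):
        layers += [nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True)]
    layers += [nn.Conv2d(ch, 1, 3, padding=1)]          # no activation on the output
    return nn.Sequential(*layers)

fcn = make_fcn()
t1 = torch.rand(1, 1, 64, 64)
print(fcn(t1).shape)                                    # same spatial size: (1, 1, 64, 64)
```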
19
Iglesias JE, Modat M, Peter L, Stevens A, Annunziata R, Vercauteren T, Lein E, Fischl B, Ourselin S. Joint registration and synthesis using a probabilistic model for alignment of MRI and histological sections. Med Image Anal 2018; 50:127-144. [PMID: 30282061 PMCID: PMC6742511 DOI: 10.1016/j.media.2018.09.002] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2017] [Revised: 08/30/2018] [Accepted: 09/05/2018] [Indexed: 11/30/2022]
Abstract
Nonlinear registration of 2D histological sections with corresponding slices of MRI data is a critical step of 3D histology reconstruction algorithms. This registration is difficult due to the large differences in image contrast and resolution, as well as the complex nonrigid deformations and artefacts produced when sectioning the sample and mounting it on the glass slide. It has been shown in brain MRI registration that better spatial alignment across modalities can be obtained by synthesising one modality from the other and then using intra-modality registration metrics, rather than by using information theory based metrics to solve the problem directly. However, such an approach typically requires a database of aligned images from the two modalities, which is very difficult to obtain for histology and MRI. Here, we overcome this limitation with a probabilistic method that simultaneously solves for deformable registration and synthesis directly on the target images, without requiring any training data. The method is based on a probabilistic model in which the MRI slice is assumed to be a contrast-warped, spatially deformed version of the histological section. We use approximate Bayesian inference to iteratively refine the probabilistic estimate of the synthesis and the registration, while accounting for each other’s uncertainty. Moreover, manually placed landmarks can be seamlessly integrated in the framework for increased performance and robustness. Experiments on a synthetic dataset of MRI slices show that, compared with mutual information based registration, the proposed method makes it possible to use a much more flexible deformation model in the registration to improve its accuracy, without compromising robustness. Moreover, our framework also exploits information in manually placed landmarks more efficiently than mutual information: landmarks constrain the deformation field in both methods, but in our algorithm, it also has a positive effect on the synthesis – which further improves the registration. We also show results on two real, publicly available datasets: the Allen and BigBrain atlases. In both of them, the proposed method provides a clear improvement over mutual information based registration, both qualitatively (visual inspection) and quantitatively (registration error measured with pairs of manually annotated landmarks).
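The alternation at the heart of the model (estimate the contrast mapping given the current alignment, then re-register with a mono-modal criterion) can be caricatured in a few lines. The toy below substitutes a polynomial contrast map and an integer-shift search for the paper's Bayesian inference and deformable model; it sketches the alternation only:

```python
import numpy as np

def fit_contrast(histo, mri, deg=3):
    """Regress MRI intensities on the currently aligned histology
    intensities to estimate a contrast-warping polynomial."""
    c = np.polyfit(histo.ravel(), mri.ravel(), deg)
    return lambda x: np.polyval(c, x)

def best_shift(synth, mri, radius=3):
    """Exhaustive local search over integer shifts minimizing SSD between
    the synthesized MRI and the real MRI slice."""
    best, arg = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            err = ((np.roll(synth, (dy, dx), axis=(0, 1)) - mri) ** 2).mean()
            if err < best:
                best, arg = err, (dy, dx)
    return arg

def joint_register_synthesize(histo, mri, iters=5):
    """Alternate synthesis and registration, accumulating the shift."""
    shift = (0, 0)
    for _ in range(iters):
        warped = np.roll(histo, shift, axis=(0, 1))
        synth = fit_contrast(warped, mri)(warped)
        dy, dx = best_shift(synth, mri)
        shift = (shift[0] + dy, shift[1] + dx)
    return shift
```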
Affiliation(s)
- Juan Eugenio Iglesias: Translational Imaging Group, Centre for Medical Image Computing, University College London, UK
- Marc Modat: Translational Imaging Group, Centre for Medical Image Computing, University College London, UK
- Loïc Peter: Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
- Allison Stevens: Martinos Center for Biomedical Imaging, Harvard Medical School and Massachusetts General Hospital, USA
- Roberto Annunziata: Translational Imaging Group, Centre for Medical Image Computing, University College London, UK
- Tom Vercauteren: Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
- Ed Lein: Allen Institute for Brain Science, USA
- Bruce Fischl: Martinos Center for Biomedical Imaging, Harvard Medical School and Massachusetts General Hospital, USA; Computer Science and AI Lab, Massachusetts Institute of Technology, USA
- Sebastien Ourselin: Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
20
Xiang L, Wang Q, Nie D, Zhang L, Jin X, Qiao Y, Shen D. Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image. Med Image Anal 2018; 47:31-44. [PMID: 29674235 PMCID: PMC6410565 DOI: 10.1016/j.media.2018.03.011] [Citation(s) in RCA: 108] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2017] [Revised: 03/17/2018] [Accepted: 03/26/2018] [Indexed: 02/01/2023]
Abstract
Recently, increasing attention has been drawn to the field of medical image synthesis across modalities. Among these tasks, the synthesis of computed tomography (CT) images from T1-weighted magnetic resonance (MR) images is of particular importance, although the mapping between them is highly complex due to the large appearance gap between the two modalities. In this work, we tackle this MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate feature maps from MR images and transform these feature maps forward through the convolutional layers of the network. We can further compute a tentative CT synthesis midway through the flow of feature maps, and then embed this tentative CT synthesis result back into the feature maps. This embedding operation yields better feature maps, which are further transformed forward in the DECNN. After repeating this embedding procedure several times in the network, we eventually synthesize the final CT image at the end of the DECNN. We validated our proposed method on both brain and prostate imaging datasets, comparing it with state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) achieves superior performance, in terms of both the perceptive quality of the synthesized CT image and the run-time cost of synthesizing a CT image.
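The embedding operation is the distinctive ingredient: a tentative CT is decoded midway and concatenated back into the feature stream so that later layers can refine it. A compact PyTorch sketch, with channel counts and block depth chosen for illustration:

```python
import torch
import torch.nn as nn

class EmbeddingBlock(nn.Module):
    """One embedding step: predict a tentative CT from the current feature
    maps, then concatenate it back so later layers can refine it."""
    def __init__(self, ch):
        super().__init__()
        self.to_ct = nn.Conv2d(ch, 1, 1)
        self.refine = nn.Sequential(
            nn.Conv2d(ch + 1, ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, feats):
        ct = self.to_ct(feats)
        feats = self.refine(torch.cat([feats, ct], dim=1))
        return feats, ct  # tentative CTs could also drive deep supervision

class DECNNSketch(nn.Module):
    def __init__(self, ch=64, n_embed=3):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.blocks = nn.ModuleList(EmbeddingBlock(ch) for _ in range(n_embed))
        self.head = nn.Conv2d(ch, 1, 1)

    def forward(self, mr):
        feats = self.stem(mr)
        for blk in self.blocks:
            feats, _ = blk(feats)
        return self.head(feats)
```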
Affiliation(s)
- Lei Xiang: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Qian Wang: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Dong Nie: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Lichi Zhang: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Xiyao Jin: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Yu Qiao: Shenzhen Key Lab of Computer Vision & Pattern Recognition, Shenzhen Institutes of Advanced Technology, CAS, Shenzhen, China
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
21
Cao X, Yang J, Gao Y, Wang Q, Shen D. Region-adaptive Deformable Registration of CT/MRI Pelvic Images via Learning-based Image Synthesis. IEEE Trans Image Process 2018; 27. [PMID: 29994091 PMCID: PMC6165687 DOI: 10.1109/tip.2018.2820424] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Registration of pelvic CT and MRI is highly desired as it can facilitate effective fusion of the two modalities for prostate cancer radiation therapy, i.e., using CT for dose planning and MRI for accurate organ delineation. However, due to the large inter-modality appearance gaps and the high shape/appearance variations of pelvic organs, pelvic CT/MRI registration is highly challenging. In this paper, we propose a region-adaptive deformable registration method for multi-modal pelvic image registration. Specifically, to handle the large appearance gaps, we first perform both CT-to-MRI and MRI-to-CT image synthesis by multi-target regression forest (MT-RF). Then, to use the complementary anatomical information in the two modalities for steering the registration, we select key points automatically from both modalities and use them together to guide correspondence detection in a region-adaptive fashion. That is, we mainly use CT to establish correspondences for bone regions, and MRI to establish correspondences for soft-tissue regions. The number of key points is increased gradually during the registration, to hierarchically guide the symmetric estimation of the deformation fields. Experiments on both intra-subject and inter-subject deformable registration show improved performance compared with state-of-the-art multi-modal registration methods, which demonstrates the potential of our method for routine prostate cancer radiation therapy.
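The region-adaptive selection (bone correspondences from CT, soft-tissue correspondences from MRI) can be sketched as follows; the gradient-magnitude key-point detector and the 300 HU bone threshold are simple stand-ins for the paper's MT-RF-based pipeline:

```python
import numpy as np

def keypoints(img, mask, n):
    """Pick the n voxels with the largest gradient magnitude inside a mask,
    used here as a stand-in for the paper's key-point detector."""
    g = np.linalg.norm(np.stack(np.gradient(img.astype(float))), axis=0)
    g[~mask] = -np.inf
    idx = np.argsort(g.ravel())[::-1][:n]
    return np.stack(np.unravel_index(idx, img.shape), axis=1)

def region_adaptive_keypoints(ct, mri, n, bone_hu=300.0):
    """Bone correspondences come from CT (HU threshold), soft tissue
    correspondences from MRI; n grows over registration iterations."""
    bone = ct > bone_hu
    return keypoints(ct, bone, n // 2), keypoints(mri, ~bone, n - n // 2)
```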
22
Bermudez C, Plassard AJ, Davis TL, Newton AT, Resnick SM, Landman BA. Learning Implicit Brain MRI Manifolds with Deep Learning. Proc SPIE Int Soc Opt Eng 2018; 10574:105741L. [PMID: 29887659 PMCID: PMC5990281 DOI: 10.1117/12.2293515] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
An important task in image processing and neuroimaging is to extract quantitative information from acquired images in order to make observations about the presence of disease or markers of development in populations. Having a low-dimensional manifold of an image allows for easier statistical comparisons between groups and the synthesis of group representatives. Previous studies have sought to identify the best mapping of brain MRI to a low-dimensional manifold, but have been limited by assumptions of explicit similarity measures. In this work, we use deep learning techniques to investigate implicit manifolds of normal brains and generate new, high-quality images. We explore implicit manifolds by addressing the problems of image synthesis and image denoising as important tools in manifold learning. First, we propose the unsupervised synthesis of T1-weighted brain MRI using a generative adversarial network (GAN) trained on 528 examples of 2D axial slices of brain MRI. Synthesized images were first shown to be unique by performing a cross-correlation with the training set. Real and synthesized images were then assessed in a blinded manner by two imaging experts, who provided image quality scores from 1 to 5. The quality scores of the synthetic images showed substantial overlap with those of the real images. Moreover, we use an autoencoder with skip connections for image denoising, showing that the proposed method results in a higher PSNR than FSL SUSAN after denoising. This work shows the power of artificial networks to synthesize realistic imaging data, which can be used to improve image processing techniques and to provide a quantitative framework for assessing structural changes in the brain.
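A DCGAN-style generator for 64x64 axial slices, together with the cross-correlation uniqueness check mentioned in the abstract; the architecture is a generic sketch, not the one used in the study:

```python
import torch
import torch.nn as nn

class SliceGenerator(nn.Module):
    """DCGAN-style generator producing 64x64 slices from a latent code."""
    def __init__(self, z=100, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z, ch * 8, 4, 1, 0), nn.BatchNorm2d(ch * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1), nn.BatchNorm2d(ch * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1), nn.BatchNorm2d(ch * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 1, 4, 2, 1), nn.Tanh())

    def forward(self, z):
        return self.net(z)

def max_ncc(sample, train_set):
    """Uniqueness check: highest normalized cross-correlation between a
    synthesized slice and every training slice (values near 1 suggest the
    generator memorized a training example)."""
    s = (sample - sample.mean()) / sample.std()
    best = -1.0
    for t in train_set:
        tt = (t - t.mean()) / t.std()
        best = max(best, float((s * tt).mean()))
    return best

# Usage sketch: img = SliceGenerator()(torch.randn(1, 100, 1, 1))
```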
Affiliation(s)
- Camilo Bermudez: Department of Biomedical Engineering, Vanderbilt University, 2201 West End Ave, Nashville, TN 37235, USA
- Andrew J Plassard: Department of Computer Science, Vanderbilt University, 2201 West End Ave, Nashville, TN 37235, USA
- Taylor L Davis: Department of Radiology, Vanderbilt University Medical Center, 2201 West End Ave, Nashville, TN 37235, USA
- Allen T Newton: Department of Radiology, Vanderbilt University Medical Center, 2201 West End Ave, Nashville, TN 37235, USA
- Susan M Resnick: Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, MD, USA
- Bennett A Landman: Department of Biomedical Engineering, Vanderbilt University, 2201 West End Ave, Nashville, TN 37235, USA
23
Bowles C, Qin C, Guerrero R, Gunn R, Hammers A, Dickie DA, Valdés Hernández M, Wardlaw J, Rueckert D. Brain lesion segmentation through image synthesis and outlier detection. Neuroimage Clin 2017; 16:643-658. [PMID: 29868438 PMCID: PMC5984574 DOI: 10.1016/j.nicl.2017.09.003] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2017] [Revised: 08/30/2017] [Accepted: 09/04/2017] [Indexed: 11/02/2022]
Abstract
Cerebral small vessel disease (SVD) can manifest in a number of ways. Many of these result in hyperintense regions visible on T2-weighted magnetic resonance (MR) images. The automatic segmentation of these lesions has been the focus of many studies. However, previous methods tended to be limited to certain types of pathology, as a consequence of either restricting the search to the white matter or training on an individual pathology. Here we present an unsupervised abnormality detection method which is able to detect abnormally hyperintense regions on FLAIR regardless of the underlying pathology or location. The method uses a combination of image synthesis, Gaussian mixture models and one-class support vector machines, and needs to be trained only on healthy tissue. We evaluate our method by comparing segmentation results from 127 subjects with SVD against three established methods, and report significantly superior performance across a number of metrics.
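The outlier-detection stage can be sketched with scikit-learn: train a one-class SVM on difference features from healthy tissue, then flag abnormally hyperintense voxels. The full method also uses image synthesis and Gaussian mixture modelling upstream; the feature choice and nu value here are assumptions:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def lesion_candidates(flair, synth_flair, brain_mask, healthy_diffs):
    """Flag voxels whose real-minus-synthetic FLAIR difference is abnormal.
    healthy_diffs: (N, 1) array of difference samples from healthy tissue."""
    ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(healthy_diffs)
    diff = (flair - synth_flair)[brain_mask].reshape(-1, 1)
    labels = ocsvm.predict(diff)            # -1 marks outliers
    out = np.zeros(flair.shape, dtype=bool)
    out[brain_mask] = labels == -1
    # keep only hyperintense outliers, since lesions are bright on FLAIR
    return out & (flair > synth_flair)
```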
Affiliation(s)
- Chen Qin: Department of Computing, Imperial College London, UK
- Roger Gunn: Imanova Ltd., London, UK; Department of Medicine, Imperial College London, UK
- Alexander Hammers: Department of Computing, Imperial College London, UK; King's College London & Guy's and St Thomas' PET Centre, Division of Imaging Sciences and Biomedical Engineering, St Thomas' Hospital, King's College London, UK
- Joanna Wardlaw: Department of Neuroimaging Sciences, University of Edinburgh, UK
24
Burgos N, Guerreiro F, McClelland J, Presles B, Modat M, Nill S, Dearnaley D, deSouza N, Oelfke U, Knopf AC, Ourselin S, Jorge Cardoso M. Iterative framework for the joint segmentation and CT synthesis of MR images: application to MRI-only radiotherapy treatment planning. Phys Med Biol 2017; 62:4237-4253. [PMID: 28291745 PMCID: PMC5423555 DOI: 10.1088/1361-6560/aa66bf] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2016] [Revised: 03/10/2017] [Accepted: 03/14/2017] [Indexed: 11/11/2022]
Abstract
To tackle the problem of magnetic resonance imaging (MRI)-only radiotherapy treatment planning (RTP), we propose a multi-atlas information propagation scheme that jointly segments organs and generates pseudo x-ray computed tomography (CT) data from structural MR images (T1-weighted and T2-weighted). As the performance of the method strongly depends on the quality of the atlas database composed of multiple sets of aligned MR, CT and segmented images, we also propose a robust way of registering atlas MR and CT images, which combines structure-guided registration, and CT and MR image synthesis. We first evaluated the proposed framework in terms of segmentation and CT synthesis accuracy on 15 subjects with prostate cancer. The segmentations obtained with the proposed method were compared using the Dice score coefficient (DSC) to the manual segmentations. Mean DSCs of 0.73, 0.90, 0.77 and 0.90 were obtained for the prostate, bladder, rectum and femur heads, respectively. The mean absolute error (MAE) and the mean error (ME) were computed between the reference CTs (non-rigidly aligned to the MRs) and the pseudo CTs generated with the proposed method. The MAE was on average [Formula: see text] HU and the ME [Formula: see text] HU. We then performed a dosimetric evaluation by re-calculating plans on the pseudo CTs and comparing them to the plans optimised on the reference CTs. We compared the cumulative dose volume histograms (DVH) obtained for the pseudo CTs to the DVH obtained for the reference CTs in the planning target volume (PTV) located in the prostate, and in the organs at risk at different DVH points. We obtained average differences of [Formula: see text] in the PTV for [Formula: see text], and between [Formula: see text] and 0.05% in the PTV, bladder, rectum and femur heads for D mean and [Formula: see text]. Overall, we demonstrate that the proposed framework is able to automatically generate accurate pseudo CT images and segmentations in the pelvic region, potentially bypassing the need for CT scan for accurate RTP.
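The dosimetric evaluation rests on cumulative dose-volume histograms. A minimal sketch of computing a DVH and reading off a D_v point, assuming dose grids recomputed on the reference and pseudo CTs are already available:

```python
import numpy as np

def dvh(dose, mask, bins=200):
    """Cumulative dose-volume histogram for one structure:
    percent of structure volume receiving at least each dose level."""
    d = dose[mask]
    edges = np.linspace(0.0, d.max(), bins)
    volume = np.array([(d >= e).mean() * 100.0 for e in edges])
    return edges, volume

def dose_at_volume(edges, volume, v_pct):
    """D_v: the minimum dose received by the hottest v% of the structure."""
    return float(np.interp(v_pct, volume[::-1], edges[::-1]))

# Usage sketch (dose_ref/dose_pseudo and ptv_mask are assumed inputs):
# e_r, v_r = dvh(dose_ref, ptv_mask); e_p, v_p = dvh(dose_pseudo, ptv_mask)
# delta_d98 = dose_at_volume(e_p, v_p, 98) - dose_at_volume(e_r, v_r, 98)
```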
Affiliation(s)
- Ninon Burgos: Translational Imaging Group, CMIC, University College London, London, United Kingdom
- Filipa Guerreiro: Department of Radiotherapy, University Medical Center Utrecht, Utrecht, Netherlands
- Jamie McClelland: Centre for Medical Image Computing, University College London, London, United Kingdom
- Benoît Presles: Translational Imaging Group, CMIC, University College London, London, United Kingdom
- Marc Modat: Translational Imaging Group, CMIC, University College London, London, United Kingdom; Dementia Research Centre, Institute of Neurology, UCL, London, United Kingdom
- Simeon Nill: Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust (ICR/RMH), London, United Kingdom
- Nandita deSouza: CRUK Centre for Cancer Imaging, ICR/RMH, Sutton, United Kingdom
- Uwe Oelfke: Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust (ICR/RMH), London, United Kingdom
- Antje-Christin Knopf: University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, Netherlands
- Sébastien Ourselin: Translational Imaging Group, CMIC, University College London, London, United Kingdom; Dementia Research Centre, Institute of Neurology, UCL, London, United Kingdom
- M Jorge Cardoso: Translational Imaging Group, CMIC, University College London, London, United Kingdom; Dementia Research Centre, Institute of Neurology, UCL, London, United Kingdom
25
Lee J, Carass A, Jog A, Zhao C, Prince JL. Multi-atlas-based CT synthesis from conventional MRI with patch-based refinement for MRI-based radiotherapy planning. Proc SPIE Int Soc Opt Eng 2017; 10133. [PMID: 29142336 DOI: 10.1117/12.2254571] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Accurate CT synthesis, sometimes called electron density estimation, from MRI is crucial for successful MRI-based radiotherapy planning and dose computation. Existing CT synthesis methods are able to synthesize normal tissues but are unable to accurately synthesize abnormal tissues (i.e., tumor), thus providing a suboptimal solution. We propose a multi-atlas-based hybrid synthesis approach that combines multi-atlas registration and patch-based synthesis to accurately synthesize both normal and abnormal tissues. Multi-parametric atlas MR images are registered to the target MR images by multi-channel deformable registration, from which the atlas CT images are deformed and fused by locally-weighted averaging using a structural similarity measure (SSIM). Synthetic MR images are also computed from the registered atlas MRIs by using the same weights used for the CT synthesis; these are compared to the target patient MRIs allowing for the assessment of the CT synthesis fidelity. Poor synthesis regions are automatically detected based on the fidelity measure and refined by a patch-based synthesis. The proposed approach was tested on brain cancer patient data, and showed a noticeable improvement for the tumor region.
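The locally-weighted fusion step can be sketched with scikit-image: local SSIM maps between each warped atlas MR and the target MR serve both as fusion weights for the atlas CTs and as the fidelity measure that flags regions for patch-based refinement. The 0.4 threshold is an assumption:

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_weighted_fusion(target_mr, atlas_mrs, atlas_cts, eps=1e-6):
    """Locally weighted averaging of registered atlas CTs, weighted by the
    local SSIM between each warped atlas MR and the target MR."""
    rng = target_mr.max() - target_mr.min()
    weights = []
    for mr in atlas_mrs:
        _, smap = structural_similarity(target_mr, mr, full=True,
                                        data_range=rng)
        weights.append(np.clip(smap, 0.0, None) + eps)
    fused = np.zeros_like(atlas_cts[0], dtype=float)
    for w, ct in zip(weights, atlas_cts):
        fused += w * ct
    sct = fused / np.sum(weights, axis=0)
    fidelity = np.max(weights, axis=0)        # low values flag poor synthesis
    needs_patch_refinement = fidelity < 0.4   # threshold is an assumption
    return sct, needs_patch_refinement
```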
Affiliation(s)
- Junghoon Lee: Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD, USA
- Aaron Carass: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Amod Jog: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Can Zhao: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Jerry L Prince: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
26
Chen M, Carass A, Jog A, Lee J, Roy S, Prince JL. Cross contrast multi-channel image registration using image synthesis for MR brain images. Med Image Anal 2017; 36:2-14. [PMID: 27816859 PMCID: PMC5239759 DOI: 10.1016/j.media.2016.10.005] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2015] [Revised: 10/13/2016] [Accepted: 10/17/2016] [Indexed: 11/21/2022]
Abstract
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information.
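Once proxy images have been synthesized, the registration itself can use any mono-modal machinery. A sketch with SimpleITK using a mean-squares (SSD) metric and a rigid initialization; the synthesis step and the two-channel weighting are assumed to happen elsewhere:

```python
import SimpleITK as sitk

def monomodal_register(fixed_proxy, moving):
    """Register a moving image to a synthesized proxy of the same contrast,
    so a mean-squares (SSD) metric applies."""
    fixed = sitk.Cast(fixed_proxy, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()
    reg.SetOptimizerAsGradientDescent(learningRate=1.0,
                                      numberOfIterations=100)
    reg.SetInitialTransform(
        sitk.CenteredTransformInitializer(fixed, moving,
                                          sitk.Euler3DTransform()))
    reg.SetInterpolator(sitk.sitkLinear)
    return reg.Execute(fixed, moving)
```

A rigid transform keeps the sketch simple; the framework in the abstract uses deformable multi-channel algorithms, which would replace the initial transform and optimizer here.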
Affiliation(s)
- Min Chen: Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Aaron Carass: Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Amod Jog: Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Junghoon Lee: Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Radiation Oncology and Molecular Radiation Sciences, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Snehashis Roy: CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20892, USA
- Jerry L Prince: Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA; Radiation Oncology and Molecular Radiation Sciences, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
27
Roy S, Butman JA, Pham DL. Robust skull stripping using multiple MR image contrasts insensitive to pathology. Neuroimage 2017; 146:132-147. [PMID: 27864083 PMCID: PMC5321800 DOI: 10.1016/j.neuroimage.2016.11.017] [Citation(s) in RCA: 64] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2016] [Revised: 10/31/2016] [Accepted: 11/04/2016] [Indexed: 01/18/2023]
Abstract
Automatic skull-stripping or brain extraction of magnetic resonance (MR) images is often a fundamental step in many neuroimage processing pipelines. The accuracy of subsequent image processing relies on the accuracy of the skull-stripping. Although many automated stripping methods have been proposed in the past, it is still an active area of research, particularly in the context of brain pathology. Most stripping methods are validated on T1-w MR images of normal brains, especially because high-resolution T1-w sequences are widely acquired and ground-truth manual brain mask segmentations are publicly available for normal brains. However, different MR acquisition protocols can provide complementary information about the brain tissues, which can be exploited for better distinction between brain, cerebrospinal fluid, and unwanted tissues such as skull, dura, marrow, or fat. This is especially true in the presence of pathology, where hemorrhages or other types of lesions can have intensities similar to skull in a T1-w image. In this paper, we propose a sparse patch-based Multi-cONtrast brain STRipping method (MONSTR), where non-local patch information from one or more atlases, which contain multiple MR sequences and reference delineations of brain masks, is combined to generate a target brain mask. We compared MONSTR with four state-of-the-art, publicly available methods: BEaST, SPECTRE, ROBEX, and OptiBET. We evaluated the performance of these methods on 6 datasets consisting of both healthy subjects and patients with various pathologies. Three datasets (ADNI, MRBrainS, NAMIC) are publicly available, consisting of 44 healthy volunteers and 10 patients with schizophrenia. The other three, in-house datasets comprising 87 subjects in total, consisted of patients with mild to severe traumatic brain injury, brain tumors, and various movement disorders. A combination of T1-w and T2-w images was used to skull-strip these datasets. We show significant improvement in stripping over the competing methods on both healthy and pathological brains. We also show that our multi-contrast framework is robust and maintains accurate performance across different types of acquisitions and scanners, even when using normal brains as atlases to strip pathological brains, demonstrating that our algorithm is applicable even when reference segmentations of pathological brains are not available to be used as atlases.
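A toy 2D rendering of the multi-contrast patch idea: concatenate T1 and T2 patches into a joint feature, find each target patch's nearest atlas patch, and copy the atlas brain-mask label at its centre. The 1-NN lookup stands in for MONSTR's sparse patch reconstruction over several atlases:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def grid_patches(img, r=2, step=2):
    """(2r+1)^2 patches sampled on a regular grid; returns centres, patches."""
    centres, feats = [], []
    H, W = img.shape
    for y in range(r, H - r, step):
        for x in range(r, W - r, step):
            centres.append((y, x))
            feats.append(img[y - r:y + r + 1, x - r:x + r + 1].ravel())
    return np.array(centres), np.array(feats, dtype=float)

def patch_strip(t1, t2, atlas_t1, atlas_t2, atlas_brain_mask, r=2):
    """Label each sampled target location with the brain-mask value at the
    centre of its nearest multi-contrast atlas patch. Only grid centres are
    labelled; a real implementation would label densely and regularize."""
    ac, a1 = grid_patches(atlas_t1, r)
    _, a2 = grid_patches(atlas_t2, r)
    tc, f1 = grid_patches(t1, r)
    _, f2 = grid_patches(t2, r)
    nn = NearestNeighbors(n_neighbors=1).fit(np.hstack([a1, a2]))
    _, idx = nn.kneighbors(np.hstack([f1, f2]))
    mask = np.zeros(t1.shape, dtype=bool)
    for (y, x), i in zip(tc, idx[:, 0]):
        mask[y, x] = atlas_brain_mask[tuple(ac[i])]
    return mask
```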
Affiliation(s)
- Snehashis Roy: Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation, United States
- John A Butman: Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation, United States; Diagnostic Radiology Department, National Institutes of Health, United States
- Dzung L Pham: Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation, United States
28
Roy S, Butman JA, Pham DL. Synthesizing CT from Ultrashort Echo-Time MR Images via Convolutional Neural Networks. Simulation and Synthesis in Medical Imaging (SASHIMI 2017). [DOI: 10.1007/978-3-319-68127-6_3] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
29
Yang X, Han X, Park E, Aylward S, Kwitt R, Niethammer M. Registration of Pathological Images. Simulation and Synthesis in Medical Imaging (SASHIMI 2016), LNCS 9968:97-107. [PMID: 29896582 DOI: 10.1007/978-3-319-46630-9_10] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/21/2023]
Abstract
This paper proposes an approach to improve atlas-to-image registration accuracy with large pathologies. Instead of directly registering an atlas to a pathological image, the method learns a mapping from the pathological image to a quasi-normal image, for which more accurate registration is possible. Specifically, the method uses a deep variational convolutional encoder-decoder network to learn the mapping. Furthermore, the method estimates local mapping uncertainty through network inference statistics and uses those estimates to down-weight the image registration similarity measure in areas of high uncertainty. The performance of the method is quantified using synthetic brain tumor images and images from the brain tumor segmentation challenge (BRATS 2015).
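The uncertainty-aware similarity is the simplest piece to illustrate: down-weight the squared difference wherever the pathological-to-quasi-normal mapping is uncertain. The weighting function below is one plausible choice, not necessarily the paper's:

```python
import numpy as np

def uncertainty_weighted_ssd(atlas_warped, quasi_normal, sigma):
    """Registration similarity down-weighted where the mapping from the
    pathological image to its quasi-normal counterpart is uncertain;
    sigma would come from network inference statistics (e.g., sampling
    the variational decoder), and the 1/(1+sigma^2) form is an assumption."""
    w = 1.0 / (1.0 + sigma ** 2)
    return float((w * (atlas_warped - quasi_normal) ** 2).mean())
```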
Affiliation(s)
- Xu Han: UNC Chapel Hill, Chapel Hill, USA
- Roland Kwitt: Department of Computer Science, University of Salzburg, Austria
- Marc Niethammer: UNC Chapel Hill, Chapel Hill, USA; Biomedical Research Imaging Center, Chapel Hill, USA
30
Patch Based Synthesis of Whole Head MR Images: Application to EPI Distortion Correction. Simulation and Synthesis in Medical Imaging (SASHIMI 2016), LNCS 9968:146-156. [PMID: 28367541 DOI: 10.1007/978-3-319-46630-9_15] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/21/2023]
Abstract
Different magnetic resonance imaging pulse sequences are used to generate image contrasts based on physical properties of tissues, which provide different and often complementary information about them. Multiple image contrasts are therefore useful for multimodal analysis of medical images. Often, medical image processing algorithms are optimized for particular image contrasts. If a desirable contrast is unavailable, contrast synthesis (or modality synthesis) methods try to "synthesize" the unavailable contrasts from the available ones. Most recent image synthesis methods generate synthetic brain images, while whole-head magnetic resonance (MR) images can also be useful for many applications. We propose an atlas-based patch matching algorithm to synthesize T2-w whole-head images (including brain, skull, eyes, etc.) from T1-w images for the purpose of distortion correction of diffusion-weighted MR images. The geometric distortion in diffusion MR images due to the inhomogeneous B0 magnetic field is often corrected by non-linearly registering the corresponding b = 0 image with zero diffusion gradient to an undistorted T2-w image. We show that our synthetic T2-w images can be used as a template in the absence of a real T2-w image. Our patch-based method requires multiple atlases with T1 and T2 images to be registered to a given target T1. Then, for every patch on the target, multiple similar-looking patches are found on the atlas T1 images, and the corresponding patches on the atlas T2 images are combined to generate a synthetic T2 of the target. We experimented on image data obtained from 44 patients with traumatic brain injury (TBI), and showed that our synthesized T2 images produce more accurate distortion correction than a state-of-the-art registration-based image synthesis method.
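The patch combination step can be sketched as a k-nearest-neighbour regression, assuming the atlas and target T1 patch matrices have already been extracted from registered volumes; k and the Gaussian bandwidth h are assumptions:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def synth_t2_centers(target_t1_patches, atlas_t1_patches, atlas_t2_centers,
                     k=5, h=0.05):
    """For each target T1 patch, average the T2 intensities at the centres
    of its k most similar atlas T1 patches, weighted by patch similarity.

    target_t1_patches: (N, P) flattened target patches
    atlas_t1_patches:  (M, P) flattened atlas patches
    atlas_t2_centers:  (M,)   T2 intensity at each atlas patch centre
    """
    nn = NearestNeighbors(n_neighbors=k).fit(atlas_t1_patches)
    dist, idx = nn.kneighbors(target_t1_patches)
    w = np.exp(-dist ** 2 / (2.0 * h ** 2))
    w /= w.sum(axis=1, keepdims=True)
    return (w * atlas_t2_centers[idx]).sum(axis=1)  # (N,) synthetic T2 values
```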
31
Huynh T, Gao Y, Kang J, Wang L, Zhang P, Lian J, Shen D. Estimating CT Image From MRI Data Using Structured Random Forest and Auto-Context Model. IEEE Trans Med Imaging 2016; 35:174-83. [PMID: 26241970 PMCID: PMC4703527 DOI: 10.1109/tmi.2015.2461533] [Citation(s) in RCA: 154] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
Computed tomography (CT) imaging is an essential tool in various clinical diagnoses and radiotherapy treatment planning. Since CT image intensities are directly related to positron emission tomography (PET) attenuation coefficients, they are indispensable for attenuation correction (AC) of PET images. However, due to the relatively high dose of radiation exposure in CT scans, it is advised to limit the acquisition of CT images. In addition, in combined PET and magnetic resonance (MR) imaging scanners, only MR images are available, which are unfortunately not directly applicable to AC. These issues greatly motivate the development of methods for reliable estimation of CT images from corresponding MR images of the same subject. In this paper, we propose a learning-based method to tackle this challenging problem. Specifically, we first partition a given MR image into a set of patches. Then, for each patch, we use a structured random forest to directly predict a CT patch as a structured output, where a new ensemble model is also used to ensure robust prediction. Image features are crafted to achieve multi-level sensitivity, with spatial information integrated through only rigid-body alignment to help avoid error-prone inter-subject deformable registration. Moreover, we use an auto-context model to iteratively refine the prediction. Finally, we combine all of the predicted CT patches to obtain the final prediction for the given MR image. We demonstrate the efficacy of our method on two datasets: human brain and prostate images. Experimental results show that our method can accurately predict CT images in various scenarios, even for images undergoing large shape variation, and also outperforms two state-of-the-art methods.
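A rough scikit-learn analogue: multi-output random forests stand in for structured forests, and each auto-context round appends the previous round's prediction to the features before retraining:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_auto_context(mr_feats, ct_patches, rounds=2):
    """Train one multi-output forest per auto-context round.
    mr_feats: (N, F) per-patch MR features; ct_patches: (N, P) flat CT patches."""
    forests, feats = [], mr_feats
    for _ in range(rounds):
        rf = RandomForestRegressor(n_estimators=50).fit(feats, ct_patches)
        forests.append(rf)
        # next round sees the original features plus this round's prediction
        feats = np.hstack([mr_feats, rf.predict(feats)])
    return forests

def predict_auto_context(forests, mr_feats):
    """Apply the rounds in order, refining the CT-patch prediction."""
    feats, pred = mr_feats, None
    for rf in forests:
        pred = rf.predict(feats)
        feats = np.hstack([mr_feats, pred])
    return pred  # (N, P) predicted CT patches, to be recomposed into a volume
```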
Affiliation(s)
- Tri Huynh: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Yaozong Gao: Department of Computer Science, and also the IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Jiayin Kang: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Li Wang: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Pei Zhang: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Jun Lian: Department of Radiation Oncology, University of North Carolina at Chapel Hill, NC, USA
- Dinggang Shen: Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 136-071, Korea
32
Roy S, Wang WT, Carass A, Prince JL, Butman JA, Pham DL. PET attenuation correction using synthetic CT from ultrashort echo-time MR imaging. J Nucl Med 2014; 55:2071-7. [PMID: 25413135 DOI: 10.2967/jnumed.114.143958] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
UNLABELLED Integrated PET/MR systems are becoming increasingly popular in clinical and research applications. Quantitative PET reconstruction requires correction for γ-photon attenuation using an attenuation coefficient map (μ map) that is a measure of the electron density. One challenge of PET/MR, in contrast to PET/CT, lies in the accurate computation of μ maps. Unlike CT, MR imaging measures physical properties not directly related to electron density. Previous approaches have computed the attenuation coefficients using a segmentation of MR images or using deformable registration of atlas CT images to the space of the subject MR images. METHODS In this work, we propose a patch-based method to generate whole-head μ maps from ultrashort echo-time (UTE) MR imaging sequences. UTE images are preferred to other MR sequences because of the increased signal from bone. To generate a synthetic CT image, we use patches from a reference dataset, which consists of dual-echo UTE images and a coregistered CT scan from the same subject. Matching of patches between the reference and target images allows corresponding patches from the reference CT scan to be combined via a Bayesian framework. No registration or segmentation is required. RESULTS For evaluation, UTE, CT, and PET data acquired from 5 patients under an institutional review board-approved protocol were used. Another patient (with UTE and CT data only) was selected as the reference to generate synthetic CT images for these 5 patients. PET reconstructions were attenuation-corrected using the original CT, our synthetic CT, Siemens Dixon-based μ maps, Siemens UTE-based μ maps, and deformable registration-based CT. Our synthetic CT-based PET reconstruction showed higher correlation (average ρ = 0.996, R² = 0.991) with the original CT-based PET, as compared with the segmentation- and registration-based methods. Synthetic CT-based reconstruction had minimal bias (regression slope, 0.990), as compared with the segmentation-based methods (regression slope, 0.905). A peak signal-to-noise ratio of 35.98 dB in the reconstructed PET activity was observed, compared with 29.767, 29.34, and 27.43 dB for the Siemens Dixon-, UTE-, and registration-based μ maps. CONCLUSION A patch-matching approach to synthesizing CT images from dual-echo UTE images leads to significantly improved accuracy of PET reconstruction as compared with actual CT scans. The PET reconstruction is improved over segmentation- (Dixon and Siemens UTE) and registration-based methods, even in subjects with pathologic findings.
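Once a synthetic CT is available, PET attenuation correction needs CT numbers converted to 511 keV linear attenuation coefficients. A sketch of the standard bilinear scaling; the water and bone coefficients are typical literature values, not those of the paper:

```python
import numpy as np

def hu_to_mu_511kev(hu):
    """Piecewise-linear conversion of CT numbers to 511 keV linear
    attenuation coefficients (cm^-1), the usual bilinear scaling for PET AC.
    mu_water, mu_bone, and the bone breakpoint are illustrative values."""
    mu_water, mu_bone, hu_bone = 0.096, 0.17, 1500.0
    mu = np.where(
        hu <= 0,
        mu_water * (1000.0 + hu) / 1000.0,                 # air-to-water ramp
        mu_water + hu * (mu_bone - mu_water) / hu_bone)    # water-to-bone ramp
    return np.clip(mu, 0.0, None)
```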
Affiliation(s)
- Snehashis Roy: Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation, Bethesda, Maryland
- Wen-Tung Wang: Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation, Bethesda, Maryland
- Aaron Carass: Image Analysis and Communications Laboratory, Johns Hopkins University, Baltimore, Maryland
- Jerry L Prince: Image Analysis and Communications Laboratory, Johns Hopkins University, Baltimore, Maryland
- John A Butman: Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Maryland
- Dzung L Pham: Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation, Bethesda, Maryland