1. Xu C, Li J, Wang Y, Wang L, Wang Y, Zhang X, Liu W, Chen J, Vatian A, Gusarova N, Ye C, Zheng Z. SiMix: A domain generalization method for cross-site brain MRI harmonization via site mixing. Neuroimage 2024; 299:120812. [PMID: 39197559] [DOI: 10.1016/j.neuroimage.2024.120812]
Abstract
Brain magnetic resonance imaging (MRI) is widely used in clinical practice for disease diagnosis. However, MRI scans acquired at different sites can have different appearances due to differences in hardware, pulse sequences, and imaging parameters. It is important to reduce or eliminate such cross-site variations with brain MRI harmonization so that downstream image processing and analysis are performed consistently. Previous works on the harmonization problem require data acquired from the sites of interest for model training. In real-world scenarios, however, test data can arrive from a new site of interest after the model is trained, with no training data from that site available at training time. In this case, previous methods cannot optimally handle the test data from the new unseen site. To address the problem, in this work we explore domain generalization for brain MRI harmonization and propose Site Mix (SiMix). We assume that images of travelling subjects are acquired at a few existing sites for model training. To allow the training data to better represent test data from unseen sites, we first propose to stochastically mix training images belonging to different sites, which substantially increases the diversity of the training data while preserving the authenticity of the mixed training images. Second, at test time, when a test image from an unseen site is given, we propose a multiview strategy that perturbs the test image with preserved authenticity and ensembles the harmonization results of the perturbed images for improved harmonization quality. To validate SiMix, we performed experiments on the publicly available SRPBS and MUSHAC datasets, which comprise brain MRI acquired at nine and two different sites, respectively. The results indicate that SiMix improves brain MRI harmonization for unseen sites, and it also benefits the harmonization of existing sites.
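The core mixing step described above can be sketched in a few lines. This is an illustrative reading of the abstract (a random convex combination of a travelling subject's co-registered scans from different sites), not the authors' released code; `simix` is a hypothetical helper name.

```python
import numpy as np

def simix(images, rng=None):
    """Stochastically mix co-registered travelling-subject scans acquired
    at different sites (illustrative sketch, not the authors' code).

    images: array of shape (n_sites, H, W) -- the same subject scanned
            at several training sites, spatially aligned.
    Returns one mixed image whose site "style" is a random convex
    combination of the input sites, while the shared anatomy is kept.
    """
    rng = rng or np.random.default_rng()
    images = np.asarray(images, dtype=float)
    # Random convex weights over sites (Dirichlet keeps them non-negative
    # and summing to one, so intensities stay in a realistic range).
    w = rng.dirichlet(np.ones(images.shape[0]))
    return np.tensordot(w, images, axes=1)

# Example: mix a subject's scans from three training sites.
scans = np.stack([np.full((4, 4), v) for v in (1.0, 2.0, 4.0)])
mixed = simix(scans)
```

Because the weights are convex, mixed intensities never leave the range spanned by the source sites, which is one way to read the abstract's claim that mixing preserves authenticity.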
Affiliation(s)
- Chundan Xu: School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing, China
- Jie Li: Department of Radiology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Yakui Wang: Department of Radiology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Lixue Wang: Department of Radiology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Yizhe Wang: Department of Radiology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Xiaofeng Zhang: School of Information and Electronics, Beijing Institute of Technology, Zhuhai, China
- Weiqi Liu: Sophmind Technology (Beijing) Co., Ltd., Beijing, China
- Jingang Chen: Sophmind Technology (Beijing) Co., Ltd., Beijing, China
- Aleksandra Vatian: Faculty of Infocommunicational Technologies, ITMO University, St. Petersburg, Russia
- Natalia Gusarova: Faculty of Infocommunicational Technologies, ITMO University, St. Petersburg, Russia
- Chuyang Ye: School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing, China
- Zhuozhao Zheng: Department of Radiology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
2. Lu X, Liang X, Liu W, Miao X, Guan X. ReeGAN: MRI image edge-preserving synthesis based on GANs trained with misaligned data. Med Biol Eng Comput 2024; 62:1851-1868. [PMID: 38396277] [DOI: 10.1007/s11517-024-03035-w]
Abstract
As a crucial medical examination technique, the different modalities of magnetic resonance imaging (MRI) complement each other, offering multi-angle and multi-dimensional insights into the body's internal structure. Research on MRI cross-modality conversion is therefore of great significance, and many innovative techniques have been explored. However, most methods are trained on well-aligned data, and the impact of misaligned data has not received sufficient attention. Additionally, many methods focus on transforming the entire image and ignore crucial edge information. To address these challenges, we propose a generative adversarial network based on multi-feature fusion that effectively preserves edge information while training on noisy data. Notably, we treat images with limited-range random transformations as noisy labels and use a small auxiliary registration network to help the generator adapt to the noise distribution. Moreover, we inject auxiliary edge information to improve the quality of the synthesized target-modality images. Our goal is to find the best solution for cross-modality conversion. Comprehensive experiments and ablation studies demonstrate the effectiveness of the proposed method.
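The "auxiliary edge information" mentioned above is commonly obtained with a gradient operator. The sketch below uses a plain Sobel filter as an assumed stand-in (the abstract does not specify the operator), and `sobel_edges` is a hypothetical helper name.

```python
import numpy as np

def sobel_edges(img):
    """Compute a Sobel gradient-magnitude edge map (a common, assumed
    choice for auxiliary edge input; not necessarily the paper's)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Naive valid-region convolution; borders are left at zero.
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

img = np.zeros((6, 6))
img[:, 3:] = 1.0          # a vertical step edge
edges = sobel_edges(img)  # responds only along the step
```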
Affiliation(s)
- Xiangjiang Lu: Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
- Xiaoshuang Liang: Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
- Wenjing Liu: Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
- Xiuxia Miao: Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
- Xianglong Guan: Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
3. Lin H, Figini M, D'Arco F, Ogbole G, Tanno R, Blumberg SB, Ronan L, Brown BJ, Carmichael DW, Lagunju I, Cross JH, Fernandez-Reyes D, Alexander DC. Low-field magnetic resonance image enhancement via stochastic image quality transfer. Med Image Anal 2023; 87:102807. [PMID: 37120992] [DOI: 10.1016/j.media.2023.102807]
Abstract
Low-field (<1 T) magnetic resonance imaging (MRI) scanners remain in widespread use in low- and middle-income countries (LMICs) and are commonly used for some applications in higher-income countries, e.g., for small children and for patients with obesity, claustrophobia, implants, or tattoos. However, low-field MR images commonly have lower resolution and poorer contrast than images from high-field scanners (1.5 T, 3 T, and above). Here, we present Image Quality Transfer (IQT) to enhance low-field structural MRI by estimating, from a low-field image, the image we would have obtained from the same subject at high field. Our approach uses (i) a stochastic low-field image simulator as the forward model to capture uncertainty and variation in the contrast of low-field images corresponding to a particular high-field image, and (ii) an anisotropic U-Net variant specifically designed for the IQT inverse problem. We evaluate the proposed algorithm both in simulation and using multi-contrast (T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR)) clinical low-field MRI data from an LMIC hospital. We show the efficacy of IQT in improving the contrast and resolution of low-field MR images, and demonstrate that, from the perspective of radiologists, IQT-enhanced images have potential for improving the visualisation of anatomical structures and pathological lesions of clinical relevance. IQT thus shows the capability to boost the diagnostic value of low-field MRI, especially in low-resource settings.
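A stochastic forward model of the kind described above can be illustrated as follows. The specific degradations (2x2 block averaging, a random contrast exponent, Gaussian noise) are our simplifying assumptions for the sketch, not the paper's calibrated simulator, and `simulate_low_field` is a hypothetical name.

```python
import numpy as np

def simulate_low_field(hf_img, rng=None):
    """Toy stochastic forward model in the spirit of IQT: degrade a
    high-field image with resolution loss, a random contrast change,
    and noise, so that one high-field image maps to many plausible
    low-field appearances (illustrative assumptions only)."""
    rng = rng or np.random.default_rng()
    h, w = hf_img.shape
    # Resolution loss: crude 2x2 block-averaging downsample.
    low = hf_img[: h // 2 * 2, : w // 2 * 2]
    low = low.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    gamma = rng.uniform(0.7, 1.3)              # random contrast change
    noise = rng.normal(0.0, 0.05, low.shape)   # scanner noise
    return np.clip(low ** gamma + noise, 0.0, None)

hf = np.abs(np.random.default_rng(0).normal(0.5, 0.2, (8, 8)))
lf = simulate_low_field(hf)  # one random low-field realisation
```

Sampling this simulator repeatedly for the same high-field image yields the one-to-many training pairs that motivate the stochastic formulation.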
Affiliation(s)
- Hongxiang Lin: Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou 311121, Zhejiang, China; Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom; Department of Computer Science, University College London, London WC1E 6BT, United Kingdom
- Matteo Figini: Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom; Department of Computer Science, University College London, London WC1E 6BT, United Kingdom
- Felice D'Arco: Department of Radiology, Great Ormond Street Hospital for Children, London WC1N 3JH, United Kingdom
- Godwin Ogbole: Department of Radiology, College of Medicine, University of Ibadan, Ibadan 200284, Nigeria
- Stefano B Blumberg: Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom; Department of Computer Science, University College London, London WC1E 6BT, United Kingdom; Centre for Artificial Intelligence, University College London, London WC1E 6BT, United Kingdom
- Lisa Ronan: Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom; Department of Computer Science, University College London, London WC1E 6BT, United Kingdom
- Biobele J Brown: Department of Paediatrics, College of Medicine, University of Ibadan, Ibadan 200284, Nigeria
- David W Carmichael: School of Biomedical Engineering & Imaging Sciences, King's College London, London NW3 3ES, United Kingdom; UCL Great Ormond Street Institute of Child Health, London WC1N 3JH, United Kingdom
- Ikeoluwa Lagunju: Department of Paediatrics, College of Medicine, University of Ibadan, Ibadan 200284, Nigeria
- Judith Helen Cross: UCL Great Ormond Street Institute of Child Health, London WC1N 3JH, United Kingdom
- Delmiro Fernandez-Reyes: Department of Computer Science, University College London, London WC1E 6BT, United Kingdom; Department of Paediatrics, College of Medicine, University of Ibadan, Ibadan 200284, Nigeria
- Daniel C Alexander: Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom; Department of Computer Science, University College London, London WC1E 6BT, United Kingdom
4. Kawahara D, Yoshimura H, Matsuura T, Saito A, Nagata Y. MRI image synthesis for fluid-attenuated inversion recovery and diffusion-weighted images with deep learning. Phys Eng Sci Med 2023; 46:313-323. [PMID: 36715853] [DOI: 10.1007/s13246-023-01220-z]
Abstract
This study aims to synthesize fluid-attenuated inversion recovery (FLAIR) and diffusion-weighted images (DWI) from T1- and T2-weighted magnetic resonance imaging (MRI) images with a deep conditional adversarial network. A total of 1980 images of 102 patients were split into two datasets: 1470 (68 patients) in a training set and 510 (34 patients) in a test set. The prediction framework was based on a convolutional neural network with a generator and a discriminator. T1-weighted, T2-weighted, and composite images were used as inputs. The digital imaging and communications in medicine (DICOM) images were converted to 8-bit red-green-blue (RGB) images. The red and blue channels of the composite images were assigned the 8-bit grayscale pixel values of the T1-weighted images, and the green channel was assigned those of the T2-weighted images. The predicted FLAIR and DWI images were of the same subjects as the inputs. In the results, the prediction model with composite MRI inputs showed the smallest relative mean absolute error (rMAE) and largest mutual information (MI) for the DWI image, and the largest relative mean-square error (rMSE), relative root-mean-square error (rRMSE), and peak signal-to-noise ratio (PSNR) for the FLAIR image. For the FLAIR image, the prediction model with T2-weighted inputs generated more accurate syntheses than that with T1-weighted inputs. The proposed image synthesis framework can improve the versatility and quality of multi-contrast MRI without extra scans, and the composite input image contributes to synthesizing multi-contrast MRI efficiently.
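The channel-assignment rule for the composite input is concrete enough to sketch directly. `make_composite` is a hypothetical helper name, but the mapping (red and blue carry T1-weighted values, green carries T2-weighted values) follows the abstract; the min-max scaling to 8 bits is our assumption.

```python
import numpy as np

def make_composite(t1, t2):
    """Build an 8-bit RGB composite: R = T1, G = T2, B = T1, as
    described in the abstract (scaling scheme is an assumption)."""
    def to_uint8(img):
        img = img.astype(float)
        lo, hi = img.min(), img.max()
        scaled = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
        return (scaled * 255).astype(np.uint8)

    t1_8, t2_8 = to_uint8(t1), to_uint8(t2)
    # Stack channels last: identical T1 data in red and blue.
    return np.stack([t1_8, t2_8, t1_8], axis=-1)

t1 = np.arange(16, dtype=float).reshape(4, 4)
t2 = t1[::-1]
rgb = make_composite(t1, t2)  # shape (4, 4, 3), dtype uint8
```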
Affiliation(s)
- Daisuke Kawahara: Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
- Hisanori Yoshimura: Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan; Department of Radiology, National Hospital Organization Kure Medical Center, Hiroshima, 737-0023, Japan
- Takaaki Matsuura: Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
- Akito Saito: Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
- Yasushi Nagata: Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan; Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, 732-0057, Japan
5. Zhang X, He X, Guo J, Ettehadi N, Aw N, Semanek D, Posner J, Laine A, Wang Y. PTNet3D: A 3D high-resolution longitudinal infant brain MRI synthesizer based on transformers. IEEE Trans Med Imaging 2022; 41:2925-2940. [PMID: 35560070] [PMCID: PMC9529847] [DOI: 10.1109/tmi.2022.3174827]
Abstract
An increased interest in longitudinal neurodevelopment during the first few years after birth has emerged in recent years. Noninvasive magnetic resonance imaging (MRI) can provide crucial information about the development of brain structures in the early months of life. Despite the success of MRI collection and analysis in adults, it remains a challenge for researchers to collect high-quality multimodal MRIs from developing infant brains because of their irregular sleep patterns, limited attention, and inability to follow instructions to stay still during scanning. In addition, limited analytic approaches are available. These challenges often lead to a significant reduction in usable MRI scans and pose a problem for modeling neurodevelopmental trajectories. Researchers have explored solving this problem by synthesizing realistic MRIs to replace corrupted ones. Among synthesis methods, convolutional neural network-based (CNN-based) generative adversarial networks (GANs) have demonstrated promising performance. In this study, we introduce a novel 3D MRI synthesis framework, the pyramid transformer network (PTNet3D), which relies on attention mechanisms implemented through transformer and performer layers. We conducted extensive experiments on the high-resolution Developing Human Connectome Project (dHCP) and longitudinal Baby Connectome Project (BCP) datasets. Compared with CNN-based GANs, PTNet3D consistently shows superior synthesis accuracy and generalization on two independent, large-scale infant brain MRI datasets. Notably, we demonstrate that PTNet3D synthesized more realistic scans than CNN-based models when the input is from multi-age subjects. Potential applications of PTNet3D include synthesizing corrupted or missing images. By replacing corrupted scans with synthesized ones, we observed significant improvement in infant whole brain segmentation.
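The transformer layers mentioned above are built on scaled dot-product attention. The following is a generic textbook sketch of that primitive, not PTNet3D itself; shapes and names are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Textbook attention: each query attends to all keys, and the
    values are mixed by the resulting softmax weights. This is the
    primitive underlying transformer layers, sketched generically."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    # Numerically stable softmax over the key axis.
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 8))
v = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(q, k, v)
```

Since each output row is a convex combination of the value rows, outputs stay within the per-dimension range of `v`, a useful sanity check on any attention implementation.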
6. Zhan B, Zhou L, Li Z, Wu X, Pu Y, Zhou J, Wang Y, Shen D. D2FE-GAN: Decoupled dual feature extraction based GAN for MRI image synthesis. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109362]
7. Bi-MGAN: Bidirectional T1-to-T2 MRI images prediction using multi-generative multi-adversarial nets. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103994]
8. Malakar S, Roy SD, Das S, Sen S, Velásquez JD, Sarkar R. Computer based diagnosis of some chronic diseases: A medical journey of the last two decades. Arch Comput Methods Eng 2022; 29:5525-5567. [PMID: 35729963] [PMCID: PMC9199478] [DOI: 10.1007/s11831-022-09776-x]
Abstract
Disease prediction from diagnostic reports and pathological images using artificial intelligence (AI) and machine learning (ML) is one of the fastest-emerging applications of recent years. Researchers are striving to achieve near-perfect results using advanced hardware technologies in combination with AI- and ML-based approaches, and as a result a large number of such methods are found in the literature. A systematic survey describing the state-of-the-art disease prediction methods, specifically chronic disease prediction algorithms, provides a clear picture of the recent models developed in this field and helps researchers identify the remaining research gaps. To this end, this paper reviews the approaches in the literature designed for predicting chronic diseases such as breast cancer, lung cancer, leukemia, heart disease, diabetes, chronic kidney disease, and liver disease. The advantages and disadvantages of the various techniques are thoroughly explained, and a detailed performance comparison of different methods is presented. Finally, the survey concludes by highlighting some future research directions in this field that can be addressed through forthcoming research attempts.
Affiliation(s)
- Samir Malakar: Department of Computer Science, Asutosh College, Kolkata, India
- Soumya Deep Roy: Department of Metallurgical and Material Engineering, Jadavpur University, Kolkata, India
- Soham Das: Department of Metallurgical and Material Engineering, Jadavpur University, Kolkata, India
- Swaraj Sen: Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
- Juan D. Velásquez: Department of Industrial Engineering, University of Chile, Santiago, Chile; Instituto Sistemas Complejos de Ingeniería (ISCI), Santiago, Chile
- Ram Sarkar: Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
9. Nan Y, Del Ser J, Walsh S, Schönlieb C, Roberts M, Selby I, Howard K, Owen J, Neville J, Guiot J, Ernst B, Pastor A, Alberich-Bayarri A, Menzel MI, Walsh S, Vos W, Flerin N, Charbonnier JP, van Rikxoort E, Chatterjee A, Woodruff H, Lambin P, Cerdá-Alberich L, Martí-Bonmatí L, Herrera F, Yang G. Data harmonisation for information fusion in digital healthcare: A state-of-the-art systematic review, meta-analysis and future research directions. Inf Fusion 2022; 82:99-122. [PMID: 35664012] [PMCID: PMC8878813] [DOI: 10.1016/j.inffus.2022.01.001]
Abstract
Removing the bias and variance of multicentre data has always been a challenge in large-scale digital healthcare studies, which require the ability to integrate clinical features extracted from data acquired by different scanners and protocols to improve stability and robustness. Previous studies have described various computational approaches to fuse single-modality multicentre datasets. However, these surveys rarely focused on evaluation metrics and lacked a checklist for computational data harmonisation studies. In this systematic review, we summarise the computational data harmonisation approaches for multi-modality data in the digital healthcare field, including harmonisation strategies and evaluation metrics based on different theories. In addition, we propose a comprehensive checklist that summarises common practices for data harmonisation studies, to guide researchers in reporting their findings more effectively. Finally, we propose flowcharts presenting possible ways to select methodologies and metrics, and we survey the limitations of different methods to inform future research.
Affiliation(s)
- Yang Nan: National Heart and Lung Institute, Imperial College London, London, UK
- Javier Del Ser: Department of Communications Engineering, University of the Basque Country UPV/EHU, Bilbao 48013, Spain; TECNALIA, Basque Research and Technology Alliance (BRTA), Derio 48160, Spain
- Simon Walsh: National Heart and Lung Institute, Imperial College London, London, UK
- Carola Schönlieb: Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK
- Michael Roberts: Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK; Oncology R&D, AstraZeneca, Cambridge, UK
- Ian Selby: Department of Radiology, University of Cambridge, Cambridge, UK
- Kit Howard: Clinical Data Interchange Standards Consortium, Austin, TX, USA
- John Owen: Clinical Data Interchange Standards Consortium, Austin, TX, USA
- Jon Neville: Clinical Data Interchange Standards Consortium, Austin, TX, USA
- Julien Guiot: Respiratory Medicine Department, University Hospital of Liège (CHU Liège), Liège, Belgium; Department of Clinical Sciences, Pneumology-Allergology, University of Liège, Liège, Belgium
- Benoit Ernst: Respiratory Medicine Department, University Hospital of Liège (CHU Liège), Liège, Belgium; Department of Clinical Sciences, Pneumology-Allergology, University of Liège, Liège, Belgium
- Marion I. Menzel: Technische Hochschule Ingolstadt, Ingolstadt, Germany; GE Healthcare GmbH, Munich, Germany
- Sean Walsh: Radiomics (Oncoradiomics SA), Liège, Belgium
- Wim Vos: Radiomics (Oncoradiomics SA), Liège, Belgium
- Nina Flerin: Radiomics (Oncoradiomics SA), Liège, Belgium
- Avishek Chatterjee: Department of Precision Medicine, Maastricht University, Maastricht, The Netherlands
- Henry Woodruff: Department of Precision Medicine, Maastricht University, Maastricht, The Netherlands
- Philippe Lambin: Department of Precision Medicine, Maastricht University, Maastricht, The Netherlands
- Leonor Cerdá-Alberich: Medical Imaging Department, Hospital Universitari i Politècnic La Fe, Valencia, Spain
- Luis Martí-Bonmatí: Medical Imaging Department, Hospital Universitari i Politècnic La Fe, Valencia, Spain
- Francisco Herrera: Department of Computer Sciences and Artificial Intelligence, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain; Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Guang Yang: National Heart and Lung Institute, Imperial College London, London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London, UK; School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
10. Yurt M, Özbey M, Dar SUH, Tinaz B, Oguz KK, Çukur T. Progressively volumetrized deep generative models for data-efficient contextual learning of MR image recovery. Med Image Anal 2022; 78:102429. [DOI: 10.1016/j.media.2022.102429]
11. Zhuang J, Wang D. Geometrically matched multi-source microscopic image synthesis using bidirectional adversarial networks. Lect Notes Electr Eng 2022:79-88. [DOI: 10.1007/978-981-16-3880-0_9]
12. Zuo L, Dewey BE, Liu Y, He Y, Newsome SD, Mowry EM, Resnick SM, Prince JL, Carass A. Unsupervised MR harmonization by learning disentangled representations using information bottleneck theory. Neuroimage 2021; 243:118569. [PMID: 34506916] [PMCID: PMC10473284] [DOI: 10.1016/j.neuroimage.2021.118569]
Abstract
In magnetic resonance (MR) imaging, a lack of standardization in acquisition often causes pulse sequence-based contrast variations in MR images from site to site, which impedes consistent measurements in automatic analyses. In this paper, we propose an unsupervised MR image harmonization approach, CALAMITI (Contrast Anatomy Learning and Analysis for MR Intensity Translation and Integration), which aims to alleviate contrast variations in multi-site MR imaging. Designed using information bottleneck theory, CALAMITI learns a globally disentangled latent space containing both anatomical and contrast information, which permits harmonization. In contrast to supervised harmonization methods, our approach does not need a sample population to be imaged across sites. Unlike traditional unsupervised harmonization approaches which often suffer from geometry shifts, CALAMITI better preserves anatomy by design. The proposed method is also able to adapt to a new testing site with a straightforward fine-tuning process. Experiments on MR images acquired from ten sites show that CALAMITI achieves superior performance compared with other harmonization approaches.
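The harmonization idea, decoding one image's anatomy code with another site's contrast code, can be sketched abstractly. The "encoder" and "decoder" below are trivial placeholders standing in for CALAMITI's learned networks, and all shapes and names are our assumptions.

```python
import numpy as np

D_ANAT, D_CONTRAST = 8, 2  # assumed code sizes for the sketch

def encode(image_vec):
    # Placeholder "encoder": split a feature vector into an anatomy
    # code and a contrast code (a learned network in the real model).
    return image_vec[:D_ANAT], image_vec[D_ANAT:]

def decode(anatomy, contrast):
    # Placeholder "decoder": recombine the two codes into an "image".
    return np.concatenate([anatomy, contrast])

rng = np.random.default_rng(0)
src = rng.normal(size=D_ANAT + D_CONTRAST)  # image from site A
tgt = rng.normal(size=D_ANAT + D_CONTRAST)  # reference image from site B

anat_src, _ = encode(src)
_, contrast_tgt = encode(tgt)
# Harmonization: keep site-A anatomy, adopt site-B contrast.
harmonized = decode(anat_src, contrast_tgt)
```

The point of the sketch is the swap itself: with a globally disentangled latent space, changing sites means replacing only the contrast code while the anatomy code passes through untouched.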
Affiliation(s)
- Lianrui Zuo: Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD 20892, USA
- Blake E Dewey: Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Yihao Liu: Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Yufan He: Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Scott D Newsome: Department of Neurology, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Ellen M Mowry: Department of Neurology, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Susan M Resnick: Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD 20892, USA
- Jerry L Prince: Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Aaron Carass: Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
13. Kim S, Jang H, Hong S, Hong YS, Bae WC, Kim S, Hwang D. Fat-saturated image generation from multi-contrast MRIs using generative adversarial networks with Bloch equation-based autoencoder regularization. Med Image Anal 2021; 73:102198. [PMID: 34403931] [DOI: 10.1016/j.media.2021.102198]
Abstract
Obtaining multiple series of magnetic resonance (MR) images with different contrasts is useful for the accurate diagnosis of human spinal conditions. However, this can be time-consuming and a burden on both the patient and the hospital. We propose a Bloch equation-based autoencoder-regularization generative adversarial network (BlochGAN) to generate a fat-saturated T2-weighted (T2 FS) image from T1-weighted (T1-w) and T2-weighted (T2-w) images of the human spine. Our approach utilizes the relationship between the contrasts via the Bloch equation, since it is a fundamental principle of MR physics and serves as the physical basis of each contrast. BlochGAN generates the target-contrast images using autoencoder regularization based on the Bloch equation to identify the physical basis of the contrasts. BlochGAN consists of four sub-networks: an encoder, a decoder, a generator, and a discriminator. The encoder extracts features from the multi-contrast input images, and the generator creates target T2 FS images using the features extracted by the encoder. The discriminator assists network learning by providing an adversarial loss, and the decoder reconstructs the input multi-contrast images and regularizes the learning process by providing a reconstruction loss. The discriminator and the decoder are used only during training. Our results demonstrate that BlochGAN achieved quantitatively and qualitatively superior performance compared to conventional medical image synthesis methods in generating spine T2 FS images from T1-w and T2-w images.
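The Bloch-equation relationship between tissue parameters and image contrast can be illustrated with the classic spin-echo signal equation, S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2). This textbook formula is only an illustrative stand-in for the paper's Bloch equation-based regularizer; the tissue values below are rough white-matter-like assumptions.

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Classic spin-echo signal equation derived from the Bloch
    equations: proton density weighted by T1 recovery over TR and
    T2 decay over TE. A textbook relation, not the paper's model."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# One tissue (assumed rough white-matter values, times in ms)
# imaged under two protocols gives two different contrasts:
pd, t1, t2 = 0.7, 800.0, 80.0
s_t1w = spin_echo_signal(pd, t1, t2, tr=500.0, te=15.0)    # T1-weighted
s_t2w = spin_echo_signal(pd, t1, t2, tr=4000.0, te=100.0)  # T2-weighted
```

This is the sense in which the contrasts share a physical basis: the same (PD, T1, T2) triplet generates every weighting, which is what makes a physics-regularized mapping between contrasts plausible.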
Affiliation(s)
- Sewon Kim: School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Hanbyol Jang: School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Seokjun Hong: School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Yeong Sang Hong: Center for Clinical Imaging Data Science, Research Institute of Radiological Science, Department of Radiology, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea; Department of Radiology, Gangnam Severance Hospital, 211, Eonju-ro, Gangnam-gu, Seoul 06273, Republic of Korea
- Won C Bae: Department of Radiology, Veterans Affairs San Diego Healthcare System, 3350 La Jolla Village Drive, San Diego, CA 92161-0114, USA; Department of Radiology, University of California-San Diego, La Jolla, CA 92093-0997, USA
- Sungjun Kim: Center for Clinical Imaging Data Science, Research Institute of Radiological Science, Department of Radiology, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea; Department of Radiology, Gangnam Severance Hospital, 211, Eonju-ro, Gangnam-gu, Seoul 06273, Republic of Korea
- Dosik Hwang: School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea; Center for Clinical Imaging Data Science, Research Institute of Radiological Science, Department of Radiology, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
Collapse
14. Zhan B, Li D, Wu X, Zhou J, Wang Y. Multi-modal MRI image synthesis via GAN with multi-scale gate mergence. IEEE J Biomed Health Inform 2021; 26:17-26. [PMID: 34125692] [DOI: 10.1109/jbhi.2021.3088866]
Abstract
Multi-modal magnetic resonance imaging (MRI) plays a critical role in clinical diagnosis and treatment nowadays. Each modality of MRI presents its own specific anatomical features, which serve as complementary information to other modalities and can provide rich diagnostic information. However, owing to time and cost limitations, some image sequences of patients may be lost or corrupted, posing an obstacle to accurate diagnosis. Although current multi-modal image synthesis approaches are able to alleviate these issues to some extent, they still fall short of fusing modalities effectively. In light of this, we propose a multi-scale gate mergence based generative adversarial network model, namely MGM-GAN, to synthesize one modality of MRI from others. Notably, we use multiple down-sampling branches corresponding to the input modalities to specifically extract their unique features. In contrast to the generic multi-modal fusion approach of averaging or maximizing operations, we introduce a gate mergence (GM) mechanism to automatically learn the weights of different modalities across locations, enhancing the task-related information while suppressing the irrelevant information. As such, the feature maps of all the input modalities at each down-sampling level, i.e., at multiple scales, are integrated via the GM module. In addition, the adversarial loss, the pixel-wise loss, and the gradient difference loss (GDL) are applied to train the network to produce the desired modality accurately. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art multi-modal image synthesis methods.
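The gating idea described here (learned, location-dependent modality weights instead of averaging or maximizing) can be sketched in a few lines of NumPy. This is a simplified stand-in for the paper's GM module, assuming a 1x1 projection scores each modality at each spatial location; the parameter shapes are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_merge(feature_maps, w, b):
    """Fuse per-modality feature maps with learned spatial gates.

    feature_maps: (M, H, W, C) stack, one map per input modality.
    w (C, 1), b (1,): a 1x1 projection that scores each modality at
    each location; scores are squashed to [0, 1] gates, and the
    gated maps are summed rather than averaged or maxed."""
    scores = feature_maps @ w + b              # (M, H, W, 1) per-location score
    gates = sigmoid(scores)                    # learnable, location-dependent weights
    return (gates * feature_maps).sum(axis=0)  # (H, W, C) fused map

rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 8, 8, 16))     # 3 input modalities
w = rng.standard_normal((16, 1)) * 0.1
b = np.zeros(1)
fused = gate_merge(feats, w, b)
print(fused.shape)  # (8, 8, 16)
```

With zero parameters every gate is 0.5, so the fusion degenerates to a scaled sum; training moves the gates toward task-relevant modalities per location.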
15. Fei Y, Zhan B, Hong M, Wu X, Zhou J, Wang Y. Deep learning-based multi-modal computing with feature disentanglement for MRI image synthesis. Med Phys 2021; 48:3778-3789. [PMID: 33959965] [DOI: 10.1002/mp.14929]
Abstract
PURPOSE Different magnetic resonance imaging (MRI) modalities of the same anatomical structure are required to present different pathological information at the physical level for diagnostic needs. However, it is often difficult to obtain full-sequence MRI images of patients owing to limitations such as time consumption and high cost. The purpose of this work is to develop an algorithm that predicts target MRI sequences with high accuracy and provides more information for clinical diagnosis. METHODS We propose a deep learning-based multi-modal computing model for MRI synthesis with a feature disentanglement strategy. To take full advantage of the complementary information provided by different modalities, multi-modal MRI sequences are utilized as input. Notably, the proposed approach decomposes each input modality into a modality-invariant space with shared information and a modality-specific space with specific information, so that features are extracted separately to effectively process the input data. Subsequently, both are fused through the adaptive instance normalization (AdaIN) layer in the decoder. In addition, to address the lack of specific information of the target modality in the test phase, a local adaptive fusion (LAF) module is adopted to generate a modality-like pseudo-target with specific information similar to the ground truth. RESULTS To evaluate the synthesis performance, we verify our method on the BRATS2015 dataset of 164 subjects. The experimental results demonstrate that our approach significantly outperforms the benchmark method and other state-of-the-art medical image synthesis methods in both quantitative and qualitative measures. Compared with the pix2pixGANs method, the PSNR improves from 23.68 to 24.8. Moreover, the ablation studies have also verified the effectiveness of important components of the proposed method.
CONCLUSION The proposed method could be effective in prediction of target MRI sequences, and useful for clinical diagnosis and treatment.
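The AdaIN fusion step this abstract relies on is a standard operation: re-normalize modality-invariant content features to modality-specific statistics. A minimal NumPy sketch follows (the feature shapes and statistics are illustrative, not the paper's).

```python
import numpy as np

def adain(content, style_mean, style_std, eps=1e-5):
    """Adaptive instance normalization: strip the content features'
    own per-channel statistics, then impose the target (style) ones.

    content: (H, W, C) feature map; style_mean/style_std: (C,)."""
    mu = content.mean(axis=(0, 1))
    sigma = content.std(axis=(0, 1))
    normalized = (content - mu) / (sigma + eps)
    return normalized * style_std + style_mean

rng = np.random.default_rng(1)
shared = rng.standard_normal((16, 16, 8)) * 3.0 + 5.0  # modality-invariant features
target_mu, target_sigma = np.zeros(8), np.ones(8)      # modality-specific statistics
out = adain(shared, target_mu, target_sigma)
print(out.mean(), out.std())  # approximately 0 and 1
```

The decoder can thus render the same shared content in the "style" (intensity statistics) of any target modality by swapping the statistics it injects.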
Affiliation(s)
- Yuchen Fei: School of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China
- Bo Zhan: School of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China
- Mei Hong: School of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China
- Xi Wu: School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, China
- Jiliu Zhou: School of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China; School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, China
- Yan Wang: School of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China
16. Wang C, Yang G, Papanastasiou G, Tsaftaris SA, Newby DE, Gray C, Macnaught G, MacGillivray TJ. DiCyc: GAN-based deformation invariant cross-domain information fusion for medical image synthesis. Inf Fusion 2021; 67:147-160. [PMID: 33658909] [PMCID: PMC7763495] [DOI: 10.1016/j.inffus.2020.10.015]
Abstract
Cycle-consistent generative adversarial networks (CycleGAN) have been widely used for cross-domain medical image synthesis tasks, particularly because of their ability to deal with unpaired data. However, most CycleGAN-based synthesis methods cannot achieve good alignment between the synthesized images and data from the source domain, even with additional image alignment losses. This is because the CycleGAN generator network can encode the relative deformations and noises associated with different domains. This can be detrimental for downstream applications that rely on the synthesized images, such as generating pseudo-CT for PET-MR attenuation correction. In this paper, we present a deformation-invariant cycle-consistency model that can filter out these domain-specific deformations. The deformation is globally parameterized by thin-plate splines (TPS) and locally learned by modified deformable convolutional layers. Robustness to domain-specific deformations has been evaluated through experiments on multi-sequence brain MR data and multi-modality abdominal CT and MR data. Experimental results demonstrated that our method can achieve better alignment between the source and target data while maintaining superior image quality compared to several state-of-the-art CycleGAN-based methods.
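The global TPS parameterization mentioned here is a classical construction: an affine part plus radial-basis terms with kernel U(r) = r^2 log(r^2). A minimal NumPy sketch of evaluating such a mapping follows; the control points and coefficients are illustrative, and the paper additionally learns local deformations with deformable convolutions, which this sketch omits.

```python
import numpy as np

def tps_kernel(r2):
    """Thin-plate-spline radial basis U(r) = r^2 log(r^2), with U(0) = 0."""
    out = np.zeros_like(r2, dtype=float)
    mask = r2 > 0
    out[mask] = r2[mask] * np.log(r2[mask])
    return out

def tps_warp(points, ctrl, coeffs, affine):
    """Evaluate a 2-D TPS mapping at query points.

    points: (N, 2) queries; ctrl: (K, 2) control points;
    coeffs: (K, 2) kernel weights; affine: (3, 2) affine part
    applied to the row [1, x, y]."""
    d2 = ((points[:, None, :] - ctrl[None, :, :]) ** 2).sum(-1)  # (N, K) squared distances
    ones = np.ones((points.shape[0], 1))
    return np.hstack([ones, points]) @ affine + tps_kernel(d2) @ coeffs

# Identity affine part plus zero kernel weights leaves points unchanged.
identity = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pts = np.array([[0.5, 0.5], [1.0, 2.0]])
ctrl = np.array([[0.0, 0.0], [1.0, 1.0]])
print(tps_warp(pts, ctrl, np.zeros((2, 2)), identity))
```

Nonzero kernel coefficients bend the grid smoothly around the control points, which is what lets the model absorb domain-specific deformations outside the synthesis pathway.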
Affiliation(s)
- Chengjia Wang: BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK (corresponding author)
- Guang Yang: National Heart and Lung Institute, Imperial College London, London, UK
- Sotirios A. Tsaftaris: Institute for Digital Communications, School of Engineering, University of Edinburgh, Edinburgh, UK
- David E. Newby: BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Calum Gray: Edinburgh Imaging Facility QMRI, University of Edinburgh, Edinburgh, UK
- Gillian Macnaught: Edinburgh Imaging Facility QMRI, University of Edinburgh, Edinburgh, UK
17. Yurt M, Dar SU, Erdem A, Erdem E, Oguz KK, Çukur T. mustGAN: Multi-stream generative adversarial networks for MR image synthesis. Med Image Anal 2021; 70:101944. [PMID: 33690024] [DOI: 10.1016/j.media.2020.101944]
Abstract
Multi-contrast MRI protocols increase the level of morphological information available for diagnosis. Yet, the number and quality of contrasts are limited in practice by various factors, including scan time and patient motion. Synthesis of missing or corrupted contrasts from other high-quality ones can alleviate this limitation. When a single target contrast is of interest, common approaches for multi-contrast MRI involve either one-to-one or many-to-one synthesis methods depending on their input. One-to-one methods take as input a single source contrast, and they learn a latent representation sensitive to unique features of the source. Meanwhile, many-to-one methods receive multiple distinct sources, and they learn a shared latent representation more sensitive to common features across sources. For enhanced image synthesis, we propose a multi-stream approach that aggregates information across multiple source images via a mixture of multiple one-to-one streams and a joint many-to-one stream. The complementary feature maps generated in the one-to-one streams and the shared feature maps generated in the many-to-one stream are combined with a fusion block. The location of the fusion block is adaptively modified to maximize task-specific performance. Quantitative and radiological assessments on T1-, T2-, PD-weighted, and FLAIR images clearly demonstrate the superior performance of the proposed method compared to previous state-of-the-art one-to-one and many-to-one methods.
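The fusion block described above combines complementary per-stream maps with the shared many-to-one map. One simple way to realize such a block, sketched below under the assumption of concatenation followed by a learned 1x1 projection (the paper's exact block and its adaptive placement are more involved), is:

```python
import numpy as np

def fusion_block(one_to_one_feats, shared_feat, w):
    """Fuse complementary per-stream maps with the shared map.

    one_to_one_feats: list of (H, W, C) maps, one per source contrast;
    shared_feat: (H, W, C) map from the many-to-one stream;
    w: (C * (n_streams + 1), C) learned 1x1 projection."""
    stacked = np.concatenate(one_to_one_feats + [shared_feat], axis=-1)
    return stacked @ w  # (H, W, C) fused representation

rng = np.random.default_rng(2)
streams = [rng.standard_normal((8, 8, 4)) for _ in range(3)]  # 3 one-to-one streams
shared = rng.standard_normal((8, 8, 4))                       # many-to-one stream
w = rng.standard_normal((16, 4)) * 0.1
fused = fusion_block(streams, shared, w)
print(fused.shape)  # (8, 8, 4)
```

Placing such a block earlier or later in the decoder trades off unique-feature detail against shared-feature robustness, which is the dial the paper tunes adaptively.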
Affiliation(s)
- Mahmut Yurt: Department of Electrical and Electronics Engineering, Bilkent University, Ankara, TR-06800, Turkey; National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey
- Salman Uh Dar: Department of Electrical and Electronics Engineering, Bilkent University, Ankara, TR-06800, Turkey; National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey
- Aykut Erdem: Department of Computer Engineering, Koç University, İstanbul, TR-34450, Turkey
- Erkut Erdem: Department of Computer Engineering, Hacettepe University, Ankara, TR-06800, Turkey
- Kader K Oguz: National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey; Department of Radiology, Hacettepe University, Ankara, TR-06100, Turkey
- Tolga Çukur: Department of Electrical and Electronics Engineering, Bilkent University, Ankara, TR-06800, Turkey; National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey; Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent, Ankara, TR-06800, Turkey
18. Tanno R, Worrall DE, Kaden E, Ghosh A, Grussu F, Bizzi A, Sotiropoulos SN, Criminisi A, Alexander DC. Uncertainty modelling in deep learning for safer neuroimage enhancement: Demonstration in diffusion MRI. Neuroimage 2021; 225:117366. [DOI: 10.1016/j.neuroimage.2020.117366]
19. Dai X, Lei Y, Fu Y, Curran WJ, Liu T, Mao H, Yang X. Multimodal MRI synthesis using unified generative adversarial networks. Med Phys 2020; 47:6343-6354. [PMID: 33053202] [PMCID: PMC7796974] [DOI: 10.1002/mp.14539]
Abstract
PURPOSE Complementary information obtained from multiple tissue contrasts helps physicians assess, diagnose, and plan treatment of a variety of diseases. However, acquiring multiple contrast magnetic resonance images (MRI) for every patient using multiple pulse sequences is time-consuming and expensive; medical image synthesis has been demonstrated as an effective alternative. The purpose of this study is to develop a unified framework for multimodal MR image synthesis. METHODS A unified generative adversarial network consisting of only a single generator and a single discriminator was developed to learn the mappings among images of four different modalities. The generator took an image and its modality label as inputs and learned to synthesize the image in the target modality, while the discriminator was trained to distinguish between real and synthesized images and classify them into their corresponding modalities. The network was trained and tested using multimodal brain MRI consisting of four different contrasts: T1-weighted (T1), T1-weighted and contrast-enhanced (T1c), T2-weighted (T2), and fluid-attenuated inversion recovery (Flair). Quantitative assessments of our proposed method were made by computing normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), structural similarity index measurement (SSIM), visual information fidelity (VIF), and naturalness image quality evaluator (NIQE). RESULTS The proposed model was trained and tested on a cohort of 274 glioma patients with well-aligned multiple types of MRI scans. After the model was trained, tests were conducted using each of T1, T1c, T2, and Flair as a single input modality to generate the respective remaining modalities. Our proposed method shows high accuracy and robustness for image synthesis with any MRI modality available in the database as input.
For example, with T1 as the input modality, the NMAEs for the generated T1c, T2, and Flair are 0.034 ± 0.005, 0.041 ± 0.006, and 0.041 ± 0.006, respectively; the PSNRs are 32.353 ± 2.525 dB, 30.016 ± 2.577 dB, and 29.091 ± 2.795 dB; the SSIMs are 0.974 ± 0.059, 0.969 ± 0.059, and 0.959 ± 0.059; the VIFs are 0.750 ± 0.087, 0.706 ± 0.097, and 0.654 ± 0.062; and the NIQEs are 1.396 ± 0.401, 1.511 ± 0.460, and 1.259 ± 0.358. CONCLUSIONS We proposed a novel multimodal MR image synthesis method based on a unified generative adversarial network. The network takes an image and its modality label as inputs and synthesizes multimodal images in a single forward pass. The results demonstrate that the proposed method is able to accurately synthesize multimodal MR images from a single MR image.
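Two of the metrics quoted above, NMAE and PSNR, are simple to state exactly. A minimal NumPy sketch of how such numbers are computed (normalization conventions vary between papers; this version normalizes NMAE by the reference's intensity range and uses the reference's maximum as the PSNR peak, which are assumptions):

```python
import numpy as np

def nmae(ref, syn):
    """Normalized mean absolute error: MAE divided by the reference's range."""
    return np.abs(ref - syn).mean() / (ref.max() - ref.min())

def psnr(ref, syn, peak=None):
    """Peak signal-to-noise ratio in dB."""
    peak = ref.max() if peak is None else peak
    mse = ((ref - syn) ** 2).mean()
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
ref = rng.random((64, 64))                       # stand-in reference image
syn = ref + rng.normal(0.0, 0.01, ref.shape)     # a close synthetic image
print(f"NMAE={nmae(ref, syn):.4f}  PSNR={psnr(ref, syn):.1f} dB")
```

Lower NMAE and higher PSNR both indicate a synthetic image closer to the acquired reference, which is how the per-modality tables above should be read.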
Affiliation(s)
- Xianjin Dai: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Yabo Fu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Walter J. Curran: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Hui Mao: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
20. Kim S, Jang H, Jang J, Lee YH, Hwang D. Deep-learned short tau inversion recovery imaging using multi-contrast MR images. Magn Reson Med 2020; 84:2994-3008. [DOI: 10.1002/mrm.28327]
Affiliation(s)
- Sewon Kim: School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
- Hanbyol Jang: School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
- Jinseong Jang: School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
- Young Han Lee: Department of Radiology and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea
- Dosik Hwang: School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
21. Deep learning segmentation of orbital fat to calibrate conventional MRI for longitudinal studies. Neuroimage 2020; 208:116442. [DOI: 10.1016/j.neuroimage.2019.116442]
22. Booth TC, Williams M, Luis A, Cardoso J, Ashkan K, Shuaib H. Machine learning and glioma imaging biomarkers. Clin Radiol 2020; 75:20-32. [PMID: 31371027] [PMCID: PMC6927796] [DOI: 10.1016/j.crad.2019.07.001]
Abstract
AIM To review how machine learning (ML) is applied to imaging biomarkers in neuro-oncology, in particular for diagnosis, prognosis, and treatment response monitoring. MATERIALS AND METHODS The PubMed and MEDLINE databases were searched for articles published before September 2018 using relevant search terms. The search strategy focused on articles applying ML to high-grade glioma biomarkers for treatment response monitoring, prognosis, and prediction. RESULTS Magnetic resonance imaging (MRI) is typically used throughout the patient pathway because routine structural imaging provides detailed anatomical and pathological information and advanced techniques provide additional physiological detail. Using carefully chosen image features, ML is frequently used to allow accurate classification in a variety of scenarios. Rather than being chosen by human selection, ML also enables image features to be identified by an algorithm. Much research is applied to determining molecular profiles, histological tumour grade, and prognosis using MRI images acquired at the time that patients first present with a brain tumour. Differentiating a treatment response from a post-treatment-related effect using imaging is clinically important and also an area of active study (described here in one of two Special Issue publications dedicated to the application of ML in glioma imaging). CONCLUSION Although pioneering, most of the evidence is of a low level, having been obtained retrospectively and in single centres. Studies applying ML to build neuro-oncology monitoring biomarker models have yet to show an overall advantage over those using traditional statistical methods. Development and validation of ML models applied to neuro-oncology require large, well-annotated datasets, and therefore multidisciplinary and multi-centre collaborations are necessary.
Affiliation(s)
- T C Booth: School of Biomedical Engineering & Imaging Sciences, King's College London, St Thomas' Hospital, London SE1 7EH, UK; Department of Neuroradiology, King's College Hospital NHS Foundation Trust, London SE5 9RS, UK
- M Williams: Department of Neuro-oncology, Imperial College Healthcare NHS Trust, Fulham Palace Rd, London W6 8RF, UK
- A Luis: School of Biomedical Engineering & Imaging Sciences, King's College London, St Thomas' Hospital, London SE1 7EH, UK; Department of Radiology, St George's University Hospitals NHS Foundation Trust, Blackshaw Road, London SW17 0QT, UK
- J Cardoso: School of Biomedical Engineering & Imaging Sciences, King's College London, St Thomas' Hospital, London SE1 7EH, UK
- K Ashkan: Department of Neurosurgery, King's College Hospital NHS Foundation Trust, London SE5 9RS, UK
- H Shuaib: Department of Medical Physics, Guy's & St. Thomas' NHS Foundation Trust, London SE1 7EH, UK; Institute of Psychiatry, Psychology & Neuroscience, King's College London, London SE5 8AF, UK
23. Dewey BE, Zhao C, Reinhold JC, Carass A, Fitzgerald KC, Sotirchos ES, Saidha S, Oh J, Pham DL, Calabresi PA, van Zijl PCM, Prince JL. DeepHarmony: A deep learning approach to contrast harmonization across scanner changes. Magn Reson Imaging 2019; 64:160-170. [PMID: 31301354] [PMCID: PMC6874910] [DOI: 10.1016/j.mri.2019.05.041]
Abstract
Magnetic resonance imaging (MRI) is a flexible medical imaging modality that often lacks reproducibility between protocols and scanners. It has been shown that even when care is taken to standardize acquisitions, any changes in hardware, software, or protocol design can lead to differences in quantitative results. This greatly impacts the quantitative utility of MRI in multi-site or long-term studies, where consistency is often valued over image quality. We propose a method of contrast harmonization, called DeepHarmony, which uses a U-Net-based deep learning architecture to produce images with consistent contrast. To provide training data, a small overlap cohort (n = 8) was scanned using two different protocols. Images harmonized with DeepHarmony showed significant improvement in consistency of volume quantification between scanning protocols. A longitudinal MRI dataset of patients with multiple sclerosis was also used to evaluate the effect of a protocol change on atrophy calculations in a clinical research setting. The results show that atrophy calculations were substantially and significantly affected by the protocol change, whereas this effect was substantially reduced when DeepHarmony was used. This establishes that DeepHarmony can be used with an overlap cohort to reduce inconsistencies in segmentation caused by changes in scanner protocol, allowing for modernization of hardware and protocol design in long-term studies without invalidating previously acquired data.
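The evaluation criterion implied here, consistency of volume quantification across protocols, is often summarized as a percent volume difference for the same subject scanned under both protocols. A minimal sketch of such a check (the volumes below are hypothetical, and the paper's exact statistics differ):

```python
def percent_volume_difference(vol_a, vol_b):
    """Absolute percent difference between two measurements of the same
    structure under two protocols; harmonization should shrink it."""
    return 200.0 * abs(vol_a - vol_b) / (vol_a + vol_b)

# Hypothetical whole-brain volumes (mL) for one subject scanned twice.
before = percent_volume_difference(1180.0, 1150.0)  # raw protocol A vs protocol B
after = percent_volume_difference(1180.0, 1176.0)   # protocol B harmonized toward A
print(f"before harmonization: {before:.2f}%  after: {after:.2f}%")
```

Averaging this quantity over an overlap cohort, before and after harmonization, gives a direct measure of how much protocol-induced inconsistency the method removes.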
Affiliation(s)
- Blake E Dewey: Department of Electrical and Computer Engineering, The Johns Hopkins University, 105 Barton Hall, 3400 N. Charles St., Baltimore, MD 21218, USA; Kirby Center for Functional Brain Imaging Research, Kennedy Krieger Institute, Baltimore, MD, USA
- Can Zhao: Department of Electrical and Computer Engineering, The Johns Hopkins University, 105 Barton Hall, 3400 N. Charles St., Baltimore, MD 21218, USA
- Jacob C Reinhold: Department of Electrical and Computer Engineering, The Johns Hopkins University, 105 Barton Hall, 3400 N. Charles St., Baltimore, MD 21218, USA
- Aaron Carass: Department of Electrical and Computer Engineering, The Johns Hopkins University, 105 Barton Hall, 3400 N. Charles St., Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA
- Kathryn C Fitzgerald: Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Elias S Sotirchos: Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Shiv Saidha: Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jiwon Oh: Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Dzung L Pham: Department of Electrical and Computer Engineering, The Johns Hopkins University, 105 Barton Hall, 3400 N. Charles St., Baltimore, MD 21218, USA; Department of Radiology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA; Center for Neuroscience and Regenerative Medicine, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
- Peter A Calabresi: Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Peter C M van Zijl: Kirby Center for Functional Brain Imaging Research, Kennedy Krieger Institute, Baltimore, MD, USA; Department of Radiology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jerry L Prince: Department of Electrical and Computer Engineering, The Johns Hopkins University, 105 Barton Hall, 3400 N. Charles St., Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA; Department of Radiology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
24
|
Son SJ, Park BY, Byeon K, Park H. Synthesizing diffusion tensor imaging from functional MRI using fully convolutional networks. Comput Biol Med 2019; 115:103528. [PMID: 31743880 DOI: 10.1016/j.compbiomed.2019.103528] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2019] [Revised: 10/15/2019] [Accepted: 10/28/2019] [Indexed: 12/27/2022]
Abstract
PURPOSE Medical image synthesis can simulate a target modality of interest based on existing modalities and has the potential to save scanning time while contributing to efficient data collection. This study proposed a three-dimensional (3D) deep learning architecture based on a fully convolutional network (FCN) to synthesize diffusion-tensor imaging (DTI) from resting-state functional magnetic resonance imaging (fMRI). METHODS fMRI signals derived from white matter (WM) exist and can be used for assessing WM alterations. We constructed an initial functional correlation tensor image using the correlation patterns of adjacent fMRI voxels as one input to the FCN. We considered T1-weighted images as an additional input to provide an algorithm with the structural information needed to synthesize DTI. Our architecture was trained and tested using a large-scale open database dataset (training n = 648; testing n = 293). RESULTS The average correlation value between synthesized and actual diffusion tensors for 38 WM regions was 0.808, which significantly improves upon an existing study (r = 0.480). We also validated our approach using two open databases. Our proposed method showed a higher correlation with the actual diffusion tensor than the conventional machine-learning method for many WM regions. CONCLUSIONS Our method synthesized DTI images from fMRI images using a 3D FCN architecture. We hope to expand our method of synthesizing various other imaging modalities from a single image source.
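The "initial functional correlation tensor" built from adjacent fMRI voxels can be illustrated with a simple variant: at each interior voxel, accumulate r^2 weighted outer products of the unit directions to the six face neighbors, where r is the Pearson correlation of the time series. This sketch is an assumption-laden stand-in, not the paper's exact construction.

```python
import numpy as np

def correlation_tensor(ts):
    """Build a 3x3 correlation tensor at each interior voxel of a 4D
    fMRI array ts with shape (X, Y, Z, T)."""
    x, y, z, t = ts.shape
    dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    tensors = np.zeros((x, y, z, 3, 3))
    # z-score each voxel's time series so dot products become correlations
    zts = (ts - ts.mean(-1, keepdims=True)) / (ts.std(-1, keepdims=True) + 1e-8)
    for i in range(1, x - 1):
        for j in range(1, y - 1):
            for k in range(1, z - 1):
                for d in dirs:
                    r = (zts[i, j, k] * zts[i + d[0], j + d[1], k + d[2]]).mean()
                    n = np.array(d, dtype=float)
                    tensors[i, j, k] += r ** 2 * np.outer(n, n)  # symmetric by construction
    return tensors

rng = np.random.default_rng(4)
ts = rng.standard_normal((4, 4, 4, 50))  # tiny synthetic fMRI volume
tt = correlation_tensor(ts)
print(tt.shape)  # (4, 4, 4, 3, 3)
```

The resulting symmetric tensors mimic the directional structure of diffusion tensors, which is what makes them a plausible input for fMRI-to-DTI synthesis.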
Affiliation(s)
- Seong-Jin Son: Department of Electronic and Computer Engineering, Sungkyunkwan University, South Korea; Center for Neuroscience Imaging Research (CNIR), Institute for Basic Science, South Korea; NEUROPHET Inc., South Korea
- Bo-Yong Park: McConnell Brain Imaging Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada
- Kyoungseob Byeon: Department of Electronic and Computer Engineering, Sungkyunkwan University, South Korea; Center for Neuroscience Imaging Research (CNIR), Institute for Basic Science, South Korea
- Hyunjin Park: Center for Neuroscience Imaging Research (CNIR), Institute for Basic Science, South Korea; School of Electronic Electrical Engineering, Sungkyunkwan University, South Korea
25. Dar SU, Yurt M, Karacan L, Erdem A, Erdem E, Cukur T. Image synthesis in multi-contrast MRI with conditional generative adversarial networks. IEEE Trans Med Imaging 2019; 38:2375-2388. [PMID: 30835216] [DOI: 10.1109/tmi.2019.2901750]
Abstract
Acquiring images of the same anatomy with multiple different contrasts increases the diversity of diagnostic information available in an MR exam. Yet, scan time limitations may prohibit the acquisition of certain contrasts, and some contrasts may be corrupted by noise and artifacts. In such cases, the ability to synthesize unacquired or corrupted contrasts can improve diagnostic utility. For multi-contrast synthesis, the current methods learn a nonlinear intensity transformation between the source and target images, either via nonlinear regression or deterministic neural networks. These methods can, in turn, suffer from the loss of structural details in synthesized images. Here, we propose a new approach for multi-contrast MRI synthesis based on conditional generative adversarial networks. The proposed approach preserves intermediate-to-high-frequency details via an adversarial loss, and it offers enhanced synthesis performance via pixel-wise and perceptual losses for registered multi-contrast images and a cycle-consistency loss for unregistered images. Information from neighboring cross-sections is utilized to further improve synthesis quality. Demonstrations on T1- and T2-weighted images from healthy subjects and patients clearly indicate the superior performance of the proposed approach compared to the previous state-of-the-art methods. Our synthesis approach can help improve the quality and versatility of multi-contrast MRI exams without the need for prolonged or repeated examinations.
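The loss combination described here (adversarial plus pixel-wise terms for registered pairs, cycle-consistency for unregistered ones) can be written down concretely. A minimal NumPy sketch under common conventions (non-saturating adversarial term, L1 pixel terms, a pix2pix-style weight of 100); the exact weights and perceptual term in the paper differ.

```python
import numpy as np

def generator_loss(d_fake, fake, real, lam_pix=100.0):
    """Composite generator objective for registered image pairs:
    non-saturating adversarial term plus a weighted pixel-wise L1 term.

    d_fake: discriminator outputs in (0, 1) for synthesized images."""
    adv = -np.log(d_fake + 1e-8).mean()  # reward fooling the discriminator
    pix = np.abs(fake - real).mean()     # stay close to the target contrast
    return adv + lam_pix * pix

def cycle_loss(x, x_reconstructed):
    """Cycle-consistency L1 term for unregistered images:
    source -> target -> source should return the input."""
    return np.abs(x - x_reconstructed).mean()

rng = np.random.default_rng(5)
real = rng.random((32, 32))
fake = real + rng.normal(0.0, 0.05, real.shape)  # imperfect synthetic image
d_fake = np.array([0.4])                          # discriminator is half-fooled
print(generator_loss(d_fake, fake, real))
```

The adversarial term pushes synthesized images toward the real-image manifold (sharp detail), while the L1 terms anchor them to the correct anatomy; the large pixel weight reflects how these are usually balanced.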
26. Zöllei L, Jaimes C, Saliba E, Grant PE, Yendiki A. TRActs constrained by UnderLying INfant anatomy (TRACULInA): An automated probabilistic tractography tool with anatomical priors for use in the newborn brain. Neuroimage 2019; 199:1-17. [PMID: 31132451] [PMCID: PMC6688923] [DOI: 10.1016/j.neuroimage.2019.05.051]
Abstract
The ongoing myelination of white-matter fiber bundles plays a significant role in brain development. However, reliable and consistent identification of these bundles from infant brain MRIs is often challenging due to inherently low diffusion anisotropy, as well as motion and other artifacts. In this paper we introduce a new tool for automated probabilistic tractography specifically designed for newborn infants. Our tool incorporates prior information about the anatomical neighborhood of white-matter pathways from a training data set. In our experiments, we evaluate this tool on data from both full-term and prematurely born infants and demonstrate that it can reconstruct known white-matter tracts in both groups robustly, even in the presence of differences between the training set and study subjects. Additionally, we evaluate it on a publicly available large data set of healthy term infants (UNC Early Brain Development Program). This paves the way for performing a host of sophisticated analyses in newborns that we have previously implemented for the adult brain, such as pointwise analysis along tracts and longitudinal analysis, in both health and disease.
Affiliation(s)
- Lilla Zöllei
- Massachusetts General Hospital, Boston, United States.
27
Jog A, Hoopes A, Greve DN, Van Leemput K, Fischl B. PSACNN: Pulse sequence adaptive fast whole brain segmentation. Neuroimage 2019; 199:553-569. [PMID: 31129303 PMCID: PMC6688920 DOI: 10.1016/j.neuroimage.2019.05.033] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2019] [Revised: 05/09/2019] [Accepted: 05/12/2019] [Indexed: 01/07/2023] Open
Abstract
With the advent of convolutional neural networks (CNNs), supervised learning methods are increasingly being used for whole brain segmentation. However, the large, manually annotated training dataset of labeled brain images required to train such supervised methods is frequently difficult to obtain or create. In addition, existing training datasets are generally acquired with a homogeneous magnetic resonance imaging (MRI) acquisition protocol. CNNs trained on such datasets are unable to generalize to test data with different acquisition protocols. Modern neuroimaging studies and clinical trials are necessarily multi-center initiatives with a wide variety of acquisition protocols. Despite stringent protocol harmonization practices, it is very difficult to standardize the gamut of MR imaging parameters across scanners, field strengths, receive coils, etc., that affect image contrast. In this paper we propose a CNN-based segmentation algorithm that, in addition to being highly accurate and fast, is also resilient to variation in the input acquisition. Our approach relies on building approximate forward models of pulse sequences that produce a typical test image. For a given pulse sequence, we use its forward model to generate plausible, synthetic training examples that appear as if they were acquired in a scanner with that pulse sequence. Sampling over a wide variety of pulse sequences results in a wide variety of augmented training examples that help build an image-contrast-invariant model. Our method trains a single CNN that can segment input MRI images with acquisition parameters as disparate as T1-weighted and T2-weighted contrasts using only T1-weighted training data. The segmentations generated are highly accurate, with state-of-the-art results (overall Dice overlap = 0.94), a fast run time (≈45 s), and consistency across a wide range of acquisition protocols.
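The augmentation idea above, synthesizing plausible training images from approximate pulse-sequence forward models, can be sketched with the textbook spin-echo signal equation. The function names and parameter ranges below are illustrative assumptions; the paper's actual forward models are more elaborate than this.

```python
import numpy as np

def spin_echo_forward(pd, t1, t2, tr, te):
    """Textbook spin-echo forward model: synthesizes an image with given
    (TR, TE), in ms, from per-voxel proton density, T1, and T2 maps."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

def augment_contrasts(pd, t1, t2, n=4, rng=None):
    """Sample (TR, TE) pairs to produce training images spanning roughly
    T1-weighted (short TR/TE) to T2-weighted (long TR/TE) contrasts.
    Ranges are illustrative, not from the paper."""
    rng = np.random.default_rng(rng)
    trs = rng.uniform(400.0, 6000.0, size=n)
    tes = rng.uniform(10.0, 120.0, size=n)
    return [spin_echo_forward(pd, t1, t2, tr, te) for tr, te in zip(trs, tes)]

# Two toy tissues (white matter, CSF) flip relative brightness between contrasts:
pd = np.array([0.7, 1.0]); t1 = np.array([800.0, 4000.0]); t2 = np.array([80.0, 2000.0])
t1w = spin_echo_forward(pd, t1, t2, tr=500.0, te=10.0)    # WM brighter than CSF
t2w = spin_echo_forward(pd, t1, t2, tr=6000.0, te=100.0)  # CSF brighter than WM
```

A segmentation network trained on many such samples sees widely varying contrasts for the same labels, which is the contrast-invariance mechanism the abstract describes.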
Affiliation(s)
- Amod Jog
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, 02129, United States; Department of Radiology, Harvard Medical School, United States.
- Andrew Hoopes
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, 02129, United States
- Douglas N Greve
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, 02129, United States; Department of Radiology, Harvard Medical School, United States
- Koen Van Leemput
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, 02129, United States; Department of Health Technology, Technical University of Denmark, Denmark
- Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, 02129, United States; Department of Radiology, Harvard Medical School, United States; Division of Health Sciences and Technology and Engineering and Computer Science MIT, Cambridge, MA, United States
28
Hazra A, Reich BJ, Reich DS, Shinohara RT, Staicu AM. A Spatio-Temporal Model for Longitudinal Image-on-Image Regression. STATISTICS IN BIOSCIENCES 2019; 11:22-46. [PMID: 31156722 PMCID: PMC6537615] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Neurologists and radiologists often use magnetic resonance imaging (MRI) in the management of subjects with multiple sclerosis (MS) because it is sensitive to inflammatory and demyelinative changes in the white matter of the brain and spinal cord. Two conventional modalities used for identifying lesions are T1-weighted (T1) and T2-weighted fluid-attenuated inversion recovery (FLAIR) imaging, which are used clinically and in research studies. Magnetization transfer ratio (MTR), which is available only in research settings, is an advanced MRI modality that has been used extensively for measuring disease-related demyelination both in white matter lesions and across normal-appearing white matter. Acquiring MTR is not standard in clinical practice due to the increased scan time and cost. Hence, prediction of MTR based on the T1 and FLAIR modalities could greatly improve the availability of these promising measures for patient management. We propose a spatio-temporal regression model for image responses and image predictors that are acquired longitudinally, with images co-registered within each subject but not across subjects. The model is additive, with the response at a voxel depending on the available covariates not only through the current voxel but also through the imaging information from voxels within a neighboring spatial region, as well as their temporal gradients. We propose a dynamic Bayesian estimation procedure that updates the parameters of the subject-specific regression model as data accumulate. To bypass the computational challenges associated with a Bayesian approach for high-dimensional imaging data, we propose an approximate Bayesian inference technique. We assess model fitting and prediction performance using longitudinally acquired MRI images from 46 MS patients.
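The additive design just described, in which the response at a voxel depends on covariates in a spatial neighborhood plus their temporal gradients, can be illustrated in one dimension. This is a hypothetical stand-in that uses ordinary least squares in place of the paper's dynamic approximate Bayesian procedure; all names are illustrative.

```python
import numpy as np

def neighborhood_features(x_t, x_prev, idx, radius=1):
    """Covariates for one voxel: the predictor image over a spatial
    neighborhood at the current visit, plus its temporal gradient
    (difference from the previous visit). A 1-D stand-in for the
    paper's 3-D neighborhoods."""
    spatial = x_t[idx - radius:idx + radius + 1]
    temporal = x_t[idx] - x_prev[idx]
    return np.concatenate([spatial, [temporal]])

def fit_image_on_image(x_t, x_prev, y, radius=1):
    """Least-squares fit of the voxel-wise additive model over interior
    voxels; a crude surrogate for the Bayesian parameter update."""
    interior = range(radius, len(x_t) - radius)
    X = np.stack([neighborhood_features(x_t, x_prev, i, radius) for i in interior])
    beta, *_ = np.linalg.lstsq(X, y[radius:len(x_t) - radius], rcond=None)
    return beta
```

With a real pipeline, `x_t` and `x_prev` would be co-registered T1/FLAIR visits and `y` the MTR image for the same subject.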
Affiliation(s)
- Arnab Hazra
- North Carolina State University, Raleigh, NC, USA
- Daniel S Reich
- National Institute of Neurological Disorders and Stroke, Bethesda, MD, USA
29
Han S, He Y, Carass A, Ying SH, Prince JL. Cerebellum Parcellation with Convolutional Neural Networks. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2019; 10949:109490K. [PMID: 32394999 PMCID: PMC7211767 DOI: 10.1117/12.2512119] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
To better understand cerebellum-related diseases and functional mapping of the cerebellum, quantitative measurements of cerebellar regions in magnetic resonance (MR) images have been studied in both clinical and neurological studies. Such studies have revealed that different spinocerebellar ataxia (SCA) subtypes have different patterns of cerebellar atrophy and that atrophy of different cerebellar regions is correlated with specific functional losses. Previous methods to automatically parcellate the cerebellum (that is, to identify its sub-regions) have been largely based on multi-atlas segmentation. Recently, deep convolutional neural network (CNN) algorithms have been shown to achieve high speed and accuracy in cerebral sub-cortical structure segmentation from MR images. In this work, two three-dimensional CNNs were used to parcellate the cerebellum into 28 regions. First, a locating network was used to predict a bounding box around the cerebellum. Second, a parcellating network was used to parcellate the cerebellum using the entire region within the bounding box. A leave-one-out cross-validation of fifteen manually delineated images was performed. Compared with a previously reported state-of-the-art algorithm, the proposed algorithm shows superior Dice coefficients. The proposed algorithm was further applied to three MR images of a healthy subject and subjects with SCA6 and SCA8, respectively. A Singularity container of this algorithm is publicly available.
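The two-stage locate-then-parcellate pipeline can be sketched with the CNNs abstracted away as callables. `bounding_box`, `locator`, and `parcellator` are hypothetical names for illustration; in the paper both stages are 3-D CNNs.

```python
import numpy as np

def bounding_box(mask, margin=2):
    """Tight bounding box around a binary cerebellum mask, padded by a
    margin; mimics the output of the locating network."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, mask.shape)
    return tuple(slice(a, b) for a, b in zip(lo, hi))

def parcellate(volume, locator, parcellator):
    """Two-stage pipeline: `locator` predicts a coarse mask on the whole
    volume; `parcellator` assigns region labels only within the crop.
    Both arguments are stand-ins for the paper's two 3-D CNNs."""
    box = bounding_box(locator(volume))
    labels = np.zeros(volume.shape, dtype=int)
    labels[box] = parcellator(volume[box])
    return labels
```

Cropping to the bounding box lets the second network spend its full resolution on the cerebellum rather than the whole head, which is the stated motivation for the two-stage design.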
Affiliation(s)
- Shuo Han
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD 20892, USA
- Yufan He
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Aaron Carass
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Sarah H. Ying
- Department of Neurology, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Jerry L Prince
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
30
Chartsias A, Joyce T, Giuffrida MV, Tsaftaris SA. Multimodal MR Synthesis via Modality-Invariant Latent Representation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:803-814. [PMID: 29053447 PMCID: PMC5904017 DOI: 10.1109/tmi.2017.2764326] [Citation(s) in RCA: 116] [Impact Index Per Article: 16.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
We propose a multi-input multi-output fully convolutional neural network model for MRI synthesis. The model is robust to missing data, as it benefits from, but does not require, additional input modalities. The model is trained end-to-end, and learns to embed all input modalities into a shared modality-invariant latent space. These latent representations are then combined into a single fused representation, which is transformed into the target output modality with a learnt decoder. We avoid the need for curriculum learning by exploiting the fact that the various input modalities are highly correlated. We also show that by incorporating information from segmentation masks the model can both decrease its error and generate data with synthetic lesions. We evaluate our model on the ISLES and BRATS data sets and demonstrate statistically significant improvements over state-of-the-art methods for single input tasks. This improvement increases further when multiple input modalities are used, demonstrating the benefits of learning a common latent space, again resulting in a statistically significant improvement over the current best method. Finally, we demonstrate our approach on non-skull-stripped brain images, producing a statistically significant improvement over the previous best method. Code is made publicly available at https://github.com/agis85/multimodal_brain_synthesis.
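The encode-fuse-decode structure described above can be sketched with linear maps standing in for the fully convolutional encoders and decoder. Everything here is a hypothetical toy: random weight matrices replace trained networks, and max fusion is one common choice for combining a variable number of latent codes.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, voxel_dim = 16, 32

# One linear "encoder" per input modality and a shared linear "decoder":
# stand-ins for the paper's fully convolutional networks.
encoders = {m: rng.standard_normal((latent_dim, voxel_dim)) * 0.1
            for m in ("T1", "T2")}
decoder = rng.standard_normal((voxel_dim, latent_dim)) * 0.1

def synthesize(inputs):
    """Embed each available modality into the shared latent space, fuse
    the codes element-wise (max), then decode to the target modality.
    Missing modalities are simply absent from `inputs`, which is what
    makes the fused representation robust to missing data."""
    latents = [encoders[m] @ x for m, x in inputs.items()]
    fused = np.max(np.stack(latents), axis=0)
    return decoder @ fused
```

The same `synthesize` call works with one or both inputs, mirroring the model's benefits-from-but-does-not-require behavior.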
Affiliation(s)
- Mario Valerio Giuffrida
- School of Engineering at The University of Edinburgh. Giuffrida and Tsaftaris are also with The Alan Turing Institute of London. Giuffrida is also with IMT Lucca
- Sotirios A. Tsaftaris
- School of Engineering at The University of Edinburgh. Giuffrida and Tsaftaris are also with The Alan Turing Institute of London. Giuffrida is also with IMT Lucca
31
Hazra A, Reich BJ, Reich DS, Shinohara RT, Staicu AM. A Spatio-Temporal Model for Longitudinal Image-on-Image Regression. STATISTICS IN BIOSCIENCES 2017. [DOI: 10.1007/s12561-017-9206-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
32
Jiang Y, Liu F, Fan M, Li X, Zhao Z, Zeng Z, Wang Y, Xu D. Deducing magnetic resonance neuroimages based on knowledge from samples. Comput Med Imaging Graph 2017; 62:1-14. [PMID: 28807363 DOI: 10.1016/j.compmedimag.2017.07.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2016] [Revised: 05/25/2017] [Accepted: 07/27/2017] [Indexed: 10/19/2022]
Abstract
PURPOSE Because individual variance always exists, using the same set of predetermined parameters for magnetic resonance imaging (MRI) may not be exactly suitable for each participant. We propose a knowledge-based method that can repair MRI data of undesired contrast as if a new scan had been acquired using imaging parameters that had been individually optimized. METHODS The method employs a strategy called analogical reasoning to deduce voxel-wise relaxation properties using morphological and biological similarity. The proposed framework involves steps of intensity normalization, tissue segmentation, relaxation-time deduction, and image deduction. RESULTS This approach has been preliminarily validated using conventional MRI data at 3T from several examples, including 5 normal and 9 clinical datasets. It can effectively improve the contrast of real MRI data by deducing imaging data under optimized imaging parameters based on the deduced relaxation properties. The statistics of the deduced images show a high correlation with real data actually collected using the same set of imaging parameters. CONCLUSION The proposed method of deducing MRI data using knowledge of relaxation times provides an alternative way of repairing MRI data of suboptimal contrast. The method is also capable of optimizing an MRI protocol for individual participants, thereby realizing personalized MR imaging.
Affiliation(s)
- Yuwei Jiang
- Shanghai Key Laboratory of Magnetic Resonance, MOE & Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, East China Normal University, Shanghai 200062, PR China; Department of Psychiatry, Columbia University & Molecular Imaging and Neuropathology Division, New York State Psychiatric Institute, New York, 10032, USA
- Feng Liu
- Department of Psychiatry, Columbia University & Molecular Imaging and Neuropathology Division, New York State Psychiatric Institute, New York, 10032, USA
- Mingxia Fan
- Shanghai Key Laboratory of Magnetic Resonance, MOE & Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, East China Normal University, Shanghai 200062, PR China
- Xuzhou Li
- Shanghai Key Laboratory of Magnetic Resonance, MOE & Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, East China Normal University, Shanghai 200062, PR China; Department of Psychiatry, Columbia University & Molecular Imaging and Neuropathology Division, New York State Psychiatric Institute, New York, 10032, USA
- Zhiyong Zhao
- Shanghai Key Laboratory of Magnetic Resonance, MOE & Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, East China Normal University, Shanghai 200062, PR China
- Zhaoling Zeng
- Shanghai University of Electric Power, Shanghai 200090, PR China
- Yi Wang
- MRI Research Institute, Radiology Department, Cornell University, New York, NY 10012, USA
- Dongrong Xu
- Department of Psychiatry, Columbia University & Molecular Imaging and Neuropathology Division, New York State Psychiatric Institute, New York, 10032, USA.
33
Alexander DC, Zikic D, Ghosh A, Tanno R, Wottschel V, Zhang J, Kaden E, Dyrby TB, Sotiropoulos SN, Zhang H, Criminisi A. Image quality transfer and applications in diffusion MRI. Neuroimage 2017; 152:283-298. [DOI: 10.1016/j.neuroimage.2017.02.089] [Citation(s) in RCA: 75] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2016] [Revised: 02/22/2017] [Accepted: 02/28/2017] [Indexed: 01/03/2023] Open
34
Roy S, Butman JA, Pham DL. Robust skull stripping using multiple MR image contrasts insensitive to pathology. Neuroimage 2017; 146:132-147. [PMID: 27864083 PMCID: PMC5321800 DOI: 10.1016/j.neuroimage.2016.11.017] [Citation(s) in RCA: 68] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2016] [Revised: 10/31/2016] [Accepted: 11/04/2016] [Indexed: 01/18/2023] Open
Abstract
Automatic skull-stripping or brain extraction of magnetic resonance (MR) images is often a fundamental step in many neuroimage processing pipelines. The accuracy of subsequent image processing relies on the accuracy of the skull-stripping. Although many automated stripping methods have been proposed in the past, it is still an active area of research, particularly in the context of brain pathology. Most stripping methods are validated on T1-w MR images of normal brains, especially because high-resolution T1-w sequences are widely acquired and ground-truth manual brain mask segmentations are publicly available for normal brains. However, different MR acquisition protocols can provide complementary information about the brain tissues, which can be exploited for better distinction between brain, cerebrospinal fluid, and unwanted tissues such as skull, dura, marrow, or fat. This is especially true in the presence of pathology, where hemorrhages or other types of lesions can have similar intensities as skull in a T1-w image. In this paper, we propose a sparse patch-based Multi-cONtrast brain STRipping method (MONSTR), where non-local patch information from one or more atlases, which contain multiple MR sequences and reference delineations of brain masks, is combined to generate a target brain mask. We compared MONSTR with four state-of-the-art, publicly available methods: BEaST, SPECTRE, ROBEX, and OptiBET. We evaluated the performance of these methods on six datasets consisting of both healthy subjects and patients with various pathologies. Three datasets (ADNI, MRBrainS, NAMIC) are publicly available, consisting of 44 healthy volunteers and 10 patients with schizophrenia. The other three, in-house datasets comprising 87 subjects in total, consisted of patients with mild to severe traumatic brain injury, brain tumors, and various movement disorders. A combination of T1-w and T2-w images was used to skull-strip these datasets. We show significant improvement in stripping over the competing methods on both healthy and pathological brains. We also show that our multi-contrast framework is robust and maintains accurate performance across different types of acquisitions and scanners, even when using normal brains as atlases to strip pathological brains, demonstrating that our algorithm is applicable even when reference segmentations of pathological brains are not available to be used as atlases.
Affiliation(s)
- Snehashis Roy
- Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation, United States.
- John A Butman
- Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation, United States; Diagnostic Radiology Department, National Institute of Health, United States
- Dzung L Pham
- Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation, United States
35
Jog A, Carass A, Roy S, Pham DL, Prince JL. Random forest regression for magnetic resonance image synthesis. Med Image Anal 2017; 35:475-488. [PMID: 27607469 PMCID: PMC5099106 DOI: 10.1016/j.media.2016.08.009] [Citation(s) in RCA: 80] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2015] [Revised: 08/24/2016] [Accepted: 08/26/2016] [Indexed: 02/02/2023]
Abstract
By choosing different pulse sequences and their parameters, magnetic resonance imaging (MRI) can generate a large variety of tissue contrasts. This very flexibility, however, can yield inconsistencies with MRI acquisitions across datasets or scanning sessions that can in turn cause inconsistent automated image analysis. Although image synthesis of MR images has been shown to be helpful in addressing this problem, an inability to synthesize both T2-weighted brain images that include the skull and FLuid Attenuated Inversion Recovery (FLAIR) images has been reported. The method described herein, called REPLICA, addresses these limitations. REPLICA is a supervised random forest image synthesis approach that learns a nonlinear regression to predict intensities of alternate tissue contrasts given specific input tissue contrasts. Experimental results include direct image comparisons between synthetic and real images, results from image analysis tasks on both synthetic and real images, and comparison against other state-of-the-art image synthesis methods. REPLICA is computationally fast and is shown to be comparable to other methods on tasks they are able to perform. Additionally, REPLICA has the capability to synthesize both T2-weighted images of the full head and FLAIR images, and perform intensity standardization between different imaging datasets.
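The core regression step, predicting target-contrast intensities from source-contrast patch features, can be sketched generically. The `LstSqRegressor` below is a minimal linear stand-in so the example runs without scikit-learn; for the nonlinear forest REPLICA actually uses, one would pass `sklearn.ensemble.RandomForestRegressor` instead. All names and the 1-D patches are illustrative.

```python
import numpy as np

def extract_patches(img, radius=1):
    """Pairs each interior voxel's (2*radius + 1)-wide patch with its
    centre intensity; a 1-D stand-in for 3-D patch features."""
    idx = range(radius, len(img) - radius)
    X = np.stack([img[i - radius:i + radius + 1] for i in idx])
    y = np.array([img[i] for i in idx])
    return X, y

class LstSqRegressor:
    """Minimal linear regressor with a scikit-learn-like fit/predict
    interface; swap in a random forest for the real nonlinear model."""
    def fit(self, X, y):
        self.beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return self

    def predict(self, X):
        return X @ self.beta

def train_synthesizer(src, tgt, regressor=None, radius=1):
    """Fit a regressor mapping source-contrast patches to target-contrast
    centre intensities, the supervised-synthesis training step."""
    X, _ = extract_patches(src, radius)
    _, y = extract_patches(tgt, radius)
    return (regressor or LstSqRegressor()).fit(X, y)
```

At synthesis time, the trained regressor is applied to every patch of a new source image to predict the unseen target contrast voxel by voxel.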
Affiliation(s)
- Amod Jog
- Dept. of Computer Science, The Johns Hopkins University, United States.
- Aaron Carass
- Dept. of Computer Science, The Johns Hopkins University, United States; Dept. of Electrical and Computer Engineering, The Johns Hopkins University, United States
- Snehashis Roy
- The Henry M. Jackson Foundation for the Advancement of Military Medicine, United States
- Dzung L Pham
- The Henry M. Jackson Foundation for the Advancement of Military Medicine, United States
- Jerry L Prince
- Dept. of Electrical and Computer Engineering, The Johns Hopkins University, United States
36
Patch Based Synthesis of Whole Head MR Images: Application to EPI Distortion Correction. ACTA ACUST UNITED AC 2016; 9968:146-156. [PMID: 28367541 DOI: 10.1007/978-3-319-46630-9_15] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/21/2023]
Abstract
Different magnetic resonance imaging pulse sequences are used to generate image contrasts based on physical properties of tissues, which provide different and often complementary information about them. Therefore, multiple image contrasts are useful for multimodal analysis of medical images. Often, medical image processing algorithms are optimized for particular image contrasts. If a desirable contrast is unavailable, contrast synthesis (or modality synthesis) methods try to "synthesize" the unavailable contrasts from the available ones. Most of the recent image synthesis methods generate synthetic brain images, while whole-head magnetic resonance (MR) images can also be useful for many applications. We propose an atlas-based patch matching algorithm to synthesize T2-w whole-head (including brain, skull, eyes, etc.) images from T1-w images for the purpose of distortion correction of diffusion-weighted MR images. The geometric distortion in diffusion MR images due to the inhomogeneous B0 magnetic field is often corrected by non-linearly registering the corresponding b = 0 image with zero diffusion gradient to an undistorted T2-w image. We show that our synthetic T2-w images can be used as a template in the absence of a real T2-w image. Our patch-based method requires multiple atlases with T1 and T2 images to be registered to a given target T1. Then, for every patch on the target, multiple similar-looking matching patches are found on the atlas T1 images, and the corresponding patches on the atlas T2 images are combined to generate a synthetic T2 of the target. We experimented on image data obtained from 44 patients with traumatic brain injury (TBI), and showed that our synthesized T2 images produce more accurate distortion correction than a state-of-the-art registration-based image synthesis method.
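The patch-matching step described above can be sketched in one dimension with a single atlas. The function and parameter names are illustrative, and the real method uses 3-D patches from multiple registered atlases; this toy keeps only the find-similar-T1-patches, average-corresponding-T2-centres idea.

```python
import numpy as np

def synthesize_t2_patchmatch(target_t1, atlas_t1, atlas_t2, radius=1, k=3):
    """For each target-T1 patch, find the k most similar atlas-T1 patches
    (sum of squared differences) and average the centre intensities of
    the corresponding atlas-T2 patches. 1-D, single-atlas sketch of the
    paper's multi-atlas patch matching; boundary voxels are left zero."""
    centres = np.arange(radius, len(atlas_t1) - radius)
    atlas_patches = np.stack([atlas_t1[c - radius:c + radius + 1] for c in centres])
    out = np.zeros(len(target_t1))
    for i in range(radius, len(target_t1) - radius):
        patch = target_t1[i - radius:i + radius + 1]
        d = np.sum((atlas_patches - patch) ** 2, axis=1)
        best = centres[np.argsort(d)[:k]]
        out[i] = atlas_t2[best].mean()
    return out
```

An exhaustive scan like this is quadratic in image size; practical implementations restrict the search to a window around each voxel or use approximate nearest-neighbour structures.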
37
Huo Y, Plassard AJ, Carass A, Resnick SM, Pham DL, Prince JL, Landman BA. Consistent cortical reconstruction and multi-atlas brain segmentation. Neuroimage 2016; 138:197-210. [PMID: 27184203 DOI: 10.1016/j.neuroimage.2016.05.030] [Citation(s) in RCA: 74] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2016] [Accepted: 05/10/2016] [Indexed: 01/14/2023] Open
Abstract
Whole brain segmentation and cortical surface reconstruction are two essential techniques for investigating the human brain. Because these two tasks are typically conducted independently of each other, spatial inconsistencies can arise that hinder further integrated analyses of brain structure. FreeSurfer obtains self-consistent whole brain segmentations and cortical surfaces. It starts with subcortical segmentation, then carries out cortical surface reconstruction, and ends with cortical segmentation and labeling. However, this "segmentation to surface to parcellation" strategy has shown limitations in various cohorts, such as older populations with large ventricles. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. A modification called MaCRUISE(+) is designed to perform well when white matter lesions are present. Compared with the benchmarks CRUISE and FreeSurfer, the surface accuracy of MaCRUISE and MaCRUISE(+) is validated using two independent datasets with expertly placed cortical landmarks. A third independent dataset with expertly delineated volumetric labels is employed to compare segmentation performance. Finally, 200 MR volumetric images from an older adult sample are used to assess the robustness of MaCRUISE and FreeSurfer. The advantages of MaCRUISE are: (1) MaCRUISE constructs self-consistent voxelwise segmentations and cortical surfaces, while MaCRUISE(+) is robust to white matter pathology; (2) MaCRUISE achieves more accurate whole brain segmentations than independently conducting the multi-atlas segmentation; (3) MaCRUISE is comparable in accuracy to FreeSurfer (when FreeSurfer does not exhibit global failures) while achieving greater robustness across an older adult population. MaCRUISE has been made freely available as open source.
Affiliation(s)
- Yuankai Huo
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA.
- Aaron Carass
- Image Analysis and Communications Laboratory, Johns Hopkins University, Baltimore, MD, USA
- Susan M Resnick
- Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, MD, USA
- Dzung L Pham
- Center for Neuroscience and Regenerative Medicine, Henry Jackson Foundation, Bethesda, MD, USA
- Jerry L Prince
- Image Analysis and Communications Laboratory, Johns Hopkins University, Baltimore, MD, USA
- Bennett A Landman
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA; Computer Science, Vanderbilt University, Nashville, TN, USA; Institute of Imaging Science, Vanderbilt University, Nashville, TN, USA; Radiology and Radiological Sciences, Vanderbilt University, Nashville, TN, USA
38
Gómez PA, Molina-Romero M, Ulas C, Bounincontri G, Sperl JI, Jones DK, Menzel MI, Menze BH. Simultaneous Parameter Mapping, Modality Synthesis, and Anatomical Labeling of the Brain with MR Fingerprinting. LECTURE NOTES IN COMPUTER SCIENCE 2016. [DOI: 10.1007/978-3-319-46726-9_67] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]