1. Zhang X, He X, Guo J, Ettehadi N, Aw N, Semanek D, Posner J, Laine A, Wang Y. PTNet3D: A 3D High-Resolution Longitudinal Infant Brain MRI Synthesizer Based on Transformers. IEEE Trans Med Imaging 2022; 41:2925-2940. [PMID: 35560070] [PMCID: PMC9529847] [DOI: 10.1109/tmi.2022.3174827]
Abstract
Interest in longitudinal neurodevelopment during the first few years after birth has increased in recent years. Noninvasive magnetic resonance imaging (MRI) can provide crucial information about the development of brain structures in the early months of life. Despite the success of MRI collection and analysis in adults, it remains challenging to collect high-quality multimodal MRIs from developing infant brains because of their irregular sleep patterns, limited attention, and inability to follow instructions to stay still during scanning. In addition, analytic approaches for such data are limited. These challenges often lead to a significant reduction in usable MRI scans and pose a problem for modeling neurodevelopmental trajectories. Researchers have explored addressing this problem by synthesizing realistic MRIs to replace corrupted ones. Among synthesis methods, convolutional neural network-based (CNN-based) generative adversarial networks (GANs) have demonstrated promising performance. In this study, we introduce a novel 3D MRI synthesis framework, the pyramid transformer network (PTNet3D), which relies on attention mechanisms implemented through transformer and performer layers. We conducted extensive experiments on the high-resolution Developing Human Connectome Project (dHCP) and longitudinal Baby Connectome Project (BCP) datasets. Compared with CNN-based GANs, PTNet3D consistently shows superior synthesis accuracy and generalization on these two independent, large-scale infant brain MRI datasets. Notably, PTNet3D synthesized more realistic scans than CNN-based models when the input came from subjects of multiple ages. Potential applications of PTNet3D include synthesizing corrupted or missing images; by replacing corrupted scans with synthesized ones, we observed significant improvement in infant whole-brain segmentation.
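Below is a minimal, hedged sketch (in PyTorch) of the patch-tokenized, attention-based synthesis idea that PTNet3D builds on; it is not the authors' implementation. The real model uses a U-shaped pyramid of transformer and performer blocks, whereas this toy uses a single standard transformer encoder, and the class name, layer sizes, and patch size are illustrative assumptions.

```python
# Sketch only: a 3D patch transformer that maps one MRI contrast to another.
import torch
import torch.nn as nn


class Patch3DTransformerSynth(nn.Module):
    def __init__(self, vol_size=64, patch=8, dim=256, depth=4, heads=8):
        super().__init__()
        self.patch = patch
        n_patches = (vol_size // patch) ** 3
        patch_voxels = patch ** 3
        self.embed = nn.Linear(patch_voxels, dim)                 # tokenize each 3D patch
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))   # learned positions
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.to_voxels = nn.Linear(dim, patch_voxels)             # back to intensities

    def forward(self, x):                                         # x: (B, 1, D, H, W)
        B, _, D, H, W = x.shape
        p = self.patch
        # Split the volume into non-overlapping p^3 patches -> token sequence.
        tokens = (x.unfold(2, p, p).unfold(3, p, p).unfold(4, p, p)
                   .reshape(B, -1, p * p * p))
        h = self.encoder(self.embed(tokens) + self.pos)
        out = self.to_voxels(h)                                   # predicted target patches
        # Reassemble tokens into a volume.
        d = D // p
        out = out.reshape(B, d, d, d, p, p, p).permute(0, 1, 4, 2, 5, 3, 6)
        return out.reshape(B, 1, D, H, W)


# Example: synthesize a T2w-like volume from a T1w-like 64^3 input (random toy data).
model = Patch3DTransformerSynth()
fake_t1 = torch.randn(1, 1, 64, 64, 64)
fake_t2 = model(fake_t1)
print(fake_t2.shape)  # torch.Size([1, 1, 64, 64, 64])
```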
2. Kontopodis EE, Papadaki E, Trivizakis E, Maris TG, Simos P, Papadakis GZ, Tsatsakis A, Spandidos DA, Karantanas A, Marias K. Emerging deep learning techniques using magnetic resonance imaging data applied in multiple sclerosis and clinical isolated syndrome patients (Review). Exp Ther Med 2021; 22:1149. [PMID: 34504594] [PMCID: PMC8393268] [DOI: 10.3892/etm.2021.10583]
Abstract
Computer-aided diagnosis systems aim to assist clinicians in the early identification of abnormal signs in order to optimize the interpretation of medical images and increase diagnostic precision. Multiple sclerosis (MS) and clinically isolated syndrome (CIS) are chronic inflammatory, demyelinating diseases affecting the central nervous system. Recent advances in deep learning (DL) techniques have led to novel computational paradigms in MS and CIS imaging designed for automatic segmentation and detection of areas of interest and automatic classification of anatomic structures, as well as optimization of neuroimaging protocols. To this end, there are several publications presenting artificial intelligence-based predictive models aiming to increase diagnostic accuracy and to facilitate optimal clinical management in patients diagnosed with MS and/or CIS. The current study presents a thorough review covering DL techniques that have been applied in MS and CIS during recent years, shedding light on their current advances and limitations.
Affiliation(s)
- Eleftherios E Kontopodis
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Radiology, Medical School, University of Crete, 70013 Heraklion, Greece
- Efrosini Papadaki
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Radiology, Medical School, University of Crete, 70013 Heraklion, Greece
- Eleftherios Trivizakis
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Radiology, Medical School, University of Crete, 70013 Heraklion, Greece
- Thomas G Maris
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Radiology, Medical School, University of Crete, 70013 Heraklion, Greece
- Panagiotis Simos
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Psychiatry and Behavioral Sciences, Medical School, University of Crete, 70013 Heraklion, Greece
- Georgios Z Papadakis
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Radiology, Medical School, University of Crete, 70013 Heraklion, Greece
- Aristidis Tsatsakis
- Centre of Toxicology Science and Research, Faculty of Medicine, University of Crete, 71003 Heraklion, Greece
- Demetrios A Spandidos
- Laboratory of Clinical Virology, Medical School, University of Crete, 71003 Heraklion, Greece
- Apostolos Karantanas
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Radiology, Medical School, University of Crete, 70013 Heraklion, Greece
- Kostas Marias
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, 70013 Heraklion, Greece; Department of Electrical and Computer Engineering, Hellenic Mediterranean University, 71410 Heraklion, Greece
3. Wei W, Poirion E, Bodini B, Durrleman S, Colliot O, Stankoff B, Ayache N. Fluid-attenuated inversion recovery MRI synthesis from multisequence MRI using three-dimensional fully convolutional networks for multiple sclerosis. J Med Imaging (Bellingham) 2019; 6:014005. [PMID: 30820439] [DOI: 10.1117/1.jmi.6.1.014005]
Abstract
Multiple sclerosis (MS) is a white matter (WM) disease characterized by the formation of WM lesions, which can be visualized by magnetic resonance imaging (MRI). The fluid-attenuated inversion recovery (FLAIR) MRI pulse sequence is used clinically and in research for the detection of WM lesions. In clinical settings, however, some MRI pulse sequences may be missing because of various constraints. The use of three-dimensional fully convolutional neural networks is proposed to predict the FLAIR pulse sequence from other MRI pulse sequences. In addition, the contribution of each input pulse sequence is evaluated with a pulse sequence-specific saliency map. The approach is tested on a real MS image dataset and evaluated by comparison with other methods and by assessing the lesion contrast in the synthetic FLAIR pulse sequence. Both the qualitative and quantitative results show that this method is competitive for FLAIR synthesis.
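A hedged sketch of the two ingredients described above, a 3D fully convolutional synthesizer and a pulse sequence-specific saliency map, is given below in PyTorch. It is not the paper's network: the input channels (assumed here to be T1w, T2w, and PD), depth, and filter widths are illustrative, and the saliency map is computed as a simple input gradient.

```python
# Sketch only: tiny 3D FCN for FLAIR synthesis plus per-sequence saliency.
import torch
import torch.nn as nn


class FlairSynthFCN(nn.Module):
    def __init__(self, in_seqs=3, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_seqs, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(width, 1, kernel_size=3, padding=1),        # synthetic FLAIR
        )

    def forward(self, x):                                         # x: (B, in_seqs, D, H, W)
        return self.net(x)


def sequence_saliency(model, x):
    """Per-input-sequence saliency: |d(mean synthetic FLAIR) / d(input voxels)|."""
    x = x.clone().requires_grad_(True)
    model(x).mean().backward()
    return x.grad.abs()                                           # (B, in_seqs, D, H, W)


model = FlairSynthFCN()
inputs = torch.randn(1, 3, 32, 32, 32)      # stacked T1w / T2w / PD patch (toy data)
flair = model(inputs)
sal = sequence_saliency(model, inputs)
# Average saliency per sequence indicates which input contributes most to the synthesis.
print(flair.shape, sal.mean(dim=(0, 2, 3, 4)))
```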
Affiliation(s)
- Wen Wei
- Université Côte d'Azur, Inria, Epione Project Team, Sophia Antipolis, France; Sorbonne Université, Inserm, CNRS, Institut du cerveau et la moelle (ICM), AP-HP-Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, Paris, France; Inria, Aramis Project Team, Paris, France
- Emilie Poirion
- Sorbonne Université, Inserm, CNRS, Institut du cerveau et la moelle (ICM), AP-HP-Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, Paris, France
- Benedetta Bodini
- Sorbonne Université, Inserm, CNRS, Institut du cerveau et la moelle (ICM), AP-HP-Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, Paris, France
- Stanley Durrleman
- Sorbonne Université, Inserm, CNRS, Institut du cerveau et la moelle (ICM), AP-HP-Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, Paris, France; Inria, Aramis Project Team, Paris, France
- Olivier Colliot
- Sorbonne Université, Inserm, CNRS, Institut du cerveau et la moelle (ICM), AP-HP-Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, Paris, France; Inria, Aramis Project Team, Paris, France
- Bruno Stankoff
- Sorbonne Université, Inserm, CNRS, Institut du cerveau et la moelle (ICM), AP-HP-Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, Paris, France
- Nicholas Ayache
- Université Côte d'Azur, Inria, Epione Project Team, Sophia Antipolis, France
4. Chen M, Carass A, Jog A, Lee J, Roy S, Prince JL. Cross contrast multi-channel image registration using image synthesis for MR brain images. Med Image Anal 2017; 36:2-14. [PMID: 27816859] [PMCID: PMC5239759] [DOI: 10.1016/j.media.2016.10.005]
Abstract
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality-independent features or information-theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the choice of multi-channel deformable registration algorithm. With a single exception, all results demonstrated improvements when compared against single-channel registrations using the same algorithm with mutual information.
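The following sketch illustrates the core idea of this framework, that synthesis turns a multi-modal comparison into two mono-modal comparisons, using plain NumPy. The synthesizer itself is mocked out, and the function names and toy volumes are assumptions for illustration only.

```python
# Sketch only: two-channel similarity built from mono-modal measures plus synthesized proxies.
import numpy as np


def ssd(a, b):
    """Sum of squared differences (a dissimilarity: lower is better)."""
    return float(np.sum((a - b) ** 2))


def ncc(a, b):
    """Normalized cross-correlation (a similarity: higher is better)."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def two_channel_score(t1, t2, synth_t2_from_t1, synth_t1_from_t2, metric=ssd):
    """Channel 1 compares the real T1 with the T1 proxy synthesized from T2;
    channel 2 compares the T2 proxy synthesized from T1 with the real T2."""
    return metric(t1, synth_t1_from_t2) + metric(synth_t2_from_t1, t2)


# Toy example with random volumes standing in for the images and their synthesized proxies.
rng = np.random.default_rng(0)
t1, t2 = rng.random((32, 32, 32)), rng.random((32, 32, 32))
proxy_t2, proxy_t1 = t1 * 0.9 + 0.05, t2 * 0.9 + 0.05   # fake synthesizer output
print(two_channel_score(t1, t2, proxy_t2, proxy_t1, metric=ssd))
print(two_channel_score(t1, t2, proxy_t2, proxy_t1, metric=ncc))
```

In an actual registration loop, this score would be evaluated after warping the moving pair, so the optimizer always compares images of the same modality.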
Affiliation(s)
- Min Chen
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Aaron Carass
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Amod Jog
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Junghoon Lee
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Radiation Oncology and Molecular Radiation Sciences, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Snehashis Roy
- CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20892, USA
- Jerry L Prince
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA; Radiation Oncology and Molecular Radiation Sciences, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
5. Jog A, Carass A, Roy S, Pham DL, Prince JL. Random forest regression for magnetic resonance image synthesis. Med Image Anal 2017; 35:475-488. [PMID: 27607469] [PMCID: PMC5099106] [DOI: 10.1016/j.media.2016.08.009]
Abstract
By choosing different pulse sequences and their parameters, magnetic resonance imaging (MRI) can generate a large variety of tissue contrasts. This very flexibility, however, can yield inconsistencies in MRI acquisitions across datasets or scanning sessions that can in turn cause inconsistent automated image analysis. Although image synthesis of MR images has been shown to be helpful in addressing this problem, an inability to synthesize both T2-weighted brain images that include the skull and FLuid Attenuated Inversion Recovery (FLAIR) images has been reported. The method described herein, called REPLICA, addresses these limitations. REPLICA is a supervised random forest image synthesis approach that learns a nonlinear regression to predict intensities of alternate tissue contrasts given specific input tissue contrasts. Experimental results include direct image comparisons between synthetic and real images, results from image analysis tasks on both synthetic and real images, and comparisons against other state-of-the-art image synthesis methods. REPLICA is computationally fast and is shown to be comparable to other methods on the tasks they are able to perform. Additionally, REPLICA can synthesize both T2-weighted images of the full head and FLAIR images, and it can perform intensity standardization between different imaging datasets.
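A toy sketch of supervised patch-wise regression in the spirit of REPLICA is shown below using scikit-learn. The actual method uses richer multi-resolution context features; this example simply regresses raw 3x3x3 patches of a synthetic source volume onto a synthetic target contrast, and all data and parameter values are assumptions.

```python
# Sketch only: random forest regression from source-contrast patches to target-contrast intensities.
import numpy as np
from sklearn.ensemble import RandomForestRegressor


def extract_patches(vol, radius=1):
    """Return an (n_voxels, patch_size) feature matrix of local patches around each voxel."""
    r = radius
    pad = np.pad(vol, r, mode="edge")
    feats = []
    for dz in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                feats.append(pad[r + dz: r + dz + vol.shape[0],
                                 r + dy: r + dy + vol.shape[1],
                                 r + dx: r + dx + vol.shape[2]].ravel())
    return np.stack(feats, axis=1)


# Toy "paired training data": a source volume and a target volume related nonlinearly.
rng = np.random.default_rng(0)
src_train = rng.random((24, 24, 24))
tgt_train = np.sqrt(src_train) + 0.05 * rng.standard_normal(src_train.shape)

X = extract_patches(src_train)
y = tgt_train.ravel()
forest = RandomForestRegressor(n_estimators=30, max_depth=12, n_jobs=-1, random_state=0)
forest.fit(X, y)

# Synthesis: apply the trained regressor voxel-wise to a new source-contrast volume.
src_new = rng.random((24, 24, 24))
synth = forest.predict(extract_patches(src_new)).reshape(src_new.shape)
print(synth.shape, float(np.abs(synth - np.sqrt(src_new)).mean()))
```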
Affiliation(s)
- Amod Jog
- Department of Computer Science, The Johns Hopkins University, United States
- Aaron Carass
- Department of Computer Science, The Johns Hopkins University, United States; Department of Electrical and Computer Engineering, The Johns Hopkins University, United States
- Snehashis Roy
- The Henry M. Jackson Foundation for the Advancement of Military Medicine, United States
- Dzung L Pham
- The Henry M. Jackson Foundation for the Advancement of Military Medicine, United States
- Jerry L Prince
- Department of Electrical and Computer Engineering, The Johns Hopkins University, United States
6. Huynh T, Gao Y, Kang J, Wang L, Zhang P, Lian J, Shen D. Estimating CT Image From MRI Data Using Structured Random Forest and Auto-Context Model. IEEE Trans Med Imaging 2016; 35:174-83. [PMID: 26241970] [PMCID: PMC4703527] [DOI: 10.1109/tmi.2015.2461533]
Abstract
Computed tomography (CT) imaging is an essential tool in various clinical diagnoses and in radiotherapy treatment planning. Since CT image intensities are directly related to positron emission tomography (PET) attenuation coefficients, they are indispensable for attenuation correction (AC) of PET images. However, because of the relatively high dose of radiation exposure in CT scanning, it is advisable to limit the acquisition of CT images. In addition, in new combined PET and magnetic resonance (MR) imaging scanners, only MR images are available, and these are unfortunately not directly applicable to AC. These issues strongly motivate the development of methods for reliably estimating a CT image from the corresponding MR image of the same subject. In this paper, we propose a learning-based method to tackle this challenging problem. Specifically, we first partition a given MR image into a set of patches. Then, for each patch, we use a structured random forest to directly predict a CT patch as a structured output, with a new ensemble model used to ensure robust prediction. Image features are crafted to achieve multi-level sensitivity, and spatial information is integrated through rigid-body alignment only, avoiding error-prone inter-subject deformable registration. Moreover, we use an auto-context model to iteratively refine the prediction. Finally, we combine all of the predicted CT patches to obtain the final prediction for the given MR image. We demonstrate the efficacy of our method on two datasets: human brain and prostate images. Experimental results show that our method can accurately predict CT images in various scenarios, even for images undergoing large shape variation, and that it outperforms two state-of-the-art methods.
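The auto-context component can be illustrated with a short scikit-learn sketch: each iteration appends the previous CT estimate to the input features so the next regressor can refine the prediction. This is a simplification under stated assumptions; plain voxel-wise regression on synthetic features stands in for the paper's structured random forest, and all names and values below are illustrative.

```python
# Sketch only: auto-context iterations around a generic regressor (toy, in-sample evaluation).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_vox, n_feat = 5000, 10
mr_feats = rng.random((n_vox, n_feat))                 # per-voxel MR patch features (toy)
ct_true = mr_feats @ rng.random(n_feat) + 0.1 * rng.standard_normal(n_vox)

ct_est = np.zeros(n_vox)                               # initial context: empty CT estimate
models = []
for it in range(3):                                    # auto-context iterations
    X = np.column_stack([mr_feats, ct_est])            # append the current CT estimate
    m = RandomForestRegressor(n_estimators=30, max_depth=10, random_state=it)
    m.fit(X, ct_true)
    ct_est = m.predict(X)                              # refined context for the next round
    models.append(m)
    print(f"iter {it}: training MAE = {np.abs(ct_est - ct_true).mean():.4f}")

# At test time the same chain is applied in order, feeding each model's output
# into the next model as the context feature.
```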
Affiliation(s)
- Tri Huynh
- IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Yaozong Gao
- Department of Computer Science, and IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Jiayin Kang
- IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Li Wang
- IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Pei Zhang
- IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Jun Lian
- Department of Radiation Oncology, University of North Carolina at Chapel Hill, NC, USA
- Dinggang Shen
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 136-071, Korea
7. MR image synthesis by contrast learning on neighborhood ensembles. Med Image Anal 2015; 24:63-76. [PMID: 26072167] [DOI: 10.1016/j.media.2015.05.002]
Abstract
Automatic processing of magnetic resonance images is a vital part of neuroscience research. Yet even the best and most widely used medical image processing methods will not produce consistent results when their input images are acquired with different pulse sequences. Although intensity standardization and image synthesis methods have been introduced to address this problem, their performance remains dependent on knowledge and consistency of the pulse sequences used to acquire the images. In this paper, an image synthesis approach that first estimates the pulse sequence parameters of the subject image is presented. The estimated parameters are then used with a collection of atlas or training images to generate a new atlas image having the same contrast as the subject image. This additional image provides an ideal source from which to synthesize any other target pulse sequence image contained in the atlas. In particular, a nonlinear regression intensity mapping is trained from the new atlas image to the target atlas image and then applied to the subject image to yield the particular target pulse sequence within the atlas. Both intensity standardization and synthesis of missing tissue contrasts can be achieved within this framework. The approach was evaluated on both simulated and real data and shown to be superior to other established methods in both intensity standardization and synthesis.
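As a rough illustration of the contrast-simulation step this kind of framework relies on, the sketch below re-images toy quantitative atlas maps with the standard spin-echo signal equation S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2) under estimated acquisition parameters, then fits a simple nonlinear intensity mapping from the simulated subject contrast to the target atlas contrast. All maps, parameter values, and the polynomial regressor are assumptions for illustration; the paper's actual regression operates on richer patch-based features.

```python
# Sketch only: simulate a subject-matched atlas contrast, then learn a nonlinear intensity mapping.
import numpy as np


def spin_echo_signal(pd_map, t1_map, t2_map, tr, te):
    """Simulate a spin-echo image from quantitative maps (TR, TE, T1, T2 in ms)."""
    return pd_map * (1.0 - np.exp(-tr / t1_map)) * np.exp(-te / t2_map)


# Toy quantitative atlas maps.
rng = np.random.default_rng(0)
pd_map = rng.uniform(0.7, 1.0, size=(16, 16, 16))
t1_map = rng.uniform(600.0, 1400.0, size=(16, 16, 16))   # ms
t2_map = rng.uniform(60.0, 110.0, size=(16, 16, 16))     # ms

# Estimated subject acquisition: a T1-weighted-like contrast (short TR, short TE) ...
atlas_like_subject = spin_echo_signal(pd_map, t1_map, t2_map, tr=500.0, te=15.0)
# ... and the target contrast to be synthesized (a T2-weighted-like contrast, long TR/TE).
atlas_target = spin_echo_signal(pd_map, t1_map, t2_map, tr=4000.0, te=90.0)

# A nonlinear intensity regression from the simulated subject-contrast atlas to the
# target-contrast atlas (fit here as a cubic polynomial) would then be applied to the
# real subject image to synthesize the missing target contrast.
coeffs = np.polyfit(atlas_like_subject.ravel(), atlas_target.ravel(), deg=3)
print(coeffs)
```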