1. Zhao B, Zhou Y, Zong X. Effects of prospective motion correction on perivascular spaces at 7T MRI evaluated using motion artifact simulation. Magn Reson Med 2024; 92:1079-1094. PMID: 38651650; PMCID: PMC11209793; DOI: 10.1002/mrm.30126.
Abstract
PURPOSE The effectiveness of prospective motion correction (PMC) is often evaluated by comparing artifacts in images acquired with and without PMC (NoPMC). However, this approach is not applicable in clinical settings, where NoPMC images are unavailable. We aim to develop a simulation approach for demonstrating the ability of fat-navigator-based PMC to improve perivascular space (PVS) visibility in T2-weighted MRI. METHODS MRI datasets from two earlier studies, including T2-weighted NoPMC and PMC images, were used for motion artifact simulation and PMC evaluation. To simulate motion artifacts, k-space data at motion-perturbed positions were calculated from artifact-free images using the nonuniform Fourier transform and misplaced onto the Cartesian grid before inverse Fourier transform. The simulation's ability to reproduce motion-induced blurring, ringing, and ghosting artifacts was evaluated using sharpness at the lateral ventricle/white matter boundary, ringing artifact magnitude in the Fourier spectrum, and background noise, respectively. PVS volume fraction in white matter was employed to reflect PVS visibility. RESULTS In simulation, sharpness, PVS volume fraction, and background noise exhibited significant negative correlations with motion score. Significant correlations were found in sharpness, ringing artifact magnitude, and PVS volume fraction between simulated and real NoPMC images (p ≤ 0.006). In contrast, the corresponding correlations between simulated and real PMC images were reduced and nonsignificant (p ≥ 0.48), suggesting a reduction of motion effects with PMC. CONCLUSIONS The proposed simulation approach is an effective tool for studying the effects of motion and PMC on PVS visibility. PMC may reduce the systematic bias in PVS volume fraction caused by motion artifacts.
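The simulation principle above, that k-space samples acquired after a movement correspond to the Fourier transform of the displaced object, can be sketched in one dimension. The following pure-Python toy (a hypothetical simplification, not the authors' NUFFT-based 3D implementation) corrupts half of a 1-D k-space with the linear phase ramp of a translated object and measures the resulting ghosting:

```python
# 1-D sketch of motion-corrupted k-space: a rigid translation of the object
# during acquisition multiplies the k-space samples acquired afterwards by a
# linear phase ramp. Mixing original and "post-motion" samples and inverse
# transforming produces ghosting/ringing outside the object.
import cmath

N = 64

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# Simple 1-D "object": a box
obj = [1.0 if 24 <= n < 40 else 0.0 for n in range(N)]
kspace = dft(obj)

# Assume the subject shifts by 3 samples halfway through acquisition: samples
# with k >= N//2 (an assumed acquisition order) pick up the shifted phase.
shift = 3
corrupt = [X * cmath.exp(-2j * cmath.pi * k * shift / N) if k >= N // 2 else X
           for k, X in enumerate(kspace)]

clean = idft(kspace)   # round-trip of uncorrupted data recovers the object
moved = idft(corrupt)  # mixed data shows motion artifacts

# Signal appearing well outside both object positions reflects ghosting
ghost = max(abs(moved[n]) for n in range(N) if not (20 <= n < 44))
print(round(ghost, 3))
```

The same phase-ramp identity underlies the full 3D simulation; rotations additionally require resampling k-space off the Cartesian grid, which is where the nonuniform Fourier transform enters.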
Affiliation(s)
- Bingbing Zhao
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai 201210, China
- Yichen Zhou
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai 201210, China
- Xiaopeng Zong
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai 201210, China
2. Belton N, Hagos MT, Lawlor A, Curran KM. Towards a unified approach for unsupervised brain MRI motion artefact detection with few-shot anomaly detection. Comput Med Imaging Graph 2024; 115:102391. PMID: 38718561; DOI: 10.1016/j.compmedimag.2024.102391.
Abstract
Automated Motion Artefact Detection (MAD) in Magnetic Resonance Imaging (MRI) aims to automatically flag motion artefacts in order to prevent the need for a repeat scan. In this paper, we identify and tackle three current challenges in automated MAD: (1) reliance on fully supervised training, meaning specific examples of Motion Artefacts (MA) are required; (2) inconsistent use of benchmark datasets across different works, and the use of private datasets for testing and training newly proposed MAD techniques; and (3) a lack of sufficiently large datasets for MRI MAD. To address these challenges, we demonstrate how MAs can be identified by formulating the problem as an unsupervised Anomaly Detection (AD) task. We compare the performance of three state-of-the-art AD algorithms, DeepSVDD, Interpolated Gaussian Descriptor, and FewSOME, on two open-source brain MRI datasets on the tasks of MAD and MA severity classification, with FewSOME achieving a MAD AUC above 90% on both datasets and a Spearman rank correlation coefficient of 0.8 for MA severity classification. These models are trained in the few-shot setting, meaning large brain MRI datasets are not required to build robust MAD algorithms. This work also sets a standard protocol for testing MAD algorithms on open-source benchmark datasets. In addition, we demonstrate how our proposed 'anomaly-aware' scoring function improves FewSOME's MAD performance in the setting where one or two shots of the anomalous class are available for training. Code is available at https://github.com/niamhbelton/Unsupervised-Brain-MRI-Motion-Artefact-Detection/.
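The Spearman rank correlation coefficient used above to evaluate MA severity classification can be computed from first principles; a pure-Python sketch with hypothetical severity labels and anomaly scores (not the paper's data):

```python
# Spearman rank correlation: Pearson correlation computed on the ranks of
# the two variables, with tied values given their average rank.
def rankdata(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank for the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

severity = [0, 1, 2, 2, 3]                   # hypothetical severity labels
anomaly_score = [0.1, 0.4, 0.55, 0.5, 0.9]   # hypothetical model outputs
print(round(spearman(severity, anomaly_score), 2))  # → 0.97
```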
Affiliation(s)
- Niamh Belton
- Science Foundation Ireland Centre for Research Training in Machine Learning, Ireland; School of Medicine, University College Dublin, Ireland
- Misgina Tsighe Hagos
- Science Foundation Ireland Centre for Research Training in Machine Learning, Ireland; School of Computer Science, University College Dublin, Ireland
- Aonghus Lawlor
- School of Computer Science, University College Dublin, Ireland; Insight Centre for Data Analytics, University College Dublin, Dublin, Ireland
- Kathleen M Curran
- Science Foundation Ireland Centre for Research Training in Machine Learning, Ireland; School of Medicine, University College Dublin, Ireland
3. Silic M, Tam F, Graham SJ. Test platform for developing new optical position tracking technology towards improved head motion correction in magnetic resonance imaging. Sensors (Basel) 2024; 24:3737. PMID: 38931521; PMCID: PMC11207598; DOI: 10.3390/s24123737.
Abstract
Optical tracking of head pose via fiducial markers has been proven to enable effective correction of motion artifacts in the brain during magnetic resonance imaging but remains difficult to implement in the clinic due to lengthy calibration and setup times. Advances in deep learning for markerless head pose estimation have yet to be applied to this problem because of the sub-millimetre spatial resolution required for motion correction. In the present work, two optical tracking systems are described for the development and training of a neural network: one marker-based system (a testing platform for measuring ground truth head pose) with high tracking fidelity to act as the training labels, and one markerless deep-learning-based system using images of the markerless head as input to the network. The markerless system has the potential to overcome issues of marker occlusion, insufficient rigid attachment of the marker, lengthy calibration times, and unequal performance across degrees of freedom (DOF), all of which hamper the adoption of marker-based solutions in the clinic. Detail is provided on the development of a custom moiré-enhanced fiducial marker for use as ground truth and on the calibration procedure for both optical tracking systems. Additionally, the development of a synthetic head pose dataset is described for the proof of concept and initial pre-training of a simple convolutional neural network. Results indicate that the ground truth system has been sufficiently calibrated and can track head pose with an error of <1 mm and <1°. Tracking data of a healthy adult participant are shown. Pre-training results show that the average root-mean-squared error across the 6 DOF is 0.13 and 0.36 (mm or degrees) on a head model included and excluded from the training dataset, respectively. Overall, this work indicates excellent feasibility of the deep-learning-based approach and will enable future work in training and testing on a real dataset in the MRI environment.
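The per-DOF root-mean-squared error reported above is straightforward to compute once predicted and ground-truth pose sequences are aligned; a toy sketch with hypothetical pose data (each pose as [tx, ty, tz, rx, ry, rz] in mm and degrees):

```python
# RMSE per degree of freedom between a predicted and a ground-truth pose
# sequence; the numbers below are invented for illustration only.
def rmse_per_dof(pred, truth):
    n = len(pred)
    return [(sum((pred[i][d] - truth[i][d]) ** 2 for i in range(n)) / n) ** 0.5
            for d in range(6)]

truth = [[0.0, 0, 0, 0.0, 0, 0],
         [1.0, 0, 0, 0.5, 0, 0],
         [2.0, 0, 1, 1.0, 0, 0]]
pred  = [[0.1, 0, 0, 0.1, 0, 0],
         [1.1, 0, 0, 0.6, 0, 0],
         [2.1, 0, 1, 1.1, 0, 0]]
errs = rmse_per_dof(pred, truth)
print([round(e, 2) for e in errs])  # → [0.1, 0.0, 0.0, 0.1, 0.0, 0.0]
```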
Affiliation(s)
- Marina Silic
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5G 1L7, Canada
- Fred Tam
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Simon J. Graham
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5G 1L7, Canada
4. Safari M, Yang X, Chang CW, Qiu RLJ, Fatemi A, Archambault L. Unsupervised MRI motion artifact disentanglement: introducing MAUDGAN. Phys Med Biol 2024; 69:115057. PMID: 38714192; DOI: 10.1088/1361-6560/ad4845.
Abstract
Objective. This study developed an unsupervised motion artifact reduction method for magnetic resonance imaging (MRI) of patients with brain tumors. The proposed novel design uses multi-parametric, multicenter contrast-enhanced T1-weighted (ceT1W) and T2-FLAIR MRI images. Approach. The proposed framework included two generators, two discriminators, and two feature extractor networks. Three-fold cross-validation was used to train and fine-tune the hyperparameters of the proposed model using 230 brain MRI images with tumors; the model was then tested on 148 patients' in vivo datasets. An ablation study was performed to evaluate the model's components. Our model was compared with Pix2pix and CycleGAN. Six evaluation metrics were reported: normalized mean squared error (NMSE), structural similarity index (SSIM), multi-scale SSIM (MS-SSIM), peak signal-to-noise ratio (PSNR), visual information fidelity (VIF), and multi-scale gradient magnitude similarity deviation (MS-GMSD). Artifact reduction and the consistency of tumor regions, image contrast, and sharpness were evaluated by three evaluators using Likert scales and compared with ANOVA and Tukey's HSD tests. Main results. On average, our method outperformed the comparative models in removing heavy motion artifacts, with the lowest NMSE (18.34±5.07%) and MS-GMSD (0.07 ± 0.03) at the heavy motion artifact level. Additionally, our method created motion-free images with the highest SSIM (0.93 ± 0.04), PSNR (30.63 ± 4.96), and VIF (0.45 ± 0.05) values, along with comparable MS-SSIM (0.96 ± 0.31). Similarly, our method outperformed the comparative models in removing in vivo motion artifacts at different distortion levels, except for MS-SSIM and VIF, which were comparable with CycleGAN. Moreover, our method performed consistently across artifact levels. For the heavy level of motion artifacts, Likert scores were 2.82 ± 0.52, 1.88 ± 0.71, and 1.02 ± 0.14 (p ≪ 0.0001) for our method, CycleGAN, and Pix2pix, respectively. Similar trends were found for the other motion artifact levels. Significance. Our proposed unsupervised method was demonstrated to reduce motion artifacts from ceT1W brain images under a multi-parametric framework.
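Two of the reported metrics, NMSE and PSNR, can be sketched in a few lines (illustrative pure-Python on flattened intensity arrays; the study also used SSIM variants, VIF, and MS-GMSD, which need windowed 2-D computation):

```python
# NMSE: squared error normalized by the reference signal energy.
# PSNR: peak signal power relative to the mean squared error, in dB.
import math

def nmse(ref, test):
    num = sum((r - t) ** 2 for r, t in zip(ref, test))
    den = sum(r ** 2 for r in ref)
    return num / den

def psnr(ref, test, max_val=1.0):
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return 10 * math.log10(max_val ** 2 / mse)

# Tiny hypothetical "images" flattened to 1-D, intensities in [0, 1]
ref = [0.0, 0.5, 1.0, 0.5]
test = [0.1, 0.5, 0.9, 0.5]
print(round(nmse(ref, test) * 100, 2), "%")  # NMSE as a percentage
print(round(psnr(ref, test), 1), "dB")
```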
Affiliation(s)
- Mojtaba Safari
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Québec, Québec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Québec, Québec, Canada
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Ali Fatemi
- Department of Physics, Jackson State University, Jackson, MS, United States of America
- Merit Health Central, Department of Radiation Oncology, Gamma Knife Center, Jackson, MS, United States of America
- Louis Archambault
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Québec, Québec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Québec, Québec, Canada
5. Hewlett M, Petrov I, Johnson PM, Drangova M. Deep-learning-based motion correction using multichannel MRI data: a study using simulated artifacts in the fastMRI dataset. NMR Biomed 2024:e5179. PMID: 38808752; DOI: 10.1002/nbm.5179.
Abstract
Deep learning presents a generalizable solution for motion correction requiring no pulse sequence modifications or additional hardware, but previous networks have all been applied to coil-combined data. Multichannel MRI data provide a degree of spatial encoding that may be useful for motion correction. We hypothesize that incorporating deep learning for motion correction prior to coil combination will improve results. A conditional generative adversarial network was trained using simulated rigid motion artifacts in brain images acquired at multiple sites with multiple contrasts (not limited to healthy subjects). We compared the performance of deep-learning-based motion correction on individual channel images (single-channel model) with that performed after coil combination (channel-combined model). We also investigated simultaneous motion correction of all channel data from an image volume (multichannel model). The single-channel model significantly (p < 0.0001) improved mean absolute error, with an average 50.9% improvement compared with the uncorrected images. This was significantly (p < 0.0001) better than the 36.3% improvement achieved by the channel-combined model (conventional approach). The multichannel model provided no significant improvement in quantitative measures of image quality compared with the uncorrected images. Results were independent of the presence of pathology, and generalizable to a new center unseen during training. Performing motion correction on single-channel images prior to coil combination provided an improvement in performance compared with conventional deep-learning-based motion correction. Improved deep learning methods for retrospective correction of motion-affected MR images could reduce the need for repeat scans if applied in a clinical setting.
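The coil combination that the single-channel approach defers until after correction is commonly the root-sum-of-squares (RSS) of the per-coil images; a minimal sketch, with the per-channel correction network replaced by a hypothetical identity stand-in:

```python
# Root-sum-of-squares (RSS) combination of per-coil magnitude images.
# Correcting each coil image *before* this step is the single-channel route
# described above; the `correct` function here is a placeholder, not the
# paper's conditional GAN.
def rss_combine(channels):
    # channels: list of per-coil images flattened to equal-length lists
    n_pix = len(channels[0])
    return [sum(ch[p] ** 2 for ch in channels) ** 0.5 for p in range(n_pix)]

def correct(img):
    # stand-in for per-channel deep-learning motion correction (identity)
    return img

coils = [[0.3, 0.4],   # hypothetical 2-pixel images from 2 coils
         [0.4, 0.3]]
combined = rss_combine([correct(c) for c in coils])
print([round(v, 2) for v in combined])  # → [0.5, 0.5]
```

Because RSS discards per-coil phase and mixes channels nonlinearly, artifacts that are separable per coil become entangled after combination, which is one intuition for why correcting before combination can help.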
Affiliation(s)
- Miriam Hewlett
- Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
- Department of Medical Biophysics, The University of Western Ontario, London, Ontario, Canada
- Ivailo Petrov
- Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
- Patricia M Johnson
- Department of Radiology, New York University School of Medicine, New York, New York, USA
- Maria Drangova
- Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
- Department of Medical Biophysics, The University of Western Ontario, London, Ontario, Canada
6. Hossain MB, Shinde RK, Imtiaz SM, Hossain FMF, Jeon SH, Kwon KC, Kim N. Swin transformer and the Unet architecture to correct motion artifacts in magnetic resonance image reconstruction. Int J Biomed Imaging 2024; 2024:8972980. PMID: 38725808; PMCID: PMC11081754; DOI: 10.1155/2024/8972980.
Abstract
We present a deep learning-based method that corrects motion artifacts and thus accelerates data acquisition and reconstruction of magnetic resonance images. The novel model, the Motion Artifact Correction by Swin Network (MACS-Net), uses a Swin transformer layer as the fundamental block and the Unet architecture as the neural network backbone. We employ a hierarchical transformer with shifted windows to extract multiscale contextual features during encoding. A new dual upsampling technique is employed to enhance the spatial resolutions of feature maps in the Swin transformer-based decoder layer. A raw magnetic resonance imaging dataset is used for network training and testing; the data contain various motion artifacts with ground truth images of the same subjects. The results were compared with those of six state-of-the-art MRI motion correction methods for two types of motion. When motions were brief (within 5 s), the method reduced the average normalized root mean square error (NRMSE) from 45.25% to 17.51%, increased the mean structural similarity index measure (SSIM) from 79.43% to 91.72%, and increased the peak signal-to-noise ratio (PSNR) from 18.24 to 26.57 dB. Similarly, when motions were extended from 5 to 10 s, our approach decreased the average NRMSE from 60.30% to 21.04%, improved the mean SSIM from 33.86% to 90.33%, and increased the PSNR from 15.64 to 24.99 dB. The anatomical structures of the corrected images and the motion-free brain data were similar.
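The "shifted windows" idea behind the Swin transformer blocks mentioned above can be illustrated with a 1-D indexing toy (pure partitioning, no attention weights): attention is computed only inside local windows, and shifting the window grid between consecutive layers lets information cross window borders.

```python
# Toy illustration of window partitioning with a cyclic shift, the indexing
# trick used by shifted-window (Swin-style) attention. This demo only shows
# which tokens share a window; it does not compute attention itself.
def window_partition(tokens, win, shift=0):
    # cyclically roll the sequence by `shift`, then split into windows
    rolled = tokens[shift:] + tokens[:shift]
    return [rolled[i:i + win] for i in range(0, len(rolled), win)]

seq = list(range(8))
print(window_partition(seq, 4, 0))  # → [[0, 1, 2, 3], [4, 5, 6, 7]]
print(window_partition(seq, 4, 2))  # → [[2, 3, 4, 5], [6, 7, 0, 1]]
```

In the shifted layer, tokens 3 and 4, which were separated by a window border in the first layer, now attend to each other; 2-D Swin layers apply the same cyclic roll along both spatial axes.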
Affiliation(s)
- Md. Biddut Hossain
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
- Rupali Kiran Shinde
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
- Shariar Md Imtiaz
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
- F. M. Fahmid Hossain
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
- Seok-Hee Jeon
- Department of Electronics Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon 22012, Republic of Korea
- Ki-Chul Kwon
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
- Nam Kim
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
7. Olsson H, Millward JM, Starke L, Gladytz T, Klein T, Fehr J, Lai WC, Lippert C, Niendorf T, Waiczies S. Simulating rigid head motion artifacts on brain magnitude MRI data: outcome on image quality and segmentation of the cerebral cortex. PLoS One 2024; 19:e0301132. PMID: 38626138; PMCID: PMC11020361; DOI: 10.1371/journal.pone.0301132.
Abstract
Magnetic Resonance Imaging (MRI) datasets from epidemiological studies often show a lower prevalence of motion artifacts than what is encountered in clinical practice. These artifacts can be unevenly distributed between subject groups and studies, which introduces a bias that needs addressing when augmenting data for machine learning purposes. Since unreconstructed multi-channel k-space data are typically not available for population-based MRI datasets, motion simulations must be performed using signal magnitude data. There is thus a need to systematically evaluate how realistic such magnitude-based simulations are. We performed magnitude-based motion simulations on a dataset (MR-ART) from 148 subjects for which real motion-corrupted reference data were also available. The similarity of real and simulated motion was assessed using image quality metrics (IQMs), including the Coefficient of Joint Variation (CJV), Signal-to-Noise Ratio (SNR), and Contrast-to-Noise Ratio (CNR). An additional comparison was made by investigating the decrease in the Dice-Sørensen Coefficient (DSC) of automated segmentations with increasing motion severity. Segmentation of the cerebral cortex was performed with six freely available tools: FreeSurfer, BrainSuite, ANTs, SAMSEG, FastSurfer, and SynthSeg+. To better mimic real subject motion, the original motion simulation within an existing data augmentation framework (TorchIO) was modified to allow a non-random motion paradigm and a set phase encoding direction. The mean difference in CJV/SNR/CNR between the real motion-corrupted images and our modified simulations (0.004±0.054/-0.7±1.8/-0.09±0.55) was lower than that of the original simulations (0.015±0.061/0.2±2.0/-0.29±0.62). Further, the mean difference in DSC relative to the real motion-corrupted images was lower for our modified simulations (0.03±0.06) than for the original simulations (-0.15±0.09). SynthSeg+ showed the highest robustness towards all forms of motion, real and simulated. In conclusion, reasonably realistic synthetic motion artifacts can be induced at large scale when only magnitude MR images are available, to obtain unbiased datasets for the training of machine learning based models.
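The Dice-Sørensen Coefficient (DSC) used above to quantify segmentation degradation under motion reduces to a one-liner on binary masks; an illustrative sketch with toy masks:

```python
# DSC = 2 |A ∩ B| / (|A| + |B|) for binary segmentation masks A and B.
def dice(a, b):
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2 * inter / (sum(a) + sum(b))

clean_mask  = [1, 1, 1, 0, 0, 0]  # segmentation on the artifact-free image
motion_mask = [1, 1, 0, 1, 0, 0]  # segmentation on the motion-corrupted image
print(round(dice(clean_mask, motion_mask), 2))  # → 0.67
```

Tracking how this score drops as simulated motion severity increases is the comparison the study ran across the six segmentation tools.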
Affiliation(s)
- Hampus Olsson
- Max-Delbrück-Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin Ultrahigh Field Facility (B.U.F.F.), Berlin, Germany
- Jason Michael Millward
- Max-Delbrück-Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin Ultrahigh Field Facility (B.U.F.F.), Berlin, Germany
- Experimental and Clinical Research Center, A Joint Cooperation Between the Charité Medical Faculty and the Max-Delbrück-Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin, Germany
- Ludger Starke
- Max-Delbrück-Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin Ultrahigh Field Facility (B.U.F.F.), Berlin, Germany
- Thomas Gladytz
- Max-Delbrück-Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin Ultrahigh Field Facility (B.U.F.F.), Berlin, Germany
- Tobias Klein
- Max-Delbrück-Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin Ultrahigh Field Facility (B.U.F.F.), Berlin, Germany
- Jana Fehr
- Digital Health & Machine Learning Group, Hasso Plattner Institute for Digital Engineering, Potsdam, Germany
- Wei-Chang Lai
- Digital Health & Machine Learning Group, Hasso Plattner Institute for Digital Engineering, Potsdam, Germany
- Christoph Lippert
- Digital Health & Machine Learning Group, Hasso Plattner Institute for Digital Engineering, Potsdam, Germany
- Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, United States of America
- Thoralf Niendorf
- Max-Delbrück-Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin Ultrahigh Field Facility (B.U.F.F.), Berlin, Germany
- Experimental and Clinical Research Center, A Joint Cooperation Between the Charité Medical Faculty and the Max-Delbrück-Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin, Germany
- Sonia Waiczies
- Max-Delbrück-Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin Ultrahigh Field Facility (B.U.F.F.), Berlin, Germany
- Experimental and Clinical Research Center, A Joint Cooperation Between the Charité Medical Faculty and the Max-Delbrück-Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin, Germany
8. Safari M, Yang X, Fatemi A, Archambault L. MRI motion artifact reduction using a conditional diffusion probabilistic model (MAR-CDPM). Med Phys 2024; 51:2598-2610. PMID: 38009583; DOI: 10.1002/mp.16844.
Abstract
BACKGROUND High-resolution magnetic resonance imaging (MRI) with excellent soft-tissue contrast is a valuable tool for diagnosis and prognosis. However, MRI sequences with long acquisition times are susceptible to motion artifacts, which can adversely affect the accuracy of post-processing algorithms. PURPOSE This study proposes a novel retrospective motion correction method named "motion artifact reduction using a conditional diffusion probabilistic model" (MAR-CDPM). The MAR-CDPM aimed to remove motion artifacts from a multicenter three-dimensional contrast-enhanced T1 magnetization-prepared rapid acquisition gradient echo (3D ceT1 MPRAGE) brain dataset with different brain tumor types. MATERIALS AND METHODS This study employed two publicly accessible MRI datasets: one containing 3D ceT1 MPRAGE and 2D T2-fluid attenuated inversion recovery (FLAIR) images from 230 patients with diverse brain tumors, and the other comprising 3D T1-weighted (T1W) MRI images of 148 healthy volunteers, which included real motion artifacts. The former was used to train and evaluate the model on in silico data, and the latter was used to evaluate the model's ability to remove real motion artifacts. Motion was simulated in the k-space domain to generate an in silico dataset with minor, moderate, and heavy distortion levels. The diffusion process of the MAR-CDPM was then implemented in k-space to convert structured data into Gaussian noise by gradually increasing the motion artifact level. A conditional network with a Unet backbone was trained to reverse the diffusion process, converting the distorted images back to structured data. The MAR-CDPM was trained in two scenarios: one conditioned on the time step t of the diffusion process, and the other conditioned on both t and T2-FLAIR images. The MAR-CDPM was quantitatively and qualitatively compared with supervised Unet, Unet conditioned on T2-FLAIR, CycleGAN, Pix2pix, and Pix2pix conditioned on T2-FLAIR models. To quantify the spatial distortions and the level of remaining motion artifacts after applying the models, quantitative metrics were reported, including normalized mean squared error (NMSE), structural similarity index (SSIM), multiscale structural similarity index (MS-SSIM), peak signal-to-noise ratio (PSNR), visual information fidelity (VIF), and multiscale gradient magnitude similarity deviation (MS-GMSD). Tukey's Honestly Significant Difference multiple comparison test was employed to quantify the difference between the models, where p < 0.05 was considered statistically significant. RESULTS Qualitatively, MAR-CDPM outperformed these methods in preserving soft-tissue contrast and different brain regions. It also successfully preserved tumor boundaries under heavy motion artifacts, like the supervised method. Our MAR-CDPM recovered motion-free in silico images with the highest PSNR and VIF for all distortion levels, where the differences were statistically significant (p < 0.05). In addition, our method conditioned on t and T2-FLAIR outperformed (p < 0.05) the other methods in removing motion artifacts from the in silico dataset in terms of NMSE, MS-SSIM, SSIM, and MS-GMSD. Moreover, our method conditioned on t alone outperformed the generative models (p < 0.05) and performed comparably to the supervised model (p > 0.05) in removing real motion artifacts. CONCLUSIONS The MAR-CDPM could successfully remove motion artifacts from 3D ceT1 MPRAGE images. It is particularly beneficial for elderly patients who may experience involuntary movements during high-resolution MRI with long acquisition times.
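The forward (noising) half of a diffusion model like MAR-CDPM can be sketched generically: data are progressively mixed with Gaussian noise according to a variance schedule, and the network learns the reverse direction. The sketch below is a standard DDPM-style formulation with an assumed linear schedule, not the paper's k-space-domain implementation:

```python
# Forward diffusion q(x_t | x_0): x_t = sqrt(ābar_t)·x_0 + sqrt(1-ābar_t)·ε,
# where ābar_t is the cumulative product of (1 - β_t). As t grows, the
# signal fraction sqrt(ābar_t) shrinks toward pure Gaussian noise.
import math, random

random.seed(0)
T = 100
# Assumed linear variance schedule from 1e-4 to 0.02 (a common DDPM choice)
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alpha_bar = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bar.append(prod)

def q_sample(x0, t):
    a = alpha_bar[t]
    return [math.sqrt(a) * v + math.sqrt(1 - a) * random.gauss(0, 1)
            for v in x0]

x0 = [1.0] * 8                      # toy "structured data"
early = q_sample(x0, 5)             # nearly clean
late = q_sample(x0, T - 1)          # mostly noise
print(round(math.sqrt(alpha_bar[5]), 3), round(math.sqrt(alpha_bar[T - 1]), 3))
```

MAR-CDPM replaces the abstract noise schedule with gradually increasing motion-artifact levels in k-space, but the train-to-reverse-a-corruption structure is the same.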
Affiliation(s)
- Mojtaba Safari
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Quebec, Quebec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Quebec, Quebec, Canada
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Ali Fatemi
- Department of Physics, Jackson State University, Jackson, Mississippi, USA
- Merit Health Central, Department of Radiation Oncology, Gamma Knife Center, Jackson, Mississippi, USA
- Louis Archambault
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Quebec, Quebec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Quebec, Quebec, Canada
9. Murugesan G, Yu FF, Achilleos M, DeBevits J, Nalawade S, Ganesh C, Wagner B, Madhuranthakam AJ, Maldjian JA. Synthesizing contrast-enhanced MR images from noncontrast MR images using deep learning. AJNR Am J Neuroradiol 2024; 45:312-319. PMID: 38453408; DOI: 10.3174/ajnr.a8107.
Abstract
BACKGROUND AND PURPOSE Recent developments in deep learning methods offer a potential solution to the need for alternative imaging methods due to concerns about the toxicity of gadolinium-based contrast agents. The purpose of the study was to synthesize virtual gadolinium contrast-enhanced T1-weighted MR images from noncontrast multiparametric MR images in patients with primary brain tumors by using deep learning. MATERIALS AND METHODS We trained and validated a deep learning network by using MR images from 335 subjects in the Brain Tumor Segmentation Challenge 2019 training data set. A held-out set of 125 subjects from the Brain Tumor Segmentation Challenge 2019 validation data set was used to test the generalization of the model. A residual inception DenseNet network, called T1c-ET, was developed and trained to simultaneously synthesize virtual contrast-enhanced T1-weighted (vT1c) images and segment the enhancing portions of the tumor. Three expert neuroradiologists independently scored the synthesized vT1c images by using a 3-point Likert scale, evaluating image quality and contrast enhancement against ground truth T1c images (1 = poor, 2 = good, 3 = excellent). RESULTS The synthesized vT1c images achieved structural similarity index, peak signal-to-noise ratio, and normalized mean square error scores of 0.91, 64.35, and 0.03, respectively. There was moderate interobserver agreement among the 3 raters regarding the algorithm's performance in predicting contrast enhancement, with a Fleiss kappa value of 0.61. Our model was able to accurately predict contrast enhancement in 88.8% of the cases (scores of 2 to 3 on the 3-point scale). CONCLUSIONS We developed a novel deep learning architecture to synthesize virtual postcontrast enhancement by using only conventional noncontrast brain MR images. Our results demonstrate the potential of deep learning methods to reduce the need for gadolinium contrast in the evaluation of primary brain tumors.
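The Fleiss kappa reported above for interobserver agreement is computed from a table counting how many raters assigned each item to each category; a pure-Python sketch with hypothetical rating counts (not the study's data):

```python
# Fleiss' kappa: chance-corrected agreement for a fixed number of raters.
# table[i][j] = number of raters who assigned item i to category j.
def fleiss_kappa(table):
    N = len(table)          # number of items
    r = sum(table[0])       # raters per item (assumed constant)
    k = len(table[0])       # number of categories
    p_j = [sum(row[j] for row in table) / (N * r) for j in range(k)]
    P_i = [(sum(c * c for c in row) - r) / (r * (r - 1)) for row in table]
    P_bar = sum(P_i) / N            # observed agreement
    P_e = sum(p * p for p in p_j)   # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 2 items, 3 raters, 2 categories, partial agreement
table = [[2, 1],
         [1, 2]]
print(round(fleiss_kappa(table), 2))  # → -0.33
```

Values near 1 indicate strong agreement beyond chance; the 0.61 reported above falls in the range conventionally described as moderate-to-substantial agreement.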
Affiliation(s)
- Gowtham Murugesan
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas
- Fang F Yu
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas
- Michael Achilleos
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas
- John DeBevits
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas
- Sahil Nalawade
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas
- Chandan Ganesh
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas
- Ben Wagner
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas
- Joseph A Maldjian
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas
10
Kang SH, Lee Y. Motion Artifact Reduction Using U-Net Model with Three-Dimensional Simulation-Based Datasets for Brain Magnetic Resonance Images. Bioengineering (Basel) 2024; 11:227. [PMID: 38534500] [DOI: 10.3390/bioengineering11030227]
Abstract
This study aimed to remove motion artifacts from brain magnetic resonance (MR) images using a U-Net model. In addition, a simulation method was proposed to increase the size of the dataset required to train the U-Net model while avoiding the overfitting problem. The volume data were rotated and translated in three dimensions with random intensity and frequency, and the process was repeated for each slice in the volume. Then, for every slice, a portion of the motion-free k-space data was replaced with motion-corrupted k-space data. Based on the modified k-space data, MR images with motion artifacts and residual maps were acquired, and datasets were constructed. For quantitative evaluation, the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), coefficient of correlation (CC), and universal image quality index (UQI) were measured. The U-Net models for motion artifact reduction trained with the residual map-based dataset showed the best performance across all evaluation factors. In particular, the RMSE, PSNR, CC, and UQI improved by approximately 5.35×, 1.51×, 1.12×, and 1.01×, respectively, when the U-Net model trained with the residual map-based dataset was compared with the direct images. In conclusion, our simulation-based dataset demonstrates that U-Net models can be effectively trained for motion artifact reduction.
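The slice-wise k-space replacement this abstract describes can be sketched in 2D with NumPy: transform the still image and a "moved" copy to k-space, swap a subset of phase-encode lines, and transform back. The square phantom, rigid shift, and corrupted-line fraction below are invented for illustration and stand in for the paper's 3D rotations and translations.

```python
import numpy as np

def simulate_motion(image, shift, corrupted_fraction=0.3, seed=0):
    """Replace a random subset of phase-encode lines in the motion-free
    k-space with lines taken from a shifted ('moved') copy of the image."""
    rng = np.random.default_rng(seed)
    k_still = np.fft.fftshift(np.fft.fft2(image))
    moved = np.roll(image, shift, axis=(0, 1))   # rigid translation stand-in
    k_moved = np.fft.fftshift(np.fft.fft2(moved))
    n_pe = image.shape[0]                        # rows = phase-encode lines
    n_bad = int(corrupted_fraction * n_pe)
    bad_lines = rng.choice(n_pe, size=n_bad, replace=False)
    k_mix = k_still.copy()
    k_mix[bad_lines, :] = k_moved[bad_lines, :]  # motion-corrupted lines
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_mix)))

phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0                      # simple square "brain"
corrupted = simulate_motion(phantom, shift=(3, 2))
residual = np.abs(corrupted - phantom)           # residual map, as in the paper
```

With a zero shift the "moved" k-space equals the still k-space and the output reproduces the phantom, which is a useful sanity check on the simulation.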
Affiliation(s)
- Seong-Hyeon Kang
- Department of Biomedical Engineering, Eulji University, Seongnam 13135, Republic of Korea
- Youngjin Lee
- Department of Radiological Science, Gachon University, Incheon 21936, Republic of Korea
11
Qu G, Lu B, Shi J, Wang Z, Yuan Y, Xia Y, Pan Z, Lin Y. Motion-artifact-augmented pseudo-label network for semi-supervised brain tumor segmentation. Phys Med Biol 2024; 69:055023. [PMID: 38406849] [DOI: 10.1088/1361-6560/ad2634]
Abstract
MRI image segmentation is widely used in clinical practice as a prerequisite and key step in diagnosing brain tumors. The quest for an accurate automated segmentation method for brain tumor images, aiming to ease clinical doctors' workload, has gained significant attention as a research focal point. Despite the success of fully supervised methods in brain tumor segmentation, challenges remain. Owing to the high cost of annotating medical images, the data available for training fully supervised methods are very limited. Additionally, medical images are prone to noise and motion artifacts, which negatively impact quality. In this work, we propose MAPSS, a motion-artifact-augmented pseudo-label network for semi-supervised segmentation. Our method combines motion artifact data augmentation with a pseudo-label semi-supervised training framework. We conduct several experiments under different semi-supervised settings on the publicly available BraTS2020 brain tumor segmentation dataset. The experimental results show that MAPSS achieves accurate brain tumor segmentation with only a small amount of labeled data and remains robust on motion-artifact-influenced images. We also assess the generalization performance of MAPSS using the Left Atrium dataset. Our algorithm is of great significance for assisting doctors in formulating treatment plans and improving treatment quality.
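The pseudo-label idea (fit on the labeled data, keep only confident predictions on the unlabeled data, refit on the augmented set) can be illustrated with a toy nearest-centroid classifier standing in for the segmentation network. The 2D Gaussian blobs, labeled indices, and confidence threshold are invented for this sketch.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """One centroid per class."""
    return np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])

def predict_with_margin(X, centroids):
    """Predicted class plus the distance margin used as a confidence proxy."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    order = np.sort(d, axis=1)
    margin = order[:, 1] - order[:, 0]
    return d.argmin(axis=1), margin

rng = np.random.default_rng(1)
# Two well-separated blobs; only four points carry labels
X = np.vstack([rng.normal([0, 0], 0.5, (100, 2)),
               rng.normal([3, 3], 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
labeled = np.array([0, 1, 100, 101])
unlabeled = np.setdiff1d(np.arange(200), labeled)

centroids = nearest_centroid_fit(X[labeled], y[labeled])
pseudo, margin = predict_with_margin(X[unlabeled], centroids)
confident = margin > 1.0                    # keep only confident pseudo-labels
X_aug = np.vstack([X[labeled], X[unlabeled][confident]])
y_aug = np.concatenate([y[labeled], pseudo[confident]])
centroids2 = nearest_centroid_fit(X_aug, y_aug)  # retrain on augmented set
final_pred, _ = predict_with_margin(X, centroids2)
accuracy = (final_pred == y).mean()
```

MAPSS additionally applies motion artifact augmentation before pseudo-labeling; here the loop is reduced to its semi-supervised core.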
Affiliation(s)
- Guangcan Qu
- School of the 1st Clinical Medical Sciences (School of Information and Engineering), Wenzhou Medical University, Wenzhou 325000, People's Republic of China
- Beichen Lu
- School of the 1st Clinical Medical Sciences (School of Information and Engineering), Wenzhou Medical University, Wenzhou 325000, People's Republic of China
- Jialin Shi
- School of the 1st Clinical Medical Sciences (School of Information and Engineering), Wenzhou Medical University, Wenzhou 325000, People's Republic of China
- Ziyi Wang
- School of the 1st Clinical Medical Sciences (School of Information and Engineering), Wenzhou Medical University, Wenzhou 325000, People's Republic of China
- Yaping Yuan
- School of the 1st Clinical Medical Sciences (School of Information and Engineering), Wenzhou Medical University, Wenzhou 325000, People's Republic of China
- Yifan Xia
- School of the 1st Clinical Medical Sciences (School of Information and Engineering), Wenzhou Medical University, Wenzhou 325000, People's Republic of China
- Zhifang Pan
- School of the 1st Clinical Medical Sciences (School of Information and Engineering), Wenzhou Medical University, Wenzhou 325000, People's Republic of China
- Yezhi Lin
- School of the 1st Clinical Medical Sciences (School of Information and Engineering), Wenzhou Medical University, Wenzhou 325000, People's Republic of China
12
Zhou Z, Hu P, Qi H. Stop moving: MR motion correction as an opportunity for artificial intelligence. MAGMA 2024. [PMID: 38386151] [DOI: 10.1007/s10334-023-01144-5]
Abstract
Subject motion is a long-standing problem of magnetic resonance imaging (MRI), which can seriously deteriorate the image quality. Various prospective and retrospective methods have been proposed for MRI motion correction, among which deep learning approaches have achieved state-of-the-art motion correction performance. This survey paper aims to provide a comprehensive review of deep learning-based MRI motion correction methods. Neural networks used for motion artifact reduction and motion estimation in the image domain or frequency domain are detailed. Furthermore, besides motion-corrected MRI reconstruction, how estimated motion is applied in other downstream tasks is briefly introduced, aiming to strengthen the interaction between different research areas. Finally, we identify current limitations and point out future directions of deep learning-based MRI motion correction.
Affiliation(s)
- Zijian Zhou
- School of Biomedical Engineering, ShanghaiTech University, 4th Floor, BME Building, 393 Middle Huaxia Road, Pudong District, Shanghai, 201210, China
- Shanghai Clinical Research and Trial Center, ShanghaiTech University, Shanghai, China
- Peng Hu
- School of Biomedical Engineering, ShanghaiTech University, 4th Floor, BME Building, 393 Middle Huaxia Road, Pudong District, Shanghai, 201210, China
- Shanghai Clinical Research and Trial Center, ShanghaiTech University, Shanghai, China
- Haikun Qi
- School of Biomedical Engineering, ShanghaiTech University, 4th Floor, BME Building, 393 Middle Huaxia Road, Pudong District, Shanghai, 201210, China
- Shanghai Clinical Research and Trial Center, ShanghaiTech University, Shanghai, China
13
Aggarwal K, Manso Jimeno M, Ravi KS, Gonzalez G, Geethanath S. Developing and deploying deep learning models in brain magnetic resonance imaging: A review. NMR Biomed 2023; 36:e5014. [PMID: 37539775] [DOI: 10.1002/nbm.5014]
Abstract
Magnetic resonance imaging (MRI) of the brain has benefited from deep learning (DL) to alleviate the burden on radiologists and MR technologists, and improve throughput. The easy accessibility of DL tools has resulted in a rapid increase of DL models and subsequent peer-reviewed publications. However, the rate of deployment in clinical settings is low. Therefore, this review attempts to bring together the ideas from data collection to deployment in the clinic, building on the guidelines and principles that accreditation agencies have espoused. We introduce the need for and the role of DL to deliver accessible MRI. This is followed by a brief review of DL examples in the context of neuropathologies. Based on these studies and others, we collate the prerequisites to develop and deploy DL models for brain MRI. We then delve into the guiding principles to develop good machine learning practices in the context of neuroimaging, with a focus on explainability. A checklist based on the United States Food and Drug Administration's good machine learning practices is provided as a summary of these guidelines. Finally, we review the current challenges and future opportunities in DL for brain MRI.
Affiliation(s)
- Kunal Aggarwal
- Accessible MR Laboratory, Biomedical Engineering and Imaging Institute, Department of Diagnostic, Molecular and Interventional Radiology, Mount Sinai Hospital, New York, USA
- Department of Electrical and Computer Engineering, Technical University Munich, Munich, Germany
- Marina Manso Jimeno
- Department of Biomedical Engineering, Columbia University in the City of New York, New York, New York, USA
- Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
- Keerthi Sravan Ravi
- Department of Biomedical Engineering, Columbia University in the City of New York, New York, New York, USA
- Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
- Gilberto Gonzalez
- Division of Neuroradiology, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Sairam Geethanath
- Accessible MR Laboratory, Biomedical Engineering and Imaging Institute, Department of Diagnostic, Molecular and Interventional Radiology, Mount Sinai Hospital, New York, USA
14
Jun Y, Cho J, Wang X, Gee M, Grant PE, Bilgic B, Gagoski B. SSL-QALAS: Self-Supervised Learning for rapid multiparameter estimation in quantitative MRI using 3D-QALAS. Magn Reson Med 2023; 90:2019-2032. [PMID: 37415389] [PMCID: PMC10527557] [DOI: 10.1002/mrm.29786]
Abstract
PURPOSE To develop and evaluate a method for rapid estimation of multiparametric T1, T2, proton density, and inversion efficiency maps from 3D-quantification using an interleaved Look-Locker acquisition sequence with T2 preparation pulse (3D-QALAS) measurements using self-supervised learning (SSL) without the need for an external dictionary. METHODS An SSL-based QALAS mapping method (SSL-QALAS) was developed for rapid and dictionary-free estimation of multiparametric maps from 3D-QALAS measurements. The accuracy of the reconstructed quantitative maps using dictionary matching and SSL-QALAS was evaluated by comparing the estimated T1 and T2 values with those obtained from the reference methods on an International Society for Magnetic Resonance in Medicine/National Institute of Standards and Technology phantom. The SSL-QALAS and the dictionary-matching methods were also compared in vivo, and generalizability was evaluated by comparing the scan-specific, pre-trained, and transfer learning models. RESULTS Phantom experiments showed that both the dictionary-matching and SSL-QALAS methods produced T1 and T2 estimates that had a strong linear agreement with the reference values in the International Society for Magnetic Resonance in Medicine/National Institute of Standards and Technology phantom. Further, SSL-QALAS showed similar performance with dictionary matching in reconstructing the T1, T2, proton density, and inversion efficiency maps on in vivo data. Rapid reconstruction of multiparametric maps was enabled by inferring the data using a pre-trained SSL-QALAS model within 10 s. Fast scan-specific tuning was also demonstrated by fine-tuning the pre-trained model with the target subject's data within 15 min. CONCLUSION The proposed SSL-QALAS method enabled rapid reconstruction of multiparametric maps from 3D-QALAS measurements without an external dictionary or labeled ground-truth training data.
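Dictionary matching, the baseline against which SSL-QALAS is compared, can be illustrated with a toy mono-exponential T2 model: simulate signals over a grid of candidate parameter values, then pick the dictionary entry best correlated with the measurement. The echo times, grid, scaling, and noise level below are invented for the sketch and are not the 3D-QALAS protocol.

```python
import numpy as np

TE = np.array([10., 30., 50., 80., 120.])  # toy echo times in ms

def signal(t2):
    """Mono-exponential T2 decay sampled at the echo times."""
    return np.exp(-TE / t2)

# Dictionary: simulated signals over a grid of candidate T2 values
t2_grid = np.arange(20., 201., 1.)
D = np.stack([signal(t2) for t2 in t2_grid])        # (n_entries, n_echoes)
Dn = D / np.linalg.norm(D, axis=1, keepdims=True)   # normalize for matching

def match_t2(measured):
    """Return the grid T2 whose normalized signal best matches the data."""
    m = measured / np.linalg.norm(measured)
    return t2_grid[np.argmax(Dn @ m)]

rng = np.random.default_rng(2)
true_t2 = 75.0
measured = 0.9 * signal(true_t2) + 0.005 * rng.standard_normal(TE.size)
est = match_t2(measured)
```

Because the match is done on normalized signals it is insensitive to overall scaling, which is why a separate proton-density estimate is needed in practice; SSL-QALAS replaces this exhaustive lookup with a network trained on the scan itself.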
Affiliation(s)
- Yohan Jun
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Jaejin Cho
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Xiaoqing Wang
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Michael Gee
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- P. Ellen Grant
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Fetal-Neonatal Neuroimaging & Developmental Science Center, Boston Children’s Hospital, Boston, MA, United States
- Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Harvard/MIT Health Sciences and Technology, Cambridge, MA, United States
- Borjan Gagoski
- Department of Radiology, Harvard Medical School, Boston, MA, United States
- Fetal-Neonatal Neuroimaging & Developmental Science Center, Boston Children’s Hospital, Boston, MA, United States
15
Simkó A, Ruiter S, Löfstedt T, Garpebring A, Nyholm T, Bylund M, Jonsson J. Improving MR image quality with a multi-task model, using convolutional losses. BMC Med Imaging 2023; 23:148. [PMID: 37784039] [PMCID: PMC10544274] [DOI: 10.1186/s12880-023-01109-z]
Abstract
PURPOSE During the acquisition of MRI data, patient-, sequence-, or hardware-related factors can introduce artefacts that degrade image quality. Four of the most significant tasks for improving MRI image quality have been bias field correction, super-resolution, motion correction, and noise correction. Machine learning has achieved outstanding results in improving MR image quality for these tasks individually, yet multi-task methods are rarely explored. METHODS In this study, we developed a model to simultaneously correct for all four aforementioned artefacts using multi-task learning. Two different datasets were collected, one consisting of brain scans and the other of pelvic scans, which were used to train separate models, implementing their corresponding artefact augmentations. Additionally, we explored a novel loss function that aims to reconstruct not only the individual pixel values but also the image gradients, to produce sharper, more realistic results. The difference between the evaluated methods was tested for significance using a Friedman test of equivalence followed by a Nemenyi post-hoc test. RESULTS Our proposed model generally outperformed other commonly used correction methods for individual artefacts, consistently achieving equal or superior results in at least one of the evaluation metrics. For images with multiple simultaneous artefacts, we show that the performance of a combination of models, each trained to correct an individual artefact, depends heavily on the order in which they are applied. This is not an issue for our proposed multi-task model. The model trained using our novel convolutional loss function always outperformed the model trained with a mean squared error loss when evaluated using Visual Information Fidelity, a quality metric connected to perceptual quality. CONCLUSION We trained two models for multi-task MRI artefact correction of brain and pelvic scans. We used a novel loss function that significantly improves the image quality of the outputs over using mean squared error. The approach performs well on real-world data, and it provides insight into which artefacts it detects and corrects for. Our proposed model and source code were made publicly available.
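A loss that penalizes image-gradient mismatch in addition to pixel error, in the spirit of the convolutional loss described above, can be sketched with simple forward differences standing in for the paper's convolutional filters; the images, weighting, and edge profile below are invented for illustration.

```python
import numpy as np

def grad_xy(img):
    """Forward-difference image gradients (a stand-in for an edge filter)."""
    return np.diff(img, axis=1), np.diff(img, axis=0)

def gradient_loss(pred, target, w=1.0):
    """Pixel MSE plus MSE on image gradients, rewarding sharp edges."""
    pix = np.mean((pred - target) ** 2)
    pgx, pgy = grad_xy(pred)
    tgx, tgy = grad_xy(target)
    edge = np.mean((pgx - tgx) ** 2) + np.mean((pgy - tgy) ** 2)
    return pix + w * edge

target = np.zeros((32, 32))
target[:, 16:] = 1.0                          # sharp vertical edge
sharp = target.copy()                         # perfect reconstruction
blurry = target.copy()
blurry[:, 14:18] = np.linspace(0.1, 0.9, 4)   # smeared edge, similar pixels
```

A blurry reconstruction with nearly correct pixel values still pays a large gradient penalty, which is the mechanism that pushes the trained model toward sharper outputs.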
Affiliation(s)
- Attila Simkó
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Simone Ruiter
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Tommy Löfstedt
- Department of Computing Science, Umeå University, Umeå, Sweden
- Tufve Nyholm
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Mikael Bylund
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Joakim Jonsson
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
16
Wu B, Li C, Zhang J, Lai H, Feng Q, Huang M. Unsupervised dual-domain disentangled network for removal of rigid motion artifacts in MRI. Comput Biol Med 2023; 165:107373. [PMID: 37611424] [DOI: 10.1016/j.compbiomed.2023.107373]
Abstract
Motion artifacts in magnetic resonance imaging (MRI) have always been a serious issue because they can affect subsequent diagnosis and treatment. Supervised deep learning methods have been investigated for the removal of motion artifacts; however, they require paired data that are difficult to obtain in clinical settings. Although unsupervised methods are widely proposed to fully use clinical unpaired data, they generally focus on anatomical structures generated by the spatial domain while ignoring phase error (deviations or inaccuracies in phase information that are possibly caused by rigid motion artifacts during image acquisition) provided by the frequency domain. In this study, a 2D unsupervised deep learning method named unsupervised disentangled dual-domain network (UDDN) was proposed to effectively disentangle and remove unwanted rigid motion artifacts from images. In UDDN, a dual-domain encoding module was presented to capture different types of information from the spatial and frequency domains to enrich the information. Moreover, a cross-domain attention fusion module was proposed to effectively fuse information from different domains, reduce information redundancy, and improve the performance of motion artifact removal. UDDN was validated on a publicly available dataset and a clinical dataset. Qualitative and quantitative experimental results showed that our method could effectively remove motion artifacts and reconstruct image details. Moreover, the performance of UDDN surpasses that of several state-of-the-art unsupervised methods and is comparable with that of the supervised method. Therefore, our method has great potential for clinical application in MRI, such as real-time removal of rigid motion artifacts.
Affiliation(s)
- Boya Wu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Caixia Li
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Jiawei Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Haoran Lai
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Meiyan Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
17
Leming MJ, Bron EE, Bruffaerts R, Ou Y, Iglesias JE, Gollub RL, Im H. Challenges of implementing computer-aided diagnostic models for neuroimages in a clinical setting. NPJ Digit Med 2023; 6:129. [PMID: 37443276] [DOI: 10.1038/s41746-023-00868-x]
Abstract
Advances in artificial intelligence have cultivated a strong interest in developing and validating the clinical utilities of computer-aided diagnostic models. Machine learning for diagnostic neuroimaging has often been applied to detect psychological and neurological disorders, typically on small-scale datasets or data collected in a research setting. With the collection and collation of an ever-growing number of public datasets that researchers can freely access, much work has been done in adapting machine learning models to classify these neuroimages by diseases such as Alzheimer's, ADHD, autism, bipolar disorder, and so on. These studies often come with the promise of being implemented clinically, but despite intense interest in this topic in the laboratory, limited progress has been made in clinical implementation. In this review, we analyze challenges specific to the clinical implementation of diagnostic AI models for neuroimaging data, looking at the differences between laboratory and clinical settings, the inherent limitations of diagnostic AI, and the different incentives and skill sets between research institutions, technology companies, and hospitals. These complexities need to be recognized in the translation of diagnostic AI for neuroimaging from the laboratory to the clinic.
Affiliation(s)
- Matthew J Leming
- Center for Systems Biology, Massachusetts General Hospital, Boston, MA, USA
- Massachusetts Alzheimer's Disease Research Center, Charlestown, MA, USA
- Esther E Bron
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Rose Bruffaerts
- Computational Neurology, Experimental Neurobiology Unit (ENU), Department of Biomedical Sciences, University of Antwerp, Antwerp, Belgium
- Biomedical Research Institute, Hasselt University, Diepenbeek, Belgium
- Yangming Ou
- Boston Children's Hospital, 300 Longwood Ave, Boston, MA, USA
- Juan Eugenio Iglesias
- Center for Medical Image Computing, University College London, London, UK
- Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Randy L Gollub
- Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Hyungsoon Im
- Center for Systems Biology, Massachusetts General Hospital, Boston, MA, USA
- Massachusetts Alzheimer's Disease Research Center, Charlestown, MA, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
18
Usui K, Muro I, Shibukawa S, Goto M, Ogawa K, Sakano Y, Kyogoku S, Daida H. Evaluation of motion artefact reduction depending on the artefacts' directions in head MRI using conditional generative adversarial networks. Sci Rep 2023; 13:8526. [PMID: 37237139] [DOI: 10.1038/s41598-023-35794-1]
Abstract
Motion artefacts caused by the patient's body movements affect magnetic resonance imaging (MRI) accuracy. This study aimed to compare and evaluate the accuracy of motion artefact correction using a conditional generative adversarial network (CGAN) with autoencoder and U-net models. The training dataset consisted of motion artefacts generated through simulations. Motion artefacts occur in the phase encoding direction, which is set to either the horizontal or vertical direction of the image. To create T2-weighted axial images with simulated motion artefacts, 5500 head images were used in each direction. Of these data, 90% were used for training, while the remainder were used for the evaluation of image quality. Moreover, the validation data used in model training consisted of 10% of the training dataset. The training data were divided by the direction (horizontal or vertical) in which motion artefacts appear, and the effect of combining both directions in one training dataset was verified. The resulting corrected images were evaluated using the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR), and the metrics were compared with those of the images without motion artefacts. The best improvements in SSIM and PSNR were observed when the direction of motion artefacts was consistent between the training and evaluation datasets. However, SSIM > 0.9 and PSNR > 29 dB were achieved by the model trained on both image directions, which also exhibited the highest robustness to actual patient motion in head MRI images. Moreover, the image quality of the corrected image with the CGAN was the closest to that of the original image, with improvement rates for SSIM and PSNR of approximately 26% and 7.7%, respectively. The CGAN model demonstrated high image reproducibility, and performance was best when the direction of motion artefact appearance in the training data matched that in the evaluation data.
Affiliation(s)
- Keisuke Usui
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, Japan
- Isao Muro
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, Japan
- Syuhei Shibukawa
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, Japan
- Masami Goto
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, Japan
- Koichi Ogawa
- Faculty of Science and Engineering, Hosei University, Tokyo, Japan
- Yasuaki Sakano
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, Japan
- Shinsuke Kyogoku
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, Japan
- Hiroyuki Daida
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, Japan
19
Kim H, Kang SW, Kim JH, Nagar H, Sabuncu M, Margolis DJA, Kim CK. The role of AI in prostate MRI quality and interpretation: Opportunities and challenges. Eur J Radiol 2023; 165:110887. [PMID: 37245342] [DOI: 10.1016/j.ejrad.2023.110887]
Abstract
Prostate MRI plays an important role in imaging the prostate gland and surrounding tissues, particularly in the diagnosis and management of prostate cancer. With the widespread adoption of multiparametric magnetic resonance imaging in recent years, the concerns surrounding the variability of imaging quality have garnered increased attention. Several factors contribute to the inconsistency of image quality, such as acquisition parameters, scanner differences and interobserver variabilities. While efforts have been made to standardize image acquisition and interpretation via the development of systems, such as PI-RADS and PI-QUAL, the scoring systems still depend on the subjective experience and acumen of humans. Artificial intelligence (AI) has been increasingly used in many applications, including medical imaging, due to its ability to automate tasks and lower human error rates. These advantages have the potential to standardize the tasks of image interpretation and quality control of prostate MRI. Despite its potential, thorough validation is required before the implementation of AI in clinical practice. In this article, we explore the opportunities and challenges of AI, with a focus on the interpretation and quality of prostate MRI.
Affiliation(s)
- Heejong Kim
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Shin Won Kang
- Research Institute for Future Medicine, Samsung Medical Center, Republic of Korea
- Jae-Hun Kim
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Republic of Korea
- Himanshu Nagar
- Department of Radiation Oncology, Weill Cornell Medical College, 525 E 68th St, New York, NY 10021, United States
- Mert Sabuncu
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Daniel J A Margolis
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Chan Kyo Kim
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Republic of Korea
20
Elyounssi S, Kunitoki K, Clauss JA, Laurent E, Kane K, Hughes DE, Hopkinson CE, Bazer O, Sussman RF, Doyle AE, Lee H, Tervo-Clemmens B, Eryilmaz H, Gollub RL, Barch DM, Satterthwaite TD, Dowling KF, Roffman JL. Uncovering and mitigating bias in large, automated MRI analyses of brain development. bioRxiv [Preprint] 2023:2023.02.28.530498. [PMID: 36909456] [PMCID: PMC10002762] [DOI: 10.1101/2023.02.28.530498]
Abstract
Large, population-based MRI studies of adolescents promise transformational insights into neurodevelopment and mental illness risk [1,2]. However, MRI studies of youth are especially susceptible to motion and other artifacts [3,4]. These artifacts may go undetected by the automated quality control (QC) methods that are preferred in high-throughput imaging studies [5], and can potentially introduce non-random noise into clinical association analyses. Here we demonstrate bias in structural MRI analyses of children due to inclusion of lower-quality images, as identified through rigorous visual quality control of 11,263 T1 MRI scans obtained at age 9-10 through the Adolescent Brain Cognitive Development (ABCD) Study [6]. Compared to the best-rated images (44.9% of the sample), lower-quality images were generally associated with decreased cortical thickness and increased cortical surface area measures (Cohen's d 0.14-2.84). Variable image quality led to counterintuitive patterns in analyses associating structural MRI and clinical measures, as inclusion of lower-quality scans altered apparent effect sizes in ways that increased the risk of both false positives and false negatives. Quality-related biases were partially mitigated by controlling for surface hole number, an automated index of topological complexity that differentiated lower-quality scans with good specificity at Baseline (0.81-0.93) and in 1,000 Year 2 scans (0.88-1.00). However, even among the highest-rated images, subtle topological errors occurred during image preprocessing, and their correction through manual edits significantly and reproducibly changed thickness measurements across much of the cortex (d 0.15-0.92). These findings demonstrate that inadequate QC of youth structural MRI scans can undermine the advantages of large sample sizes for detecting meaningful associations.
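The group comparisons above are reported as Cohen's d effect sizes. A minimal sketch of the pooled-standard-deviation form of that statistic; the thickness values are illustrative made-up numbers, not ABCD data:

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d between two samples, using the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Illustrative cortical-thickness values (mm) for best-rated vs. lower-quality scans
best = np.array([2.60, 2.55, 2.65, 2.58, 2.62])
lower = np.array([2.45, 2.40, 2.50, 2.42, 2.48])
d = cohens_d(best, lower)
```

A positive d here means the best-rated group has the larger mean; the ABCD analysis reports the same statistic per cortical region.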
Affiliation(s)
- Safia Elyounssi
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital
- Keiko Kunitoki
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital
- Jacqueline A. Clauss
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital
- Eline Laurent
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital
- Kristina Kane
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital
- Dylan E. Hughes
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital
- Departments of Psychiatry & Biobehavioral Sciences, University of California, Los Angeles
- Casey E. Hopkinson
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital
- Oren Bazer
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital
- Rachel Freed Sussman
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital
- Alysa E. Doyle
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School
- Center for Genomic Medicine, Massachusetts General Hospital
- Hang Lee
- Biostatistics Center, Massachusetts General Hospital and Harvard Medical School
- Hamdi Eryilmaz
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital
- Randy L. Gollub
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital
- Deanna M. Barch
- Department of Psychological and Brain Sciences, Washington University in St. Louis
- Theodore D. Satterthwaite
- Department of Psychiatry, University of Pennsylvania Perelman School of Medicine
- Penn Lifespan and Neuroimaging Center, University of Pennsylvania Perelman School of Medicine
- Penn-CHOP Lifespan Brain Institute
- Kevin F. Dowling
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School
- Department of Psychiatry, University of Pittsburgh
- Joshua L. Roffman
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital

21
Chen Z, Pawar K, Ekanayake M, Pain C, Zhong S, Egan GF. Deep Learning for Image Enhancement and Correction in Magnetic Resonance Imaging-State-of-the-Art and Challenges. J Digit Imaging 2023; 36:204-230. [PMID: 36323914 PMCID: PMC9984670 DOI: 10.1007/s10278-022-00721-9] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2022] [Revised: 09/09/2022] [Accepted: 10/17/2022] [Indexed: 11/06/2022] Open
Abstract
Magnetic resonance imaging (MRI) provides excellent soft-tissue contrast for clinical diagnosis and research, underpinning many recent breakthroughs in medicine and biology. The post-processing of reconstructed MR images is often automated by manufacturers for incorporation into MRI scanners and increasingly plays a critical role in the final image quality for clinical reporting and interpretation. For image enhancement and correction, the post-processing steps include noise reduction, image artefact correction, and image resolution improvement. With the recent success of deep learning in many research fields, there is great potential to apply deep learning to MR image enhancement, and recent publications have demonstrated promising results. Motivated by the rapidly growing literature in this area, we provide in this review a comprehensive overview of deep learning-based methods for post-processing MR images to enhance image quality and correct image artefacts. We aim to provide researchers in MRI and other fields, including computer vision and image processing, with a literature survey of deep learning approaches for MR image enhancement. We discuss the current limitations of artificial intelligence applications in MRI and highlight possible directions for future development. In the era of deep learning, we emphasize the importance of critically appraising both the explanatory information provided by, and the generalizability of, deep learning algorithms in medical imaging.
Affiliation(s)
- Zhaolin Chen
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- Department of Data Science and AI, Monash University, Melbourne, VIC, Australia
- Kamlesh Pawar
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- Mevan Ekanayake
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, VIC, Australia
- Cameron Pain
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, VIC, Australia
- Shenjun Zhong
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- National Imaging Facility, Brisbane, QLD, Australia
- Gary F Egan
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, VIC, Australia

22
Al-Masni MA, Lee S, Al-Shamiri AK, Gho SM, Choi YH, Kim DH. A knowledge interaction learning for multi-echo MRI motion artifact correction towards better enhancement of SWI. Comput Biol Med 2023; 153:106553. [PMID: 36641933 DOI: 10.1016/j.compbiomed.2023.106553] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 01/01/2023] [Accepted: 01/11/2023] [Indexed: 01/15/2023]
Abstract
Patient movement during a Magnetic Resonance Imaging (MRI) scan can cause severe degradation of image quality. In Susceptibility Weighted Imaging (SWI), several echoes are typically measured during a single repetition period; the earliest echoes show less contrast between tissues, while the later echoes are more susceptible to artifacts and signal dropout. In this paper, we propose a knowledge interaction paradigm that jointly learns feature details from multiple distorted echoes by sharing their knowledge through unified training parameters, thereby reducing the motion artifacts of all echoes simultaneously. This is accomplished through a new scheme built on a Single Encoder with Multiple Decoders (SEMD), which ensures that the generated features are not only fused but also learned together. We call the proposed method Knowledge Interaction Learning between Multi-Echo data (KIL-ME-based SEMD). The proposed KIL-ME-based SEMD enables information sharing and captures the correlations between the multiple echoes. The main purpose of this work is to correct the motion artifacts while maintaining the image quality and structural details of all motion-corrupted echoes, towards generating high-resolution susceptibility-enhanced contrast images, i.e., SWI, using a weighted average of the multi-echo motion-corrected acquisitions. We also compare various potential strategies for reducing artifacts in multi-echo data. The experimental results demonstrate the feasibility and effectiveness of the proposed method, which reduces the severity of motion artifacts and improves the overall clinical image quality of all echoes and their associated SWI maps. Significant improvement in image quality is observed on both motion-simulated test data and actual volunteer data with various motion severities. Ultimately, by enhancing overall image quality, the proposed network can improve physicians' ability to evaluate and correctly diagnose brain MR images.
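The final SWI contrast above is built from a weighted average of the motion-corrected echoes. A toy sketch of such an echo combination; the weights and images are illustrative assumptions, not the paper's learned pipeline:

```python
import numpy as np

def combine_echoes(echoes, weights=None):
    """Weighted average of motion-corrected multi-echo magnitude images.
    echoes: array of shape (n_echoes, H, W); weights: one scalar per echo."""
    echoes = np.asarray(echoes, float)
    if weights is None:
        weights = np.ones(echoes.shape[0])
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()          # normalize weights to sum to 1
    # Contract the echo axis against the weights -> (H, W) combined image
    return np.tensordot(weights, echoes, axes=1)

# Three synthetic 2x2 "echoes"; later echoes weighted more heavily here,
# purely as an illustration of emphasizing susceptibility contrast
echoes = np.stack([np.full((2, 2), v) for v in (1.0, 2.0, 3.0)])
swi_like = combine_echoes(echoes, weights=[1, 2, 3])
```

With equal weights this reduces to the plain echo mean; any TE-dependent weighting scheme slots into the same interface.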
Affiliation(s)
- Mohammed A Al-Masni
- Department of Artificial Intelligence, College of Software & Convergence Technology, Daeyang AI Center, Sejong University, Seoul, 05006, Republic of Korea
- Seul Lee
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
- Young Hun Choi
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea

23
Solomon O, Patriat R, Braun H, Palnitkar TE, Moeller S, Auerbach EJ, Ugurbil K, Sapiro G, Harel N. Motion robust magnetic resonance imaging via efficient Fourier aggregation. Med Image Anal 2023; 83:102638. [PMID: 36257133 DOI: 10.1016/j.media.2022.102638] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2022] [Revised: 09/13/2022] [Accepted: 09/15/2022] [Indexed: 02/04/2023]
Abstract
We present a method for suppressing motion artifacts in anatomical magnetic resonance acquisitions. Our proposed technique, termed MOTOR-MRI, can recover and salvage images that are otherwise heavily corrupted by motion-induced artifacts and blur, which render them unusable. Contrary to other techniques, MOTOR-MRI operates on the reconstructed images rather than on k-space data. It relies on breaking the standard acquisition protocol into several shorter ones (while maintaining the same total acquisition time) and subsequently aggregating locally sharp and consistent information among them efficiently in Fourier space, producing a sharp, motion-mitigated image. We demonstrate the efficacy of the technique on T2-weighted turbo spin echo magnetic resonance brain scans with severe motion corruption from both 3 T and 7 T scanners and show significant qualitative and quantitative improvements in image quality. MOTOR-MRI can operate independently or in conjunction with additional motion correction methods.
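A heavily simplified illustration of the idea of aggregating several short acquisitions in Fourier space, weighting each repetition's coefficients by their magnitude so the locally strongest (presumed least motion-degraded) repetition dominates. This is a toy stand-in under stated assumptions, not the published MOTOR-MRI algorithm:

```python
import numpy as np

def fourier_aggregate(reps):
    """Toy aggregation of repeated acquisitions in Fourier space: each
    spatial frequency becomes a magnitude-weighted average of the
    repetitions' Fourier coefficients."""
    specs = np.fft.fft2(np.asarray(reps, float), axes=(-2, -1))
    w = np.abs(specs)
    w = w / (w.sum(axis=0, keepdims=True) + 1e-12)   # per-frequency weights
    return np.real(np.fft.ifft2((w * specs).sum(axis=0)))

rng = np.random.default_rng(0)
clean = rng.random((8, 8))
reps = np.stack([clean, clean])        # two identical "repetitions"
recovered = fourier_aggregate(reps)    # identical inputs come back unchanged
```

In a realistic use, each repetition would carry different motion corruption and the weighting would favor whichever repetition is sharper at each frequency; the actual method also enforces local consistency, which this sketch omits.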
Affiliation(s)
- Oren Solomon
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Rémi Patriat
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Henry Braun
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Tara E Palnitkar
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Steen Moeller
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Edward J Auerbach
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Kamil Ugurbil
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America
- Guillermo Sapiro
- Department of Electrical and Computer Engineering, Duke University, NC, United States of America; Department of Biomedical Engineering, Duke University, NC, United States of America; Department of Computer Science, Duke University, NC, United States of America; Department of Mathematics, Duke University, NC, United States of America
- Noam Harel
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States of America; Department of Neurosurgery, University of Minnesota, Minneapolis, MN, United States of America

24
Deep learning reconstruction in pediatric brain MRI: comparison of image quality with conventional T2-weighted MRI. Neuroradiology 2023; 65:207-214. [PMID: 36156109 DOI: 10.1007/s00234-022-03053-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Accepted: 09/09/2022] [Indexed: 01/10/2023]
Abstract
INTRODUCTION Deep learning-based MRI reconstruction has recently been introduced to improve image quality. This study aimed to evaluate the performance of deep learning reconstruction in pediatric brain MRI. METHODS A total of 107 consecutive children who underwent 3.0 T brain MRI were included. T2-weighted brain MRI was reconstructed using three different modes: deep learning reconstruction, conventional reconstruction with an intensity filter, and the original T2 image without a filter. Two pediatric radiologists independently evaluated the following image quality parameters of the three reconstructions on a 5-point scale: overall image quality, image noisiness, sharpness of gray-white matter differentiation, truncation artifact, motion artifact, cerebrospinal fluid and vascular pulsation artifacts, and lesion conspicuity. The subjective image quality parameters were compared among the three reconstruction modes. Quantitative analysis of signal uniformity using the coefficient of variation was performed for each reconstruction. RESULTS Overall image quality, noisiness, and gray-white matter sharpness were significantly better with deep learning reconstruction than with conventional or original reconstruction (all P < 0.001). Deep learning reconstruction had significantly fewer truncation artifacts than the other two reconstructions (all P < 0.001). Motion and pulsation artifacts showed no significant differences among the three modes. For the 36 lesions in 107 patients, lesion conspicuity was better with deep learning reconstruction than with the original reconstruction. Deep learning reconstruction also showed lower signal variation than the conventional and original reconstructions. CONCLUSION Deep learning reconstruction can reduce noise and truncation artifacts and improve lesion conspicuity and overall image quality in pediatric T2-weighted brain MRI.
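Signal uniformity in this study is quantified with the coefficient of variation. A minimal sketch of that metric over a region of interest; the ROI values are illustrative:

```python
import numpy as np

def coefficient_of_variation(roi):
    """CoV = std / mean over a region of interest; lower values
    indicate more uniform signal."""
    roi = np.asarray(roi, float)
    return roi.std() / roi.mean()

# Illustrative signal intensities from a uniform vs. a noisy ROI
uniform = np.array([100.0, 101.0, 99.0, 100.0])
noisy = np.array([80.0, 120.0, 95.0, 105.0])
assert coefficient_of_variation(uniform) < coefficient_of_variation(noisy)
```

Because CoV is dimensionless, it allows the three reconstruction modes to be compared even when their absolute intensity scales differ.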
25
Wang NC, Noll DC, Srinivasan A, Gagnon-Bartsch J, Kim MM, Rao A. Simulated MRI Artifacts: Testing Machine Learning Failure Modes. BME FRONTIERS 2022; 2022:9807590. [PMID: 37850164 PMCID: PMC10521705 DOI: 10.34133/2022/9807590] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Accepted: 09/08/2022] [Indexed: 10/19/2023] Open
Abstract
Objective. Seven types of MRI artifacts, including acquisition and preprocessing errors, were simulated to probe a machine learning brain tumor segmentation model for potential failure modes. Introduction. Real-world medical deployments of machine learning algorithms are far less common than the volume of medical research papers using machine learning would suggest. Part of the gap between model performance in research and in deployment stems from a lack of hard test cases in the data used to train models. Methods. These failure modes were simulated for a pretrained brain tumor segmentation model that uses standard MRI, and were used to evaluate the model's performance under duress. The simulated artifacts comprised motion, susceptibility-induced signal loss, aliasing, field inhomogeneity, sequence mislabeling, sequence misalignment, and skull-stripping failures. Results. The artifact with the largest effect was the simplest, sequence mislabeling, though motion, field inhomogeneity, and sequence misalignment also caused significant performance decreases. The model was most susceptible to artifacts affecting the FLAIR (fluid attenuation inversion recovery) sequence. Conclusion. These simulated artifacts could be used to test other brain MRI models, and the approach could be applied across medical imaging applications.
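One of the artifact classes above, rigid motion, is commonly mimicked via the Fourier shift theorem: k-space lines acquired after a sudden in-plane translation carry the phase of the shifted object. A sketch under that simplification (one abrupt shift part-way through the phase-encode direction; the paper's own simulations may differ):

```python
import numpy as np

def simulate_translation_artifact(img, shift_px, corrupt_from):
    """Simulate a sudden translation mid-scan: k-space rows acquired
    after the movement are replaced with rows from the shifted object
    (Fourier shift theorem), then the hybrid k-space is inverted."""
    k = np.fft.fftshift(np.fft.fft2(img))
    shifted = np.roll(img, shift_px, axis=0)            # object after movement
    k_shifted = np.fft.fftshift(np.fft.fft2(shifted))
    k[corrupt_from:, :] = k_shifted[corrupt_from:, :]   # later phase-encode lines
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

rng = np.random.default_rng(1)
img = rng.random((32, 32))
corrupted = simulate_translation_artifact(img, shift_px=3, corrupt_from=20)
```

Setting `corrupt_from` past the last row leaves the image untouched, which makes the function easy to sanity-check; smaller values corrupt more of k-space and produce stronger ghosting.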
Affiliation(s)
- Nicholas C. Wang
- Department of Computational Medicine and Bioinformatics, University of Michigan, USA
- Douglas C. Noll
- Department of Biomedical Engineering, University of Michigan, USA
- Department of Radiology, University of Michigan, USA
- Ashok Srinivasan
- Department of Radiology, Division of Neuroradiology, University of Michigan, USA
- Rogel Cancer Center, University of Michigan, USA
- Frankel Cardiovascular Center, University of Michigan, USA
- Michelle M. Kim
- Department of Radiation Oncology, University of Michigan, USA
- Arvind Rao
- Department of Computational Medicine and Bioinformatics, University of Michigan, USA
- Department of Radiation Oncology, University of Michigan, USA

26
Jang J, Chung YE, Kim S, Hwang D. Fully automatic quantification of transient severe respiratory motion artifact of gadoxetate disodium-enhanced MRI during arterial phase. Med Phys 2022; 49:7247-7261. [PMID: 35754384 DOI: 10.1002/mp.15831] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 05/16/2022] [Accepted: 06/09/2022] [Indexed: 01/01/2023] Open
Abstract
PURPOSE It is important to fully automate the evaluation of gadoxetate disodium-enhanced arterial phase images, because efficient quantification of transient severe motion artifacts can be used in a variety of applications. Our study proposes a fully automatic method for evaluating motion artifacts during the arterial phase of gadoxetate disodium-enhanced MR imaging. METHODS The proposed method constructs quality-aware features that represent the motion artifact using MR image statistics and multidirectional filtered coefficients. Using these quality-aware features, the method calculates quantitative quality scores of gadoxetate disodium-enhanced images fully automatically. The performance of our proposed method and two comparison methods was assessed by correlating their scores against subjective scores from radiologists, based on a 5-point scale and a binary evaluation. The subjective scores, assigned by two radiologists, rated the severity of motion artifacts in the evaluation set from 1 (no motion artifacts) to 5 (severe motion artifacts). RESULTS Pearson's linear correlation coefficient (PLCC) and Spearman's rank-ordered correlation coefficient (SROCC) values of our proposed method against the subjective scores were 0.9036 and 0.9057, respectively, whereas the PLCC values of the two comparison methods were 0.6525 and 0.8243, and their SROCC values were 0.6070 and 0.8348. In terms of binary quantification of transient severe respiratory motion, the proposed method achieved 0.9310 sensitivity, 0.9048 specificity, and 0.9200 accuracy, whereas the other two methods achieved sensitivities of 0.7586 and 0.8996, specificities of 0.8098 and 0.8905, and accuracies of 0.9200 and 0.9048. CONCLUSIONS This study demonstrated the high performance of the proposed automatic quantification method in evaluating transient severe motion artifacts in arterial phase images.
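The evaluation above correlates automatic quality scores with radiologist ratings via PLCC and SROCC. A minimal numpy-only sketch of both statistics; the rank-based SROCC here assumes no tied scores, and the data are illustrative:

```python
import numpy as np

def plcc(x, y):
    """Pearson's linear correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.corrcoef(x, y)[0, 1]

def srocc(x, y):
    """Spearman's rank-ordered correlation: Pearson correlation of the
    ranks. (argsort-of-argsort ranking; ties would need average ranks.)"""
    rank = lambda v: np.argsort(np.argsort(v))
    return plcc(rank(np.asarray(x)), rank(np.asarray(y)))

# Illustrative: automatic quality scores vs. radiologist 5-point ratings
auto = [0.1, 0.4, 0.35, 0.8, 0.95]
radiologist = [1, 2, 3, 4, 5]
```

PLCC rewards a linear relationship between the two score scales, while SROCC only rewards consistent ordering, which is why both are typically reported for quality metrics.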
Affiliation(s)
- Jinseong Jang
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Yong Eun Chung
- Department of Radiology, Yonsei University College of Medicine, Yonsei University, Seoul, Republic of Korea
- Sungwon Kim
- Department of Radiology, Yonsei University College of Medicine, Yonsei University, Seoul, Republic of Korea
- Dosik Hwang
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Department of Radiology and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Republic of Korea
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Republic of Korea
- Center for Healthcare Robotics, Korea Institute of Science and Technology, Seoul, Republic of Korea

27
Nárai Á, Hermann P, Auer T, Kemenczky P, Szalma J, Homolya I, Somogyi E, Vakli P, Weiss B, Vidnyánszky Z. Movement-related artefacts (MR-ART) dataset of matched motion-corrupted and clean structural MRI brain scans. Sci Data 2022; 9:630. [PMID: 36253426 PMCID: PMC9576686 DOI: 10.1038/s41597-022-01694-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Accepted: 09/12/2022] [Indexed: 11/10/2022] Open
Abstract
Magnetic Resonance Imaging (MRI) provides a unique opportunity to investigate neural changes in healthy and clinical conditions. Its large inherent susceptibility to motion, however, often confounds the measurement. Approaches for assessing, correcting, or preventing motion corruption of MRI measurements are under active development, and such efforts can benefit greatly from carefully controlled datasets. We present a unique dataset of structural brain MRI images collected from 148 healthy adults that includes both motion-free and motion-affected data acquired from the same participants. This matched dataset allows direct evaluation of motion artefacts, of their impact on derived data, and of approaches to correct for them. Our dataset further stands out by containing images with different levels of motion artefacts from the same participants; it is enriched with expert scoring characterizing image quality from a clinical point of view and is complemented with standard image quality metrics obtained from MRIQC. The goal of the dataset is to raise awareness of the issue and to provide a useful resource for assessing and improving current motion correction approaches.
Affiliation(s)
- Ádám Nárai
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, 1117, Hungary
- Petra Hermann
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, 1117, Hungary
- Tibor Auer
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, 1117, Hungary
- School of Psychology, University of Surrey, Guildford, United Kingdom
- Péter Kemenczky
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, 1117, Hungary
- János Szalma
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, 1117, Hungary
- István Homolya
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, 1117, Hungary
- Eszter Somogyi
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, 1117, Hungary
- Pál Vakli
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, 1117, Hungary
- Béla Weiss
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, 1117, Hungary
- Zoltán Vidnyánszky
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, 1117, Hungary

28
Khojaste-Sarakhsi M, Haghighi SS, Ghomi SF, Marchiori E. Deep learning for Alzheimer's disease diagnosis: A survey. Artif Intell Med 2022; 130:102332. [DOI: 10.1016/j.artmed.2022.102332] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Revised: 04/29/2022] [Accepted: 05/30/2022] [Indexed: 11/28/2022]
29
Tang H, Guo L, Fu X, Qu B, Ajilore O, Wang Y, Thompson PM, Huang H, Leow AD, Zhan L. A Hierarchical Graph Learning Model for Brain Network Regression Analysis. Front Neurosci 2022; 16:963082. [PMID: 35903810 PMCID: PMC9315240 DOI: 10.3389/fnins.2022.963082] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Accepted: 06/22/2022] [Indexed: 11/29/2022] Open
Abstract
Brain networks have attracted increasing attention due to their potential to better characterize brain dynamics and abnormalities in neurological and psychiatric conditions. Recent years have witnessed enormous successes in deep learning, and many AI algorithms, especially graph learning methods, have been proposed to analyze brain networks. An important issue with existing graph learning methods is that the models are typically not easy to interpret. In this study, we propose an interpretable graph learning model for brain network regression analysis. We applied this new framework to subjects from the Human Connectome Project (HCP) to predict multiple Adult Self-Report (ASR) scores. We also use one of the ASR scores as an example to demonstrate how to identify sex differences in the regression process using our model. In comparison with other state-of-the-art methods, our results clearly demonstrate the superiority of the new model in effectiveness, fairness, and transparency.
Affiliation(s)
- Haoteng Tang
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, United States
- Lei Guo
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, United States
- Xiyao Fu
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, United States
- Benjamin Qu
- Mission San Jose High School, Fremont, CA, United States
- Olusola Ajilore
- Department of Psychiatry, University of Illinois Chicago, Chicago, IL, United States
- Yalin Wang
- Department of Computer Science and Engineering, Arizona State University, Tempe, AZ, United States
- Paul M. Thompson
- Imaging Genetics Center, University of Southern California, Los Angeles, CA, United States
- Heng Huang
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, United States
- Alex D. Leow
- Department of Psychiatry, University of Illinois Chicago, Chicago, IL, United States
- Liang Zhan
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, United States
- *Correspondence: Liang Zhan

30
Stacked U-Nets with self-assisted priors towards robust correction of rigid motion artifact in brain MRI. Neuroimage 2022; 259:119411. [PMID: 35753594 DOI: 10.1016/j.neuroimage.2022.119411] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 05/12/2022] [Accepted: 06/22/2022] [Indexed: 11/23/2022] Open
Abstract
Magnetic Resonance Imaging (MRI) is sensitive to motion caused by patient movement due to its relatively long data acquisition time. Such motion can severely degrade image quality and therefore affect diagnosis. In this paper, we develop an efficient retrospective 2D deep learning method, stacked U-Nets with self-assisted priors, to address the problem of rigid motion artifacts in 3D brain MRI. The proposed work exploits additional knowledge priors from the corrupted images themselves, without the need for additional contrast data. The network learns the missed structural details by sharing auxiliary information from the contiguous slices of the same distorted subject. We further design a refinement stacked U-Nets stage that helps preserve spatial image details and improves the pixel-to-pixel dependency. To train the network, simulation of MRI motion artifacts is required; the proposed network is optimized by minimizing a structural similarity (SSIM) loss using motion-corrupted images synthesized from 83 real motion-free subjects. We present an intensive analysis using various types of image priors: the proposed self-assisted priors and priors from other image contrasts of the same subject. The experimental analysis confirms the effectiveness and feasibility of the self-assisted priors, since they require no further data scans. The proposed motion correction network significantly improves the SSIM of motion-corrected images from 71.66% to 95.03% and reduces the mean squared error from 99.25 to 29.76, indicating high similarity to the anatomical structure of the motion-free data. The motion-corrected results on both simulated and real motion data show the potential of the proposed network to be feasible and applicable in clinical practice.
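The training objective above is an SSIM loss. A simplified, single-window form of the SSIM index as a sketch (the standard implementation averages the same formula over local Gaussian windows, so this global version is only an approximation):

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over the whole image, using the standard
    stabilizing constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(2)
img = rng.random((16, 16))
other = rng.random((16, 16))
assert np.isclose(global_ssim(img, img), 1.0)   # identical images score 1
assert global_ssim(img, other) < 1.0            # unrelated images score lower
```

As a loss, one typically minimizes 1 - SSIM so that perfect reconstruction drives the objective to zero.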
31
Singh NM, Iglesias JE, Adalsteinsson E, Dalca AV, Golland P. Joint Frequency and Image Space Learning for MRI Reconstruction and Analysis. THE JOURNAL OF MACHINE LEARNING FOR BIOMEDICAL IMAGING 2022; 2022:018. [PMID: 36349348 PMCID: PMC9639401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
We propose neural network layers that explicitly combine frequency and image feature representations and show that they can be used as a versatile building block for reconstruction from frequency space data. Our work is motivated by the challenges arising in MRI acquisition where the signal is a corrupted Fourier transform of the desired image. The proposed joint learning schemes enable both correction of artifacts native to the frequency space and manipulation of image space representations to reconstruct coherent image structures at every layer of the network. This is in contrast to most current deep learning approaches for image reconstruction that treat frequency and image space features separately and often operate exclusively in one of the two spaces. We demonstrate the advantages of joint convolutional learning for a variety of tasks, including motion correction, denoising, reconstruction from undersampled acquisitions, and combined undersampling and motion correction on simulated and real world multicoil MRI data. The joint models produce consistently high quality output images across all tasks and datasets. When integrated into a state of the art unrolled optimization network with physics-inspired data consistency constraints for undersampled reconstruction, the proposed architectures significantly improve the optimization landscape, which yields an order of magnitude reduction of training time. This result suggests that joint representations are particularly well suited for MRI signals in deep learning networks. Our code and pretrained models are publicly available at https://github.com/nalinimsingh/interlacer.
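The core idea, layers that operate in frequency and image space simultaneously, can be caricatured with fixed (untrained) pointwise weights. This toy sketch is not the Interlacer architecture itself, only an illustration of mixing the two representations in one layer:

```python
import numpy as np

def joint_layer(x, k_freq, k_img):
    """Toy 'joint' layer: one branch scales the features in frequency
    space, the other in image space, and the outputs are summed, so the
    layer sees both representations at once."""
    freq_branch = np.real(np.fft.ifft2(k_freq * np.fft.fft2(x)))
    img_branch = k_img * x
    return freq_branch + img_branch

rng = np.random.default_rng(3)
x = rng.random((8, 8))
# With an all-ones frequency filter and a zero image filter the layer
# reduces to the identity, a convenient sanity check
out = joint_layer(x, k_freq=np.ones((8, 8)), k_img=np.zeros((8, 8)))
```

In the actual networks, both branches are learned convolutions and the mixing happens at every layer, which is what distinguishes the approach from pipelines that stay exclusively in k-space or in image space.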
Affiliation(s)
- Nalini M Singh: Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA; Dept. of Health Sciences & Technology, MIT, Cambridge, MA, USA
- Juan Eugenio Iglesias: A. A. Martinos Center, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Cambridge, MA, USA; Centre for Medical Image Computing, UCL, London, UK; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Elfar Adalsteinsson: Research Laboratory of Electronics, MIT, Cambridge, MA, USA; Dept. of Electrical Engineering & Computer Science, MIT, Cambridge, MA, USA
- Adrian V Dalca: A. A. Martinos Center, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Cambridge, MA, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Polina Golland: Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA; Dept. of Electrical Engineering & Computer Science, MIT, Cambridge, MA, USA
32
Pirkl CM, Cencini M, Kurzawski JW, Waldmannstetter D, Li H, Sekuboyina A, Endt S, Peretti L, Donatelli G, Pasquariello R, Costagli M, Buonincontri G, Tosetti M, Menzel MI, Menze BH. Learning residual motion correction for fast and robust 3D multiparametric MRI. Med Image Anal 2022; 77:102387. [DOI: 10.1016/j.media.2022.102387] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/20/2021] [Revised: 11/25/2021] [Accepted: 02/01/2022] [Indexed: 11/28/2022]
33
What's New and What's Next in Diffusion MRI Preprocessing. Neuroimage 2021; 249:118830. [PMID: 34965454 PMCID: PMC9379864 DOI: 10.1016/j.neuroimage.2021.118830] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Received: 04/15/2021] [Revised: 10/26/2021] [Accepted: 12/15/2021] [Indexed: 02/07/2023]
Abstract
Diffusion MRI (dMRI) provides invaluable information for the study of tissue microstructure and brain connectivity, but suffers from a range of imaging artifacts that greatly challenge the analysis of results and their interpretability if not appropriately accounted for. This review will cover dMRI artifacts and preprocessing steps, some of which have not typically been considered in existing pipelines or reviews, or have only gained attention in recent years: brain/skull extraction, B-matrix incompatibilities w.r.t. the imaging data, signal drift, Gibbs ringing, noise distribution bias, denoising, between- and within-volume motion, eddy currents, outliers, susceptibility distortions, EPI Nyquist ghosts, gradient deviations, B1 bias fields, and spatial normalization. The focus will be on “what's new” since the notable advances prior to and brought by the Human Connectome Project (HCP), as presented in the preceding issue on “Mapping the Connectome” in 2013. In addition to the development of novel strategies for dMRI preprocessing, exciting progress has been made in the availability of open-source tools and reproducible pipelines, databases and simulation tools for the evaluation of preprocessing steps, and automated quality control frameworks, amongst others. Finally, this review will consider practical considerations and our view on “what's next” in dMRI preprocessing.
34
Wang C, Li Y, Lv J, Jin J, Hu X, Kuang X, Chen W, Wang H. Recommendation for Cardiac Magnetic Resonance Imaging-Based Phenotypic Study: Imaging Part. Phenomics 2021; 1:151-170. [PMID: 35233561 PMCID: PMC8318053 DOI: 10.1007/s43657-021-00018-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 02/13/2021] [Revised: 05/22/2021] [Accepted: 05/25/2021] [Indexed: 11/26/2022]
Abstract
Cardiac magnetic resonance (CMR) imaging provides important biomarkers for the early diagnosis of many cardiovascular diseases and has been reported to reveal phenome-wide associations of cardiac/aortic structure and function in population studies. Nevertheless, due to the complexity of operation and variations among manufacturers, magnetic field strengths, coils, sequences, scan parameters, and image analysis approaches, CMR is rarely used in large cohort studies. Existing guidelines mainly focus on the diagnosis of cardiovascular diseases and are not aimed at basic research. The purpose of this study was to propose a recommendation for CMR-based phenotype measurements in cohort studies. We classify the imaging sequences of CMR into three categories according to the importance and universality of the corresponding measurable phenotypes. The acquisition time and repeatability of the phenotypic measurements were also taken into consideration during the categorization. Unlike other guidelines, this recommendation focuses on the quantitative measurement of a large number of phenotypes from CMR.
Affiliation(s)
- Chengyan Wang: Human Phenome Institute, Fudan University, 825 Zhangheng Road, Pudong New District, Shanghai, 201203, China
- Yan Li: Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jun Lv: School of Computer and Control Engineering, Yantai University, Yantai, China
- Jianhua Jin: School of Data Science, Fudan University, Shanghai, China
- Xumei Hu: Human Phenome Institute, Fudan University, 825 Zhangheng Road, Pudong New District, Shanghai, 201203, China
- Xutong Kuang: Human Phenome Institute, Fudan University, 825 Zhangheng Road, Pudong New District, Shanghai, 201203, China
- Weibo Chen: Philips Healthcare Co., Shanghai, China
- He Wang: Human Phenome Institute, Fudan University, 825 Zhangheng Road, Pudong New District, Shanghai, 201203, China; Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, 220 Handan Road, Yangpu District, Shanghai, 200433, China; Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Shanghai, China