1
Singh R, Singh N, Kaur L. Deep learning methods for 3D magnetic resonance image denoising, bias field and motion artifact correction: a comprehensive review. Phys Med Biol 2024; 69:23TR01. [PMID: 39569887 DOI: 10.1088/1361-6560/ad94c7]
Abstract
Magnetic resonance imaging (MRI) provides detailed structural information about internal organs and soft-tissue regions for clinical diagnosis, supporting disease detection, localization, and progress monitoring. MRI scanner manufacturers incorporate various post-acquisition image-processing techniques into the scanner software to produce final images of adequate quality, with the essential features needed for accurate clinical reporting and treatment planning. Post-acquisition processing tasks for MRI quality enhancement include noise removal, motion artifact reduction, magnetic bias field correction, and eddy current effect removal. Recently, deep learning (DL) methods have shown great success in many research fields, including image and video applications, and DL-based data-driven feature-learning approaches have great potential for MR image denoising and for correcting image-quality-degrading artifacts. Recent studies have demonstrated significant improvements in image-analysis tasks using DL-based convolutional neural networks, motivating researchers to adapt DL methods to medical image analysis and quality enhancement. This paper presents a comprehensive review of state-of-the-art DL-based MRI quality enhancement and artifact removal methods that regenerate high-quality images while preserving essential anatomical and physiological information. Existing research gaps and future directions are also highlighted, along with their importance and advantages for medical imaging.
Affiliation(s)
- Ram Singh
- Department of Computer Science & Engineering, Punjabi University, Chandigarh Road, Patiala 147002, Punjab, India
- Navdeep Singh
- Department of Computer Science & Engineering, Punjabi University, Chandigarh Road, Patiala 147002, Punjab, India
- Lakhwinder Kaur
- Department of Computer Science & Engineering, Punjabi University, Chandigarh Road, Patiala 147002, Punjab, India
2
Gil N, Tabari A, Lo WC, Clifford B, Lang M, Awan K, Gaudet K, Splitthoff DN, Polak D, Cauley S, Huang SY. Quantitative evaluation of Scout Accelerated Motion Estimation and Reduction (SAMER) MPRAGE for morphometric analysis of brain tissue in patients undergoing evaluation for memory loss. Neuroimage 2024; 300:120865. [PMID: 39349147 PMCID: PMC11498920 DOI: 10.1016/j.neuroimage.2024.120865]
Abstract
BACKGROUND Three-dimensional (3D) T1-weighted MRI sequences such as the magnetization prepared rapid gradient echo (MPRAGE) sequence are important for assessing regional cortical atrophy in the clinical evaluation of dementia but have long acquisition times and are prone to motion artifact. The recently developed Scout Accelerated Motion Estimation and Reduction (SAMER) retrospective motion correction method addresses motion artifact within clinically acceptable computation times and has been validated through qualitative evaluation in inpatient and emergency settings. METHODS We evaluated the quantitative accuracy of morphometric analysis of SAMER motion-corrected compared to non-motion-corrected MPRAGE images by estimating cortical volume and thickness across neuroanatomical regions in two subject groups: (1) healthy volunteers and (2) patients undergoing evaluation for dementia. In part (1), we used a set of 108 MPRAGE reconstructed images derived from 12 healthy volunteers to systematically assess the effectiveness of SAMER in correcting varying degrees of motion corruption, ranging from mild to severe. In part (2), 29 patients who were scheduled for brain MRI with a memory loss protocol and had motion corruption on their clinical MPRAGE scans were prospectively enrolled. RESULTS In part (1), SAMER effectively corrected motion-induced reductions in cortical volume and thickness. We observed systematic increases in the estimated cortical volume and thickness across all neuroanatomical regions and a relative reduction in percent error values of up to 66% (for cerebral white matter volume) relative to reference standard scans. In part (2), SAMER resulted in statistically significant volume increases across anatomical regions, with the most pronounced increases seen in the parietal and temporal lobes, and general reductions in percent error relative to reference standard clinical scans. CONCLUSION SAMER improves the accuracy of morphometry through systematic increases and recovery of the estimated cortical volume and cortical thickness following motion correction, which may affect the evaluation of regional cortical atrophy in patients undergoing evaluation for dementia.
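As a concrete illustration of the percent-error comparison described in this abstract, the sketch below (Python, with made-up volumes rather than values from the study) computes the error of a regional volume estimate against a reference-standard scan and the relative reduction in that error after motion correction.

```python
def percent_error(measured_mm3, reference_mm3):
    """Absolute percent error of a regional volume estimate relative to the
    reference-standard (motion-free) measurement."""
    return 100.0 * abs(measured_mm3 - reference_mm3) / reference_mm3

# Hypothetical cerebral white-matter volumes (mm^3); not values from the paper.
reference   = 450_000.0   # reference-standard scan
uncorrected = 430_000.0   # motion-corrupted MPRAGE
corrected   = 444_000.0   # motion-corrected MPRAGE

err_unc = percent_error(uncorrected, reference)
err_cor = percent_error(corrected, reference)
relative_reduction = 100.0 * (err_unc - err_cor) / err_unc
print(f"uncorrected: {err_unc:.2f}%  corrected: {err_cor:.2f}%  "
      f"relative reduction: {relative_reduction:.0f}%")
```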
Affiliation(s)
- Nelson Gil
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA.
| | - Azadeh Tabari
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Min Lang
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Komal Awan
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Kyla Gaudet
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Stephen Cauley
- Harvard Medical School, Boston, MA, USA; Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Susie Y Huang
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
3
Hewlett M, Petrov I, Johnson PM, Drangova M. Deep-learning-based motion correction using multichannel MRI data: a study using simulated artifacts in the fastMRI dataset. NMR Biomed 2024; 37:e5179. [PMID: 38808752 DOI: 10.1002/nbm.5179]
Abstract
Deep learning presents a generalizable solution for motion correction requiring no pulse sequence modifications or additional hardware, but previous networks have all been applied to coil-combined data. Multichannel MRI data provide a degree of spatial encoding that may be useful for motion correction. We hypothesize that incorporating deep learning for motion correction prior to coil combination will improve results. A conditional generative adversarial network was trained using simulated rigid motion artifacts in brain images acquired at multiple sites with multiple contrasts (not limited to healthy subjects). We compared the performance of deep-learning-based motion correction on individual channel images (single-channel model) with that performed after coil combination (channel-combined model). We also investigated simultaneous motion correction of all channel data from an image volume (multichannel model). The single-channel model significantly (p < 0.0001) improved mean absolute error, with an average 50.9% improvement compared with the uncorrected images. This was significantly (p < 0.0001) better than the 36.3% improvement achieved by the channel-combined model (the conventional approach). The multichannel model provided no significant improvement in quantitative measures of image quality compared with the uncorrected images. Results were independent of the presence of pathology and generalizable to a new center unseen during training. Performing motion correction on single-channel images prior to coil combination improved performance compared with conventional deep-learning-based motion correction. Improved deep learning methods for retrospective correction of motion-affected MR images could reduce the need for repeat scans if applied in a clinical setting.
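The simulation-based comparison in this abstract hinges on generating rigid-motion artifacts in k-space and measuring mean absolute error on coil-combined images. The sketch below (NumPy, hypothetical function names, with a placeholder in place of the trained network) shows one minimal way to simulate an in-plane translation partway through a Cartesian acquisition and to compare errors after root-sum-of-squares combination; the paper's simulation and cGAN pipeline are more elaborate.

```python
import numpy as np

def simulate_translation_artifact(image, shift=(3.0, 0.0), corrupt_fraction=0.4):
    """Corrupt an image with a rigid in-plane translation occurring partway
    through a Cartesian acquisition: by the Fourier shift theorem, a spatial
    shift multiplies k-space by a linear phase ramp, so the ramp is applied
    only to the phase-encode lines 'acquired' after the motion."""
    ny, nx = image.shape
    kspace = np.fft.fftshift(np.fft.fft2(image))
    ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]   # cycles/sample
    kx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    ramp = np.exp(-2j * np.pi * (ky * shift[0] + kx * shift[1]))
    n_late = int(corrupt_fraction * ny)                 # lines after the motion
    kspace[-n_late:, :] *= ramp[-n_late:, :]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

def mae(a, b):
    return float(np.mean(np.abs(a - b)))

rss = lambda channels: np.sqrt(sum(np.abs(c) ** 2 for c in channels))

# Stand-in 8-channel coil images; in practice these would come from
# multichannel k-space data such as fastMRI.
clean = [np.random.rand(256, 256) for _ in range(8)]
corrupted = [simulate_translation_artifact(c) for c in clean]
print("MAE, corrupted vs. clean (RSS combined):", mae(rss(corrupted), rss(clean)))
```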
Affiliation(s)
- Miriam Hewlett
- Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
- Department of Medical Biophysics, The University of Western Ontario, London, Ontario, Canada
- Ivailo Petrov
- Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
- Patricia M Johnson
- Department of Radiology, New York University Grossman School of Medicine, New York, New York, USA
- Maria Drangova
- Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
- Department of Medical Biophysics, The University of Western Ontario, London, Ontario, Canada
4
Safari M, Yang X, Chang CW, Qiu RLJ, Fatemi A, Archambault L. Unsupervised MRI motion artifact disentanglement: introducing MAUDGAN. Phys Med Biol 2024; 69:115057. [PMID: 38714192 DOI: 10.1088/1361-6560/ad4845]
Abstract
Objective. This study developed an unsupervised motion artifact reduction method for magnetic resonance imaging (MRI) of patients with brain tumors. The proposed design uses multi-parametric, multicenter contrast-enhanced T1W (ceT1W) and T2-FLAIR MRI images. Approach. The proposed framework included two generators, two discriminators, and two feature-extractor networks. Three-fold cross-validation was used to train and fine-tune the hyperparameters of the model using 230 brain MRI images with tumors; the model was then tested on 148 patients' in-vivo datasets. An ablation study was performed to evaluate the model's components. Our model was compared with Pix2pix and CycleGAN. Six evaluation metrics were reported: normalized mean squared error (NMSE), structural similarity index (SSIM), multi-scale SSIM (MS-SSIM), peak signal-to-noise ratio (PSNR), visual information fidelity (VIF), and multi-scale gradient magnitude similarity deviation (MS-GMSD). Artifact reduction and consistency of tumor regions, image contrast, and sharpness were rated by three evaluators using Likert scales and compared with ANOVA and Tukey's HSD tests. Main results. On average, our method outperformed the comparative models in removing heavy motion artifacts, with the lowest NMSE (18.34 ± 5.07%) and MS-GMSD (0.07 ± 0.03) at the heavy artifact level. It also created motion-free images with the highest SSIM (0.93 ± 0.04), PSNR (30.63 ± 4.96), and VIF (0.45 ± 0.05) values, along with comparable MS-SSIM (0.96 ± 0.31). Similarly, our method outperformed the comparative models in removing in-vivo motion artifacts across distortion levels, except for MS-SSIM and VIF, where performance was comparable with CycleGAN. Moreover, its performance was consistent across artifact levels. For heavy motion artifacts, the Likert scores were 2.82 ± 0.52 for our method, 1.88 ± 0.71 for CycleGAN, and 1.02 ± 0.14 for Pix2pix (p-values ≪ 0.0001), and similar trends were found at other artifact levels. Significance. Our proposed unsupervised method was demonstrated to reduce motion artifacts from ceT1W brain images under a multi-parametric framework.
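Several of the metrics reported in this abstract are straightforward to reproduce; the sketch below (NumPy/scikit-image, hypothetical array names rather than study data) computes NMSE, SSIM, and PSNR between a reference image and a corrected image. VIF, MS-SSIM, and MS-GMSD are omitted because they need more specialized implementations.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def nmse_percent(reference, test):
    """Normalized mean squared error, reported as a percentage as in the abstract."""
    reference = reference.astype(np.float64)
    test = test.astype(np.float64)
    return 100.0 * np.sum((reference - test) ** 2) / np.sum(reference ** 2)

# Hypothetical motion-free reference and model output, scaled to [0, 1].
reference = np.random.rand(256, 256)
corrected = np.clip(reference + 0.05 * np.random.randn(256, 256), 0.0, 1.0)

print("NMSE (%):", nmse_percent(reference, corrected))
print("SSIM    :", structural_similarity(reference, corrected, data_range=1.0))
print("PSNR    :", peak_signal_noise_ratio(reference, corrected, data_range=1.0))
```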
Affiliation(s)
- Mojtaba Safari
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Québec, Québec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Québec, Québec, Canada
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Ali Fatemi
- Department of Physics, Jackson State University, Jackson, MS, United States of America
- Merit Health Central, Department of Radiation Oncology, Gamma Knife Center, Jackson, MS, United States of America
- Louis Archambault
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Québec, Québec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Québec, Québec, Canada
5
Mio M, Tabata N, Toyofuku T, Nakamura H. [Reduction of Motion Artifacts in Liver MRI Using Deep Learning with High-pass Filtering]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2024; 80:510-518. [PMID: 38462509 DOI: 10.6009/jjrt.2024-1408]
Abstract
PURPOSE To investigate whether deep learning with high-pass filtering can effectively reduce motion artifacts in magnetic resonance (MR) images of the liver. METHODS The subjects were 69 patients who underwent liver MR examination at our hospital. Simulated motion artifact images (SMAIs) were created from non-artifact images (NAIs) and used for deep learning. The structural similarity index measure (SSIM) and contrast ratio (CR) were used to verify the reduction of motion artifacts in the motion artifact reduction images (MARIs) output by the trained deep learning model. In the visual assessment, reduction of motion artifacts and image sharpness were compared between motion artifact images (MAIs) and MARIs. RESULTS The SSIM values were 0.882 for the MARIs and 0.869 for the SMAIs. There was no statistically significant difference in CR between NAIs and MARIs. The visual assessment showed that MARIs had reduced motion artifacts and improved sharpness compared to MAIs. CONCLUSION The results indicate that the learning model in this study reduces motion artifacts without decreasing the sharpness of liver MR images.
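The abstract does not specify how the high-pass filtering is implemented; the sketch below (NumPy, with an assumed radial cutoff and parameter names of my own choosing) shows one plausible k-space high-pass filter that suppresses the low-frequency core so a network sees mainly edge and fine-structure information. The paper's actual filter design may differ.

```python
import numpy as np

def kspace_highpass(image, cutoff=0.05):
    """Zero the low-spatial-frequency centre of k-space (radius < cutoff,
    in normalized frequency units) and return the filtered image."""
    ny, nx = image.shape
    kspace = np.fft.fftshift(np.fft.fft2(image))
    ky = np.linspace(-0.5, 0.5, ny, endpoint=False)[:, None]
    kx = np.linspace(-0.5, 0.5, nx, endpoint=False)[None, :]
    kspace[np.sqrt(ky ** 2 + kx ** 2) < cutoff] = 0.0   # remove low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(kspace)))

# Example: high-pass version of a stand-in liver slice.
slice_hp = kspace_highpass(np.random.rand(320, 320), cutoff=0.05)
```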
Affiliation(s)
- Motohira Mio
- Department of Radiology, Fukuoka University Chikushi Hospital
- Nariaki Tabata
- Department of Radiology, Fukuoka University Chikushi Hospital
- Tatsuo Toyofuku
- Department of Radiology, Fukuoka University Chikushi Hospital
6
Safari M, Yang X, Fatemi A, Archambault L. MRI motion artifact reduction using a conditional diffusion probabilistic model (MAR-CDPM). Med Phys 2024; 51:2598-2610. [PMID: 38009583 DOI: 10.1002/mp.16844]
Abstract
BACKGROUND High-resolution magnetic resonance imaging (MRI) with excellent soft-tissue contrast is a valuable tool utilized for diagnosis and prognosis. However, MRI sequences with long acquisition times are susceptible to motion artifacts, which can adversely affect the accuracy of post-processing algorithms. PURPOSE This study proposes a novel retrospective motion correction method named "motion artifact reduction using conditional diffusion probabilistic model" (MAR-CDPM). The MAR-CDPM aimed to remove motion artifacts from a multicenter three-dimensional contrast-enhanced T1 magnetization-prepared rapid acquisition gradient echo (3D ceT1 MPRAGE) brain dataset with different brain tumor types. MATERIALS AND METHODS This study employed two publicly accessible MRI datasets: one containing 3D ceT1 MPRAGE and 2D T2-fluid attenuated inversion recovery (FLAIR) images from 230 patients with diverse brain tumors, and the other comprising 3D T1-weighted (T1W) MRI images of 148 healthy volunteers, which included real motion artifacts. The former was used to train and evaluate the model using in silico data, and the latter was used to evaluate the model's performance in removing real motion artifacts. Motion simulation was performed in the k-space domain to generate an in silico dataset with minor, moderate, and heavy distortion levels. The diffusion process of the MAR-CDPM was then implemented in k-space to convert structured data into Gaussian noise by gradually increasing motion artifact levels. A conditional network with a Unet backbone was trained to reverse the diffusion process and convert the distorted images back to structured data. The MAR-CDPM was trained in two scenarios: one conditioning on the time step t of the diffusion process, and the other conditioning on both t and T2-FLAIR images. The MAR-CDPM was quantitatively and qualitatively compared with supervised Unet, Unet conditioned on T2-FLAIR, CycleGAN, Pix2pix, and Pix2pix conditioned on T2-FLAIR models. To quantify the spatial distortions and the level of remaining motion artifacts after applying the models, quantitative metrics were reported, including normalized mean squared error (NMSE), structural similarity index (SSIM), multiscale structural similarity index (MS-SSIM), peak signal-to-noise ratio (PSNR), visual information fidelity (VIF), and multiscale gradient magnitude similarity deviation (MS-GMSD). Tukey's Honestly Significant Difference multiple comparison test was employed to quantify the differences between the models, where p-value < 0.05 was considered statistically significant. RESULTS Qualitatively, MAR-CDPM outperformed these methods in preserving soft-tissue contrast and different brain regions. It also successfully preserved tumor boundaries for heavy motion artifacts, like the supervised method. Our MAR-CDPM recovered motion-free in silico images with the highest PSNR and VIF for all distortion levels, where the differences were statistically significant (p-values < 0.05). In addition, our method conditioned on t and T2-FLAIR outperformed (p-values < 0.05) the other methods in removing motion artifacts from the in silico dataset in terms of NMSE, MS-SSIM, SSIM, and MS-GMSD. Moreover, our method conditioned on t only outperformed the generative models (p-values < 0.05) and had performance comparable with the supervised model (p-values > 0.05) in removing real motion artifacts. CONCLUSIONS The MAR-CDPM could successfully remove motion artifacts from 3D ceT1 MPRAGE images. It is particularly beneficial for elderly patients who may experience involuntary movements during high-resolution MRI with long acquisition times.
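For readers unfamiliar with diffusion models, the sketch below shows the standard DDPM forward (noising) step that a conditional network learns to reverse. It is the generic textbook formulation with an assumed linear beta schedule applied to complex k-space data, not the paper's motion-coupled variant.

```python
import numpy as np

def ddpm_forward(x0, t, T=1000, beta_start=1e-4, beta_end=0.02, rng=None):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I).
    Here x0 is complex k-space data, so circular complex Gaussian noise is used."""
    rng = rng or np.random.default_rng()
    betas = np.linspace(beta_start, beta_end, T)
    alpha_bar = np.cumprod(1.0 - betas)[t]
    noise = (rng.standard_normal(x0.shape)
             + 1j * rng.standard_normal(x0.shape)) / np.sqrt(2)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise, noise

# Example: noising a stand-in k-space slice at an intermediate time step.
x0 = np.fft.fft2(np.random.rand(128, 128))
x_t, eps = ddpm_forward(x0, t=500)
```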
Affiliation(s)
- Mojtaba Safari
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Quebec, Quebec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Quebec, Quebec, Canada
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Ali Fatemi
- Department of Physics, Jackson State University, Jackson, Mississippi, USA
- Merit Health Central, Department of Radiation Oncology, Gamma Knife Center, Jackson, Mississippi, USA
- Louis Archambault
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Quebec, Quebec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Quebec, Quebec, Canada
7
Spieker V, Eichhorn H, Hammernik K, Rueckert D, Preibisch C, Karampinos DC, Schnabel JA. Deep Learning for Retrospective Motion Correction in MRI: A Comprehensive Review. IEEE Trans Med Imaging 2024; 43:846-859. [PMID: 37831582 DOI: 10.1109/tmi.2023.3323215]
Abstract
Motion represents one of the major challenges in magnetic resonance imaging (MRI). Since the MR signal is acquired in frequency space, any motion of the imaged object leads to complex artefacts in the reconstructed image in addition to other MR imaging artefacts. Deep learning has been frequently proposed for motion correction at several stages of the reconstruction process. The wide range of MR acquisition sequences, anatomies and pathologies of interest, and motion patterns (rigid vs. deformable and random vs. regular) makes a comprehensive solution unlikely. To facilitate the transfer of ideas between different applications, this review provides a detailed overview of proposed methods for learning-based motion correction in MRI together with their common challenges and potentials. This review identifies differences and synergies in underlying data usage, architectures, training and evaluation strategies. We critically discuss general trends and outline future directions, with the aim to enhance interaction between different application areas and research fields.
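As background to why frequency-space acquisition makes motion so disruptive (a standard Fourier-shift argument, not a result taken from this review): a rigid translation d(t) at the time a given k-space line is acquired multiplies that line by a linear phase ramp, so inconsistent phases across lines produce ghosting and blurring after reconstruction.

```latex
S_{\text{motion}}(k_x, k_y) = S(k_x, k_y)\,
  e^{-i 2\pi \left[ k_x\, d_x\!\left(t_{k_y}\right) + k_y\, d_y\!\left(t_{k_y}\right) \right]}
```

Rotations instead rotate the sampled k-space trajectory, and deformable or through-plane motion has no such simple closed form, which is part of why a single comprehensive correction method remains unlikely.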
8
Beddok A, Lim R, Thariat J, Shih HA, El Fakhri G. A Comprehensive Primer on Radiation Oncology for Non-Radiation Oncologists. Cancers (Basel) 2023; 15:4906. [PMID: 37894273 PMCID: PMC10605284 DOI: 10.3390/cancers15204906]
Abstract
Background: Multidisciplinary management is crucial in cancer diagnosis and treatment. Multidisciplinary teams include specialists in surgery, medical therapies, and radiation therapy (RT), each playing unique roles in oncology care. One significant aspect is RT, guided by radiation oncologists (ROs). This paper serves as a detailed primer for non-oncologists, medical students, and non-clinical investigators, educating them on contemporary RT practices. Methods: This report follows the process of RT planning and execution. Starting from the decision-making in multidisciplinary teams to the completion of RT and subsequent patient follow-up, it aims to offer non-oncologists a comprehensive understanding of the RO's work. Results: The first step in RT is a planning session that includes obtaining a CT scan of the area to be treated, known as the CT simulation. The patients are imaged in the exact position in which they will receive treatment. The second step, which is the primary source of uncertainty, involves the delineation of treatment targets and organs at risk (OAR). The objective is to ensure precise irradiation of the target volume while sparing the OARs as much as possible. Various radiation modalities, such as external beam therapy with electrons, photons, or particles (including protons and carbon ions), as well as brachytherapy, are utilized. Within these modalities, several techniques, such as three-dimensional conformal RT, intensity-modulated RT, volumetric modulated arc therapy, scattering beam proton therapy, and intensity-modulated proton therapy, are employed to achieve optimal treatment outcomes. RT plan development is an iterative process involving medical physicists, dosimetrists, and ROs. The complexity and time required vary, ranging from an hour to a week. Once approved, RT begins, with image-guided RT being standard practice for patient alignment. The RO manages acute toxicities during treatment and prepares a summary upon completion. There is considerable variance in practice, with some ROs offering lifelong follow-up and managing potential late effects of treatment. Conclusions: Comprehension of the clinical effects of RT by non-oncologist providers significantly elevates the quality of long-term patient care. Hence, educating non-oncologists enhances care for RT patients, underlining this report's importance.
Affiliation(s)
- Arnaud Beddok
- Department of Radiation Oncology, Institut Godinot, 51100 Reims, France
- Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Ruth Lim
- Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Juliette Thariat
- Department of Radiation Oncology, Centre François-Baclesse, 14000 Caen, France
- Helen A. Shih
- Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Georges El Fakhri
- Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
9
Pierre K, Haneberg AG, Kwak S, Peters KR, Hochhegger B, Sananmuang T, Tunlayadechanont P, Tighe PJ, Mancuso A, Forghani R. Applications of Artificial Intelligence in the Radiology Roundtrip: Process Streamlining, Workflow Optimization, and Beyond. Semin Roentgenol 2023; 58:158-169. [PMID: 37087136 DOI: 10.1053/j.ro.2023.02.003]
Abstract
There are many impactful applications of artificial intelligence (AI) in the electronic radiology roundtrip and the patient's journey through the healthcare system that go beyond diagnostic applications. These tools have the potential to improve quality and safety, optimize workflow, increase efficiency, and increase patient satisfaction. In this article, we review the role of AI for process improvement and workflow enhancement, covering applications from the time of order entry and scan acquisition, applications supporting the image interpretation task, and applications supporting tasks after image interpretation, such as result communication. These non-diagnostic workflow and process optimization tasks are an important part of the arsenal of potential AI tools that can streamline day-to-day clinical practice and patient care.
Affiliation(s)
- Kevin Pierre
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Adam G Haneberg
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Division of Medical Physics, Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Sean Kwak
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL
- Keith R Peters
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Bruno Hochhegger
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Thiparom Sananmuang
- Department of Diagnostic and Therapeutic Radiology and Research, Faculty of Medicine Ramathibodi Hospital, Ratchathewi, Bangkok, Thailand
- Padcha Tunlayadechanont
- Department of Diagnostic and Therapeutic Radiology and Research, Faculty of Medicine Ramathibodi Hospital, Ratchathewi, Bangkok, Thailand
- Patrick J Tighe
- Departments of Anesthesiology & Orthopaedic Surgery, University of Florida College of Medicine, Gainesville, FL
- Anthony Mancuso
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Reza Forghani
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL; Division of Medical Physics, Department of Radiology, University of Florida College of Medicine, Gainesville, FL.
10
Chen Z, Pawar K, Ekanayake M, Pain C, Zhong S, Egan GF. Deep Learning for Image Enhancement and Correction in Magnetic Resonance Imaging-State-of-the-Art and Challenges. J Digit Imaging 2023; 36:204-230. [PMID: 36323914 PMCID: PMC9984670 DOI: 10.1007/s10278-022-00721-9]
Abstract
Magnetic resonance imaging (MRI) provides excellent soft-tissue contrast for clinical diagnoses and research which underpin many recent breakthroughs in medicine and biology. The post-processing of reconstructed MR images is often automated for incorporation into MRI scanners by the manufacturers and increasingly plays a critical role in the final image quality for clinical reporting and interpretation. For image enhancement and correction, the post-processing steps include noise reduction, image artefact correction, and image resolution improvements. With the recent success of deep learning in many research fields, there is great potential to apply deep learning for MR image enhancement, and recent publications have demonstrated promising results. Motivated by the rapidly growing literature in this area, in this review paper, we provide a comprehensive overview of deep learning-based methods for post-processing MR images to enhance image quality and correct image artefacts. We aim to provide researchers in MRI or other research fields, including computer vision and image processing, a literature survey of deep learning approaches for MR image enhancement. We discuss the current limitations of the application of artificial intelligence in MRI and highlight possible directions for future developments. In the era of deep learning, we highlight the importance of a critical appraisal of the explanatory information provided and the generalizability of deep learning algorithms in medical imaging.
Affiliation(s)
- Zhaolin Chen
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia.
- Department of Data Science and AI, Monash University, Melbourne, VIC, Australia.
- Kamlesh Pawar
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- Mevan Ekanayake
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, VIC, Australia
- Cameron Pain
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, VIC, Australia
- Shenjun Zhong
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- National Imaging Facility, Brisbane, QLD, Australia
- Gary F Egan
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, VIC, Australia
11
Yoshida N, Kageyama H, Akai H, Yasaka K, Sugawara H, Okada Y, Kunimatsu A. Motion correction in MR image for analysis of VSRAD using generative adversarial network. PLoS One 2022; 17:e0274576. [PMID: 36103561 PMCID: PMC9473426 DOI: 10.1371/journal.pone.0274576]
Abstract
Voxel-based specific regional analysis systems for Alzheimer's disease (VSRAD) are clinically used to measure hippocampal atrophy on magnetic resonance imaging (MRI). However, motion artifacts during image acquisition may distort the results of the analysis. This study aims to evaluate the usefulness of the Pix2Pix network for motion correction of the input images for VSRAD analysis. Seventy-three patients examined with MRI were divided into a training group (n = 51) and a test group (n = 22). To create artifact images, the k-space data were manipulated. Supervised deep learning was employed to obtain a Pix2Pix model that generates motion-corrected images, with artifact images as the input data and original images as the reference data. The results of the VSRAD analysis (severity of voxel-of-interest (VOI) atrophy, extent of gray matter (GM) atrophy, and extent of VOI atrophy) were recorded for the artifact images and the motion-corrected images, and were then compared with the original images. For comparison, the image quality of the Pix2Pix-generated motion-corrected images was also compared with that of U-Net. The Bland-Altman analysis showed that the mean of the limits of agreement was smaller for the motion-corrected images than for the artifact images, suggesting successful motion correction by the Pix2Pix. Spearman's rank correlation coefficients between the original and motion-corrected images were almost perfect for all results (severity of VOI atrophy: 0.87-0.99, extent of GM atrophy: 0.88-0.98, extent of VOI atrophy: 0.90-1.00). Pix2Pix-generated motion-corrected images showed generally improved quantitative and qualitative image quality compared with the U-Net-generated motion-corrected images. Our findings suggest that motion correction using Pix2Pix is a useful method for VSRAD analysis.
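The agreement analysis described in this abstract is easy to reproduce; the sketch below (NumPy/SciPy, with hypothetical VSRAD scores rather than study data) computes the Bland-Altman mean difference with 95% limits of agreement and Spearman's rank correlation between scores from original and motion-corrected images.

```python
import numpy as np
from scipy import stats

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between two sets of scores."""
    diff = np.asarray(b, dtype=float) - np.asarray(a, dtype=float)
    mean_diff = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return mean_diff, (mean_diff - half_width, mean_diff + half_width)

# Hypothetical severity-of-VOI-atrophy scores for five cases (not study data).
original  = [1.2, 2.4, 0.8, 3.1, 1.9]
corrected = [1.3, 2.3, 0.9, 3.0, 2.0]

mean_diff, limits = bland_altman(original, corrected)
rho, p = stats.spearmanr(original, corrected)
print(f"mean difference {mean_diff:+.2f}, limits of agreement {limits}")
print(f"Spearman rho {rho:.2f} (p = {p:.3f})")
```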
Affiliation(s)
- Nobukiyo Yoshida
- Department of Radiology, Institute of Medical Science, The University of Tokyo, Minato-ku, Tokyo, Japan
- Division of Health Science, Graduate School of Health Science, Suzuka University of Medical Science, Suzuka-city, Mie, Japan
- Hajime Kageyama
- Department of Radiology, Institute of Medical Science, The University of Tokyo, Minato-ku, Tokyo, Japan
- Hiroyuki Akai
- Department of Radiology, Institute of Medical Science, The University of Tokyo, Minato-ku, Tokyo, Japan
- Koichiro Yasaka
- Department of Radiology, The University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan
- Haruto Sugawara
- Department of Radiology, Institute of Medical Science, The University of Tokyo, Minato-ku, Tokyo, Japan
- Yukinori Okada
- Division of Health Science, Graduate School of Health Science, Suzuka University of Medical Science, Suzuka-city, Mie, Japan
- Department of Radiology, Tokyo Medical University, Shinjuku-ku, Tokyo, Japan
- Akira Kunimatsu
- Department of Radiology, Mita Hospital, International University of Health and Welfare, Minato-ku, Tokyo, Japan