1
Chatterjee S, Sciarra A, Dünnwald M, Ashoka ABT, Vasudeva MGC, Saravanan S, Sambandham VT, Tummala P, Oeltze-Jafra S, Speck O, Nürnberger A. Beyond Nyquist: A Comparative Analysis of 3D Deep Learning Models Enhancing MRI Resolution. J Imaging 2024; 10:207. PMID: 39330427; PMCID: PMC11433164; DOI: 10.3390/jimaging10090207.
Abstract
High-spatial-resolution MRI produces abundant structural information, enabling highly accurate clinical diagnosis and image-guided therapeutics. However, acquiring high-spatial-resolution MRI data typically comes at the expense of reduced spatial coverage, lower signal-to-noise ratio (SNR), and longer scan times due to physical, physiological, and hardware limitations. To overcome these limitations, deep-learning-based super-resolution MRI techniques can be utilised. In this work, several state-of-the-art 3D convolutional neural network models for super-resolution (RRDB, SPSR, UNet, UNet-MSS and ShuffleUNet) were compared, with the goal of finding the best model in terms of performance and robustness. The public IXI dataset (structural images only) was used. Data were artificially downsampled to obtain lower-spatial-resolution MRIs (downsampling factors from 8 to 64). When performance was assessed on the test set using the SSIM metric, all models performed well. In particular, regardless of the downsampling factor, the UNet consistently obtained the top results, whereas the SPSR model consistently performed worst. In conclusion, UNet and UNet-MSS achieved the overall top performances, while RRDB performed relatively poorly compared to the other models.
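The SSIM metric used for model comparison above can be illustrated with a simplified, single-window variant (production evaluations use a sliding Gaussian window, as in scikit-image); this numpy sketch, with synthetic volumes, is for intuition only:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Simplified SSIM: statistics over the whole image instead of a
    sliding local window, so it returns a single global score."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

# Identical volumes score 1.0; degraded (e.g. downsampled) volumes score lower.
rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))
noisy = np.clip(vol + 0.1 * rng.standard_normal(vol.shape), 0, 1)
print(ssim_global(vol, vol))          # ≈ 1.0
print(ssim_global(vol, noisy) < 1.0)  # True
```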
Affiliation(s)
- Soumick Chatterjee
- Data and Knowledge Engineering Group, Otto von Guericke University Magdeburg, 39106 Magdeburg, Germany
- Faculty of Computer Science, Otto von Guericke University Magdeburg, 39106 Magdeburg, Germany
- Genomics Research Centre, Human Technopole, 20157 Milan, Italy
- Alessandro Sciarra
- Department of Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, 39106 Magdeburg, Germany
- MedDigit, Department of Neurology, Medical Faculty, University Hospital Magdeburg, 39120 Magdeburg, Germany
- Max Dünnwald
- Faculty of Computer Science, Otto von Guericke University Magdeburg, 39106 Magdeburg, Germany
- MedDigit, Department of Neurology, Medical Faculty, University Hospital Magdeburg, 39120 Magdeburg, Germany
- Anitha Bhat Talagini Ashoka
- Faculty of Computer Science, Otto von Guericke University Magdeburg, 39106 Magdeburg, Germany
- Fraunhofer Institute for Digital Media Technology, 98693 Ilmenau, Germany
- Mayura Gurjar Cheepinahalli Vasudeva
- Faculty of Computer Science, Otto von Guericke University Magdeburg, 39106 Magdeburg, Germany
- Shudarsan Saravanan
- Faculty of Computer Science, Otto von Guericke University Magdeburg, 39106 Magdeburg, Germany
- Venkatesh Thirugnana Sambandham
- Faculty of Computer Science, Otto von Guericke University Magdeburg, 39106 Magdeburg, Germany
- Pavan Tummala
- Faculty of Computer Science, Otto von Guericke University Magdeburg, 39106 Magdeburg, Germany
- Steffen Oeltze-Jafra
- MedDigit, Department of Neurology, Medical Faculty, University Hospital Magdeburg, 39120 Magdeburg, Germany
- German Centre for Neurodegenerative Diseases, 37075 Magdeburg, Germany
- Centre for Behavioural Brain Sciences, 39106 Magdeburg, Germany
- Peter L. Reichertz Institute for Medical Informatics, Hannover Medical School, 30625 Hannover, Germany
- Oliver Speck
- Department of Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, 39106 Magdeburg, Germany
- German Centre for Neurodegenerative Diseases, 37075 Magdeburg, Germany
- Centre for Behavioural Brain Sciences, 39106 Magdeburg, Germany
- Andreas Nürnberger
- Data and Knowledge Engineering Group, Otto von Guericke University Magdeburg, 39106 Magdeburg, Germany
- Faculty of Computer Science, Otto von Guericke University Magdeburg, 39106 Magdeburg, Germany
- Centre for Behavioural Brain Sciences, 39106 Magdeburg, Germany
2
Wang S, Wu R, Jia S, Diakite A, Li C, Liu Q, Zheng H, Ying L. Knowledge-driven deep learning for fast MR imaging: Undersampled MR image reconstruction from supervised to un-supervised learning. Magn Reson Med 2024; 92:496-518. PMID: 38624162; DOI: 10.1002/mrm.30105.
Abstract
Deep learning (DL) has emerged as a leading approach to accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural-image restoration problems, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks. This domain knowledge needs to be integrated with data-driven approaches. This review introduces the significant challenges faced by such knowledge-driven DL approaches in the context of fast MRI, along with several notable solutions, including the learning of neural networks and the handling of different imaging application scenarios. The traits and trends of these techniques are also discussed, showing a shift from supervised learning to semi-supervised learning and, finally, to unsupervised learning methods. In addition, MR vendors' choices of DL reconstruction are presented, along with discussions of open questions and future directions that are critical for reliable imaging systems.
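A core piece of physics knowledge that such knowledge-driven networks integrate is k-space data consistency: wherever k-space was actually measured, the reconstruction should agree with the measurement. A minimal numpy sketch (the network is replaced by a zero-filled initial estimate; the mask, sizes, and sampling density are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.random((64, 64))
kspace_full = np.fft.fft2(truth)
mask = rng.random((64, 64)) < 0.3      # 30% of k-space actually measured
measured = kspace_full * mask

def data_consistency(img_estimate, measured, mask):
    """Hard data consistency: re-insert the measured k-space samples into
    the estimate's spectrum, then transform back to (complex) image space."""
    k_est = np.fft.fft2(img_estimate)
    k_dc = np.where(mask, measured, k_est)  # keep measured samples verbatim
    return np.fft.ifft2(k_dc)

zero_filled = np.fft.ifft2(measured)   # naive starting point
refined = data_consistency(zero_filled, measured, mask)
# At every measured k-space location, `refined` now matches the acquisition.
```

In an unrolled network this operation is interleaved with learned denoising blocks, so the output can never contradict the acquired data.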
Affiliation(s)
- Shanshan Wang
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ruoyou Wu
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Alou Diakite
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Cheng Li
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Hairong Zheng
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Leslie Ying
- Department of Biomedical Engineering and Department of Electrical Engineering, The State University of New York, Buffalo, New York, USA
3
Liu X, Pang Y, Liu Y, Jin R, Sun Y, Liu Y, Xiao J. Dual-domain faster Fourier convolution based network for MR image reconstruction. Comput Biol Med 2024; 177:108603. PMID: 38781646; DOI: 10.1016/j.compbiomed.2024.108603.
Abstract
Deep learning methods for fast MRI have shown promise in reconstructing high-quality images from undersampled multi-coil k-space data, leading to reduced scan duration. However, existing methods encounter challenges related to limited receptive fields in dual-domain (k-space and image domains) reconstruction networks, rigid data consistency operations, and suboptimal refinement structures, which collectively restrict overall reconstruction performance. This study introduces a comprehensive framework that addresses these challenges and enhances MR image reconstruction quality. Firstly, we propose Faster Inverse Fourier Convolution (FasterIFC), a frequency domain convolutional operator that significantly expands the receptive field of k-space domain reconstruction networks. Expanding the information extraction range to the entire frequency spectrum according to the spectral convolution theorem in Fourier theory enables the network to easily utilize richer redundant long-range information from adjacent, symmetrical, and diagonal locations of multi-coil k-space data. Secondly, we introduce a novel softer Data Consistency (softerDC) layer, which achieves an enhanced balance between data consistency and smoothness. This layer facilitates the implementation of diverse data consistency strategies across distinct frequency positions, addressing the inflexibility observed in current methods. Finally, we present the Dual-Domain Faster Fourier Convolution Based Network (D2F2), which features a centrosymmetric dual-domain parallel structure based on FasterIFC. This architecture optimally leverages dual-domain data characteristics while substantially expanding the receptive field in both domains. Coupled with the softerDC layer, D2F2 demonstrates superior performance on the NYU fastMRI dataset at multiple acceleration factors, surpassing state-of-the-art methods in both quantitative and qualitative evaluations.
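The spectral convolution theorem invoked above is easy to verify numerically: pointwise multiplication of two FFTs equals circular convolution in image space, which is why a single frequency-domain weight layer has a receptive field spanning the entire image. A small numpy check (sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((8, 8))
kernel = np.zeros((8, 8))
kernel[:3, :3] = rng.random((3, 3))    # small spatial kernel, zero-padded

# Frequency-domain path: one pointwise multiply in k-space.
freq = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)).real

# Reference: direct circular convolution in image space.
direct = np.zeros_like(img)
for i in range(8):
    for j in range(8):
        for u in range(8):
            for v in range(8):
                direct[i, j] += img[(i - u) % 8, (j - v) % 8] * kernel[u, v]

print(np.allclose(freq, direct))  # True
```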
Affiliation(s)
- Xiaohan Liu
- TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China; Tiandatz Technology Co. Ltd., Tianjin, 300072, China
- Yanwei Pang
- TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Yiming Liu
- TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Ruiqi Jin
- TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Yong Sun
- TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Yu Liu
- TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Jing Xiao
- TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China; Department of Economic Management, Hebei Chemical and Pharmaceutical College, Shijiazhuang, Hebei, 050026, China
4
Wang B, Lian Y, Xiong X, Zhou H, Liu Z, Zhou X. DCT-net: Dual-domain cross-fusion transformer network for MRI reconstruction. Magn Reson Imaging 2024; 107:69-79. PMID: 38237693; DOI: 10.1016/j.mri.2024.01.007.
Abstract
Current challenges in Magnetic Resonance Imaging (MRI) include long acquisition times and motion artifacts. To address these issues, under-sampled k-space acquisition has gained popularity as a fast imaging method. However, recovering fine details from under-sampled data remains challenging. In this study, we introduce a deep learning approach, DCT-Net, designed for dual-domain MRI reconstruction. DCT-Net integrates information from the image domain (IRM) and the frequency domain (FRM) using a novel Cross Attention Block (CAB) and Fusion Attention Block (FAB). These blocks enable precise feature extraction and adaptive fusion across both domains, resulting in a significant enhancement of reconstructed image quality. The adaptive interaction and fusion mechanisms of the CAB and FAB contribute to the method's effectiveness in capturing distinctive features and optimizing image reconstruction. Comprehensive ablation studies were conducted to assess the contributions of these modules to reconstruction quality and accuracy. Experimental results on the FastMRI (2023) and Calgary-Campinas (2021) datasets demonstrate the superiority of our MRI reconstruction framework over other typical methods (most published in 2022 or 2023) in both qualitative and quantitative evaluations, for knee and brain datasets under 4× and 8× accelerated imaging.
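Cross attention between two feature domains can be sketched as plain scaled dot-product attention, where one domain supplies the queries and the other the keys and values. This is a generic illustration, not the paper's CAB/FAB design; the token counts, dimensions, and absence of learned projections are simplifications:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries_a, keys_b, values_b):
    """Tokens from one domain (queries) attend to tokens from the other
    domain (keys/values); learned projection matrices are omitted."""
    d = queries_a.shape[-1]
    scores = queries_a @ keys_b.T / np.sqrt(d)   # (tokens_a, tokens_b)
    return softmax(scores) @ values_b            # fused features for domain A

rng = np.random.default_rng(3)
image_feats = rng.random((16, 8))   # 16 image-domain tokens, dim 8
freq_feats = rng.random((16, 8))    # 16 frequency-domain tokens, dim 8
fused = cross_attention(image_feats, freq_feats, freq_feats)
print(fused.shape)  # (16, 8)
```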
Affiliation(s)
- Bin Wang
- National Institute of Metrology, Beijing 100029, China; Key Laboratory of Metrology Digitalization and Digital Metrology for State Market Regulation, Beijing 100029, China; School of Printing and Packaging Engineering, Beijing Institute of Graphic Communication, Beijing 102600, China
- Yusheng Lian
- School of Printing and Packaging Engineering, Beijing Institute of Graphic Communication, Beijing 102600, China
- Xingchuang Xiong
- National Institute of Metrology, Beijing 100029, China; Key Laboratory of Metrology Digitalization and Digital Metrology for State Market Regulation, Beijing 100029, China
- Han Zhou
- School of Printing and Packaging Engineering, Beijing Institute of Graphic Communication, Beijing 102600, China
- Zilong Liu
- National Institute of Metrology, Beijing 100029, China; Key Laboratory of Metrology Digitalization and Digital Metrology for State Market Regulation, Beijing 100029, China
- Xiaohao Zhou
- State Key Laboratory of Infrared Physics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
5
Murray V, Siddiq S, Crane C, El Homsi M, Kim TH, Wu C, Otazo R. Movienet: Deep space-time-coil reconstruction network without k-space data consistency for fast motion-resolved 4D MRI. Magn Reson Med 2024; 91:600-614. PMID: 37849064; PMCID: PMC10842259; DOI: 10.1002/mrm.29892.
Abstract
PURPOSE: To develop a novel deep learning approach for 4D-MRI reconstruction, named Movienet, which exploits space-time-coil correlations and motion preservation instead of k-space data consistency, to accelerate the acquisition of golden-angle radial data and enable subsecond reconstruction times in dynamic MRI.
METHODS: Movienet uses a U-net architecture with modified residual learning blocks that operate entirely in the image domain to remove aliasing artifacts and reconstruct an unaliased motion-resolved 4D image. Motion preservation is enforced by sorting the input image and the training reference in linear motion order from expiration to inspiration. The input image was collected with a lower scan time than the reference XD-GRASP image used for training. Movienet is demonstrated for motion-resolved 4D MRI and motion-resistant 3D MRI of abdominal tumors on a therapeutic 1.5T MR-Linac (1.5-fold acquisition acceleration) and diagnostic 3T MRI scanners (2-fold and 2.25-fold acquisition acceleration for 4D and 3D, respectively). Image quality was evaluated quantitatively and qualitatively by expert clinical readers.
RESULTS: The reconstruction time of Movienet was 0.69 s (4 motion states) and 0.75 s (10 motion states), substantially lower than iterative XD-GRASP and unrolled reconstruction networks. Movienet enables faster acquisition than XD-GRASP with similar overall image quality and improved suppression of streaking artifacts.
CONCLUSION: Movienet accelerates data acquisition with respect to compressed sensing and reconstructs 4D images in less than 1 s, which would enable an efficient clinical implementation of 4D MRI for fast motion-resistant 3D anatomical imaging or motion-resolved 4D imaging.
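Golden-angle radial acquisition, which Movienet and XD-GRASP both build on, spaces successive spokes by ≈111.25° (180° divided by the golden ratio), so any contiguous run of spokes covers k-space angles nearly uniformly and can later be re-sorted into motion states. A quick numpy illustration (spoke count and bin width arbitrary):

```python
import numpy as np

GOLDEN_ANGLE = 111.246117975  # degrees; 180 / golden ratio

def spoke_angles(n_spokes):
    """Angles of successive golden-angle radial spokes, wrapped to [0, 180)."""
    return (np.arange(n_spokes) * GOLDEN_ANGLE) % 180.0

angles = spoke_angles(400)
# Any contiguous window of spokes samples [0, 180) nearly uniformly, which is
# what allows retrospective sorting into arbitrary motion states.
hist, _ = np.histogram(angles, bins=18, range=(0, 180))
print(hist.min() > 0)  # True: every 10-degree sector contains spokes
```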
Affiliation(s)
- Victor Murray
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Syed Siddiq
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Christopher Crane
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Maria El Homsi
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Tae-Hyung Kim
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Can Wu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Ricardo Otazo
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
6
Qu B, Zhang J, Kang T, Lin J, Lin M, She H, Wu Q, Wang M, Zheng G. Radial magnetic resonance image reconstruction with a deep unrolled projected fast iterative soft-thresholding network. Comput Biol Med 2024; 168:107707. PMID: 38000244; DOI: 10.1016/j.compbiomed.2023.107707.
Abstract
Radial sampling of magnetic resonance imaging (MRI) data is an effective way to accelerate imaging, but preserving image details in the reconstruction is always challenging. In this work, a deep unrolled neural network is designed to emulate the iterative sparse image reconstruction process of the projected fast iterative soft-thresholding algorithm (pFISTA). The proposed method, an unrolled pFISTA network for Deep Radial MRI (pFISTA-DR), includes a preprocessing module to refine coil sensitivity maps and the initial reconstructed image, learnable convolution filters to extract image feature maps, and an adaptive threshold to robustly remove image artifacts. Experimental results show that, among the compared methods, pFISTA-DR provides the best reconstruction, achieving the highest PSNR, the highest SSIM, and the lowest reconstruction errors.
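The soft-thresholding operator at the heart of FISTA-type algorithms, which unrolled networks like the one above turn into a layer with a learnable threshold, is the proximal map of the L1 norm. A one-step illustration on a synthetic sparse signal (sizes, noise level, and threshold are invented):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm: shrink toward zero by lam and
    clip small entries to exactly zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Denoising a sparse signal: one proximal step removes the small noisy
# entries while keeping (slightly shrunken) true spikes.
rng = np.random.default_rng(4)
sparse = np.zeros(100)
sparse[rng.choice(100, 5, replace=False)] = 5.0
noisy = sparse + 0.1 * rng.standard_normal(100)
denoised = soft_threshold(noisy, 0.5)
print(np.count_nonzero(denoised))  # 5: only the true spikes survive
```

In the full iterative algorithm this step alternates with a gradient step on the data-fidelity term; the unrolled network replaces the fixed sparsifying transform and threshold with learned ones.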
Affiliation(s)
- Biao Qu
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, China
- Jialue Zhang
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, China; Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Department of Electronic Science, Xiamen University, China
- Taishan Kang
- Department of Radiology, Zhongshan Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Jianzhong Lin
- Department of Radiology, Zhongshan Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Meijin Lin
- Department of Applied Marine Physics & Engineering, College of Ocean and Earth Sciences, Xiamen University, Xiamen, China
- Huajun She
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Qingxia Wu
- Department of Medical Imaging, Henan Provincial People's Hospital, Zhengzhou, China
- Meiyun Wang
- Department of Medical Imaging, Henan Provincial People's Hospital, Zhengzhou, China; Laboratory of Brain Science and Brain-Like Intelligence Technology, Institute for Integrated Medical Science and Engineering, Henan Academy of Sciences, Zhengzhou, China
- Gaofeng Zheng
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, China
7
Li X, Wu Q, Wang M, Wu K. Uncertainty-aware network for fine-grained and imbalanced reflux esophagitis grading. Comput Biol Med 2024; 168:107751. PMID: 38016373; DOI: 10.1016/j.compbiomed.2023.107751.
Abstract
Computer-aided diagnosis (CAD) assists endoscopists in analyzing endoscopic images, reducing misdiagnosis rates and enabling timely treatment. A few studies have focused on CAD for gastroesophageal reflux disease, but CAD studies on reflux esophagitis (RE) remain inadequate. This paper presents a CAD study on RE using a dataset of over 3000 images collected from a hospital. We propose an uncertainty-aware network with handcrafted features, utilizing representation and classifier decoupling with metric learning to address class imbalance and achieve fine-grained RE classification. To enhance interpretability, the network estimates uncertainty through test-time augmentation. The experimental results demonstrate that the proposed network surpasses previous methods, achieving an accuracy of 90.2% and an F1 score of 90.1%.
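Test-time augmentation (TTA) uncertainty, as used above for interpretability, can be sketched generically: run the model on several randomly augmented copies of the input and report the mean prediction plus its spread. The classifier below is a toy stand-in, not the paper's network, and since it depends only on the image mean the flips leave its output unchanged (zero spread); a real network would produce a nonzero spread:

```python
import numpy as np

def predict(image):
    """Stand-in classifier: two-class probabilities from mean intensity."""
    p = 1.0 / (1.0 + np.exp(-(image.mean() - 0.5) * 10))
    return np.array([1 - p, p])

def tta_predict(image, n_aug=8, rng=None):
    """Average predictions over random flips; the per-augmentation standard
    deviation serves as a simple uncertainty estimate."""
    if rng is None:
        rng = np.random.default_rng(5)
    preds = []
    for _ in range(n_aug):
        aug = image
        if rng.random() < 0.5:
            aug = aug[:, ::-1]   # horizontal flip
        if rng.random() < 0.5:
            aug = aug[::-1, :]   # vertical flip
        preds.append(predict(aug))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

mean_prob, uncertainty = tta_predict(np.random.default_rng(6).random((32, 32)))
print(mean_prob.sum())  # 1.0: still a probability distribution
```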
Affiliation(s)
- Xingcun Li
- School of Management, Huazhong University of Science and Technology, Wuhan, 430074, China
- Qinghua Wu
- School of Management, Huazhong University of Science and Technology, Wuhan, 430074, China
- Mi Wang
- Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Kun Wu
- Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
8
Torfeh T, Aouadi S, Yoganathan SA, Paloor S, Hammoud R, Al-Hammadi N. Deep Learning Approaches for Automatic Quality Assurance of Magnetic Resonance Images Using ACR Phantom. BMC Med Imaging 2023; 23:197. PMID: 38031032; PMCID: PMC10685462; DOI: 10.1186/s12880-023-01157-5.
Abstract
BACKGROUND: In recent years, there has been a growing trend towards utilizing Artificial Intelligence (AI) and machine learning techniques in medical imaging, including for automating quality assurance. In this research, we aimed to develop and evaluate several deep learning-based approaches for automatic quality assurance of Magnetic Resonance (MR) images using the American College of Radiology (ACR) standards.
METHODS: The study involved the development, optimization, and testing of custom convolutional neural network (CNN) models. Additionally, popular pre-trained models such as VGG16, VGG19, ResNet50, InceptionV3, EfficientNetB0, and EfficientNetB5 were trained and tested. The use of pre-trained models, particularly those trained on the ImageNet dataset, for transfer learning was also explored. Two-class classification models were employed for assessing spatial resolution and geometric distortion, while an approach classifying the image into 10 classes representing the number of visible spokes was used for low contrast.
RESULTS: Our results showed that deep learning-based methods can be used effectively for MR image quality assurance. The low-contrast test was one of the most challenging tests within the ACR phantom.
CONCLUSIONS: Overall, for geometric distortion and spatial resolution, all of the deep learning models tested produced prediction accuracies of 80% or higher. The study also revealed that training the models from scratch performed slightly better than transfer learning. For the low-contrast test, our investigation emphasized the adaptability and potential of deep learning models: the custom CNN models excelled in predicting the number of visible spokes, achieving commendable accuracy, recall, precision, and F1 scores.
Affiliation(s)
- Tarraf Torfeh
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Souha Aouadi
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- S A Yoganathan
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Satheesh Paloor
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Rabih Hammoud
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Noora Al-Hammadi
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
9
Ernst P, Chatterjee S, Rose G, Speck O, Nürnberger A. Sinogram upsampling using Primal-Dual UNet for undersampled CT and radial MRI reconstruction. Neural Netw 2023; 166:704-721. PMID: 37604079; DOI: 10.1016/j.neunet.2023.08.004.
Abstract
Computed tomography (CT) and magnetic resonance imaging (MRI) are two widely used clinical imaging modalities for non-invasive diagnosis. However, both modalities come with certain problems: CT uses harmful ionising radiation, and MRI suffers from slow acquisition speed. Both problems can be tackled by undersampling, such as sparse sampling, but undersampled data lead to lower resolution and introduce artefacts. Several techniques, including deep learning-based methods, have been proposed to reconstruct such data. However, the undersampled reconstruction problems for these two modalities have always been treated as two separate problems and tackled by different research works. This paper proposes a unified solution for both sparse CT and undersampled radial MRI reconstruction, achieved by applying Fourier transform-based pre-processing to the radial MRI and then reconstructing both modalities using sinogram upsampling combined with filtered back-projection. The Primal-Dual network is a deep learning-based method for reconstructing sparsely sampled CT data. This paper introduces the Primal-Dual UNet, which improves the Primal-Dual network in terms of accuracy and reconstruction speed. The proposed method resulted in an average SSIM of 0.932±0.021 for sparse CT reconstruction with fan-beam geometry at a sparsity level of 16, a statistically significant improvement over the previous model, which resulted in 0.919±0.016. Furthermore, the proposed model resulted in average SSIMs of 0.903±0.019 and 0.957±0.023 while reconstructing undersampled brain and abdominal MRI data with an acceleration factor of 16, respectively, statistically significant improvements over the original model, which resulted in 0.867±0.025 and 0.949±0.025. Finally, this paper shows that the proposed network not only improves overall image quality, but also improves image quality in the regions of interest (liver, kidneys, and spleen), and generalises better than the baselines in the presence of a needle.
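Sinogram upsampling in its simplest form is interpolation across the angular dimension; the learned network above replaces this mapping with something far more accurate. A linear-interpolation baseline for intuition (the sinogram here is random data, and all sizes are invented):

```python
import numpy as np

def upsample_sinogram(sino, factor):
    """Linearly interpolate missing projection angles of a sparsely sampled
    sinogram laid out as (angles, detector bins)."""
    n_angles, n_det = sino.shape
    coarse = np.arange(n_angles)
    fine = np.linspace(0, n_angles - 1, (n_angles - 1) * factor + 1)
    out = np.empty((fine.size, n_det))
    for d in range(n_det):  # interpolate each detector bin along the angle axis
        out[:, d] = np.interp(fine, coarse, sino[:, d])
    return out

sparse_sino = np.random.default_rng(7).random((12, 32))  # only 12 angles
dense_sino = upsample_sinogram(sparse_sino, 16)
print(dense_sino.shape)  # (177, 32)
```

The upsampled sinogram would then be fed to filtered back-projection, matching the pipeline the abstract describes.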
Affiliation(s)
- Philipp Ernst
- Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany
- Soumick Chatterjee
- Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany; Genomics Research Centre, Human Technopole, Milan, Italy
- Georg Rose
- Institute of Medical Engineering, Faculty of Electrical Engineering and Information Technology, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany
- Oliver Speck
- Biomedical Magnetic Resonance, Faculty of Natural Sciences, Otto von Guericke University Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University Magdeburg, Germany; German Centre for Neurodegenerative Disease, Magdeburg, Germany; Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Andreas Nürnberger
- Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany; Centre for Behavioural Brain Sciences, Magdeburg, Germany
10
Wang W, Shen H, Chen J, Xing F. MHAN: Multi-Stage Hybrid Attention Network for MRI reconstruction and super-resolution. Comput Biol Med 2023; 163:107181. PMID: 37352637; DOI: 10.1016/j.compbiomed.2023.107181.
Abstract
High-quality magnetic resonance imaging (MRI) affords clear body tissue structure for reliable diagnosis. However, a principal problem is the trade-off between acquisition speed and image quality. Image reconstruction and super-resolution are crucial techniques to address it, yet in the field of MR image restoration most researchers focus on only one of these aspects, namely reconstruction or super-resolution. In this paper, we propose an efficient model called Multi-Stage Hybrid Attention Network (MHAN) that performs the multi-task of recovering high-resolution (HR) MR images from low-resolution (LR) under-sampled measurements. Our model is highlighted by three major modules: (i) an Amplified Spatial Attention Block (ASAB) capable of enhancing the differences in spatial information, (ii) a Self-Attention Block with a Data-Consistency Layer (DC-SAB), which improves the accuracy of the extracted feature information, and (iii) an Adaptive Local Residual Attention Block (ALRAB) that focuses on both spatial and channel information. MHAN employs an encoder-decoder architecture to deeply extract contextual information and a pipeline to provide spatial accuracy. Compared with the recent multi-task model T2Net, MHAN improves PSNR by 2.759 dB and SSIM by 0.026 with scaling factor ×2 and acceleration factor 4× on the T2 modality.
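The 2.759 dB figure above refers to PSNR, which is straightforward to compute; a minimal implementation with synthetic test data (note that halving the noise amplitude raises PSNR by exactly 20·log10(2) ≈ 6.02 dB):

```python
import numpy as np

def psnr(reference, estimate, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((reference - estimate) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(data_range**2 / mse)

rng = np.random.default_rng(8)
ref = rng.random((64, 64))
noise = rng.standard_normal(ref.shape)
# Same noise pattern at two amplitudes: the 2x-noisier image scores
# 20*log10(2) ≈ 6.02 dB lower.
print(psnr(ref, ref + 0.01 * noise) - psnr(ref, ref + 0.02 * noise))
```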
Affiliation(s)
- Wanliang Wang
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China
- Haoxin Shen
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China
- Jiacheng Chen
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China
- Fangsen Xing
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China
11
Jafari R, Do RKG, LaGratta MD, Fung M, Bayram E, Cashen T, Otazo R. GRASPNET: Fast spatiotemporal deep learning reconstruction of golden-angle radial data for free-breathing dynamic contrast-enhanced magnetic resonance imaging. NMR IN BIOMEDICINE 2023; 36:e4861. [PMID: 36305619 PMCID: PMC9898111 DOI: 10.1002/nbm.4861] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Revised: 10/23/2022] [Accepted: 10/25/2022] [Indexed: 06/16/2023]
Abstract
The purpose of the current study was to develop a deep learning technique called Golden-angle RAdial Sparse Parallel Network (GRASPnet) for fast reconstruction of dynamic contrast-enhanced 4D MRI acquired with golden-angle radial k-space trajectories. GRASPnet operates in image-time space and omits explicit data consistency to minimize the reconstruction time. Three network architectures were developed: (1) GRASPnet-2D, with 2D convolutional kernels (x,y) and the coil and contrast dimensions collapsed into a single combined dimension; (2) GRASPnet-3D, with 3D kernels (x,y,t); and (3) GRASPnet-2D + time, with two 3D kernels that first exploit spatial correlations (x,y,1) and then temporal correlations (1,1,t). The networks were trained using iterative GRASP reconstruction as the reference. Free-breathing 3D abdominal imaging with contrast injection was performed on 33 patients with liver lesions using a T1-weighted golden-angle stack-of-stars pulse sequence; ten datasets were reserved for testing. The three GRASPnet architectures were compared against iterative GRASP using quantitative and qualitative analysis, including impressions from two body radiologists. All three GRASPnet techniques reduced the reconstruction time to about 13 s while producing results comparable to iterative GRASP; among them, GRASPnet-2D + time compared favorably in the quantitative analysis. Spatiotemporal deep learning thus enables reconstruction of dynamic 4D contrast-enhanced images in a few seconds, which would facilitate clinical translation of compressed sensing methods that are currently limited by long reconstruction times.
Affiliation(s)
- Ramin Jafari
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Ricardo Otazo
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY
12
Wang Y, Pang Y, Tong C. DSMENet: Detail and Structure Mutually Enhancing Network for under-sampled MRI reconstruction. Comput Biol Med 2023; 154:106204. [PMID: 36716684 DOI: 10.1016/j.compbiomed.2022.106204] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Revised: 09/21/2022] [Accepted: 10/09/2022] [Indexed: 02/01/2023]
Abstract
Reconstructing zero-filled MR images (ZF) from partial k-space with convolutional neural networks (CNNs) is an important way to accelerate MRI. However, due to the lack of attention to the different components in ZF, it is challenging to learn the mapping from ZF to targets effectively. To ameliorate this issue, we propose a Detail and Structure Mutually Enhancing Network (DSMENet), which benefits from the complementarity of the Structure Reconstruction UNet (SRUN) and the Detail Feature Refinement Module (DFRM). The SRUN learns structure-dominated information at multiple scales, while the DFRM enriches detail-dominated information from coarse to fine; bidirectional alternate connections then exchange information between them. Moreover, the Detail Representation Construction Module (DRCM) extracts a valuable initial detail representation for the DFRM, and the Detail Guided Fusion Module (DGFM) facilitates the deep fusion of this complementary information. Together, these modules allow the various components in ZF to receive discriminative attention and to enhance one another. Performance can be further improved by Deep Enhanced Restoration (DER), a strategy based on recursion and constraint. Extensive experiments on the fastMRI and CC-359 datasets demonstrate that DSMENet is robust across body parts, under-sampling rates, and masks. Furthermore, DSMENet achieves promising qualitative and quantitative results, notably a competitive NMSE of 0.0268, PSNR of 33.7, and SSIM of 0.7808 on the fastMRI ×4 single-coil knee leaderboard.
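The input DSMENet learns from is the zero-filled image. For readers unfamiliar with the term, the following numpy sketch simulates a ZF reconstruction by masking k-space rows and applying an inverse FFT, and includes the NMSE metric quoted above; the masking pattern and function names are our illustrative choices, not the paper's:

```python
import numpy as np

def zero_filled_recon(image: np.ndarray, keep_fraction: float = 0.25, seed: int = 0):
    """Keep a random subset of k-space rows (plus the lowest frequencies),
    zero the rest, and inverse-FFT: the classic zero-filled reconstruction."""
    k = np.fft.fft2(image)
    rng = np.random.default_rng(seed)
    rows = rng.random(image.shape[0]) < keep_fraction
    rows[:2] = True   # with an unshifted FFT the lowest frequencies sit at
    rows[-2:] = True  # the first/last rows; always sampling them is a common choice
    zf = np.abs(np.fft.ifft2(k * rows[:, None]))  # zero unsampled rows, magnitude image
    return zf, rows

def nmse(ref: np.ndarray, rec: np.ndarray) -> float:
    """Normalised mean squared error, the metric the fastMRI leaderboard reports."""
    return float(np.sum((ref - rec) ** 2) / np.sum(ref ** 2))

# Example: undersampling a smooth test image introduces aliasing that a
# network such as DSMENet would then be trained to remove.
img = np.outer(np.hanning(32), np.hanning(32))
zf, rows = zero_filled_recon(img, keep_fraction=0.3)
print(f"kept {rows.sum()} of {rows.size} rows, NMSE = {nmse(img, zf):.4f}")
```

The residual between `img` and `zf` contains both blurred structure and missing detail, which is exactly the structure/detail split the SRUN and DFRM branches divide between themselves.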
Affiliation(s)
- Yueze Wang
- TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China.
- Yanwei Pang
- TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China.
- Chuan Tong
- TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China.