1
Yoon YH, Chun J, Kiser K, Marasini S, Curcuru A, Gach HM, Kim JS, Kim T. Inter-scanner super-resolution of 3D cine MRI using a transfer-learning network for MRgRT. Phys Med Biol 2024; 69:115038. [PMID: 38663411] [DOI: 10.1088/1361-6560/ad43ab]
Abstract
Objective. Deep-learning networks for super-resolution (SR) reconstruction enhance the spatial resolution of 3D magnetic resonance imaging (MRI) for MR-guided radiotherapy (MRgRT). However, variations between MRI scanners and patients impact the quality of SR for real-time 3D low-resolution (LR) cine MRI. In this study, we present a personalized super-resolution (psSR) network that incorporates transfer learning to overcome the challenges of inter-scanner SR of 3D cine MRI. Approach. Development of the proposed psSR network comprises two stages: (1) a cohort-specific SR (csSR) network trained on clinical patient datasets, and (2) a psSR network obtained by transfer learning to target datasets. The csSR network was developed by training on breath-hold and respiratory-gated high-resolution (HR) 3D MRIs and their k-space down-sampled LR MRIs from 53 thoracoabdominal patients scanned at 1.5 T. The psSR network was developed through transfer learning to retrain the csSR network using a single breath-hold HR MRI and a corresponding 3D cine MRI from 5 healthy volunteers scanned at 0.55 T. Image quality was evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Clinical feasibility was assessed by liver contouring on the psSR MRI using an auto-segmentation network and quantified using the Dice similarity coefficient (DSC). Results. Mean PSNR and SSIM values of psSR MRIs increased by 57.2% (13.8 to 21.7) and 94.7% (0.38 to 0.74) compared to cine MRIs, with the 0.55 T breath-hold HR MRI as reference. In the contour evaluation, DSC increased by 15% (0.79 to 0.91). On average, transfer learning took 90 s, psSR reconstruction took 4.51 ms per volume, and auto-segmentation took 210 ms. Significance. The proposed psSR reconstruction substantially increased the image and segmentation quality of cine MRI in an average of 215 ms across scanners and patients, with less than 2 min of prerequisite transfer learning. This approach would be effective in overcoming the cohort and scanner dependency of deep learning for MRgRT.
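For readers less familiar with the metrics quoted above, the short Python sketch below illustrates how PSNR, SSIM, and DSC are conventionally computed on a reference/reconstruction pair; the array names and toy volumes are hypothetical stand-ins, not the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (e.g., liver contours)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

# Toy stand-ins: 'reference' mimics the breath-hold HR volume, 'estimate' a super-resolved cine volume.
rng = np.random.default_rng(0)
reference = rng.random((64, 64, 64)).astype(np.float32)
estimate = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0.0, 1.0).astype(np.float32)

psnr = peak_signal_noise_ratio(reference, estimate, data_range=1.0)
ssim = structural_similarity(reference, estimate, data_range=1.0)
dsc = dice_coefficient(reference > 0.5, estimate > 0.5)
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}, DSC = {dsc:.3f}")
```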
Affiliation(s)
- Young Hun Yoon
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Kendall Kiser
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Shanti Marasini
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Austen Curcuru
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- H Michael Gach
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Departments of Radiology and Biomedical Engineering, Washington University in St. Louis, St Louis, MO, United States of America
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Oncosoft Inc., Seoul, Republic of Korea
- Taeho Kim
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
2
Dong Y, Yang F, Wen J, Cai J, Zeng F, Liu M, Li S, Wang J, Ford JC, Portelance L, Yang Y. Improvement of 2D cine image quality using 3D priors and cycle generative adversarial network for low field MRI-guided radiation therapy. Med Phys 2024; 51:3495-3509. [PMID: 38043123] [DOI: 10.1002/mp.16860]
Abstract
BACKGROUND Cine magnetic resonance (MR) images have been used for real-time MR guided radiation therapy (MRgRT). However, the onboard MR systems with low-field strength face the problem of limited image quality. PURPOSE To improve the quality of cine MR images in MRgRT using prior image information provided by the patient planning and positioning MR images. METHODS This study employed MR images from 18 pancreatic cancer patients who received MR-guided stereotactic body radiation therapy. Planning 3D MR images were acquired during the patient simulation, and positioning 3D MR images and 2D sagittal cine MR images were acquired before and during the beam delivery, respectively. A deep learning-based framework consisting of two cycle generative adversarial networks (CycleGAN), Denoising CycleGAN and Enhancement CycleGAN, was developed to establish the mapping between the 3D and 2D MR images. The Denoising CycleGAN was trained to first denoise the cine images using the time domain cine image series, and the Enhancement CycleGAN was trained to enhance the spatial resolution and contrast by taking advantage of the prior image information from the planning and positioning images. The denoising performance was assessed by signal-to-noise ratio (SNR), structural similarity index measure, peak SNR, blind/reference-less image spatial quality evaluator (BRISQUE), natural image quality evaluator, and perception-based image quality evaluator scores. The quality enhancement performance was assessed by the BRISQUE and physician visual scores. In addition, the target contouring was evaluated on the original and processed images. RESULTS Significant differences were found for all evaluation metrics after Denoising CycleGAN processing. The BRISQUE and visual scores were also significantly improved after sequential Denoising and Enhancement CycleGAN processing. In target contouring evaluation, Dice similarity coefficient, centroid distance, Hausdorff distance, and average surface distance values were significantly improved on the enhanced images. The whole processing time was within 20 ms for a typical input image size of 512 × 512. CONCLUSION Taking advantage of the prior high-quality positioning and planning MR images, the deep learning-based framework enhanced the cine MR image quality significantly, leading to improved accuracy in automatic target contouring. With the merits of both high computational efficiency and considerable image quality enhancement, the proposed method may hold important clinical implication for real-time MRgRT.
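The central constraint behind both CycleGANs described above is cycle consistency between the low-quality and high-quality image domains. The PyTorch sketch below shows that loss for two toy stand-in generators; the module definitions, tensor sizes, and weighting are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

# Toy stand-in generators; the paper's networks are much deeper.
G_low2high = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
G_high2low = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
l1 = nn.L1Loss()

def cycle_consistency_loss(x_low: torch.Tensor, x_high: torch.Tensor, lam: float = 10.0) -> torch.Tensor:
    """Forward (low -> high -> low) and backward (high -> low -> high) reconstruction penalties."""
    reconstructed_low = G_high2low(G_low2high(x_low))
    reconstructed_high = G_low2high(G_high2low(x_high))
    return lam * (l1(reconstructed_low, x_low) + l1(reconstructed_high, x_high))

x_low = torch.rand(2, 1, 128, 128)   # e.g., noisy cine frames (random toy data)
x_high = torch.rand(2, 1, 128, 128)  # e.g., planning/positioning-quality slices (random toy data)
print(cycle_consistency_loss(x_low, x_high).item())
```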
Affiliation(s)
- Yuyan Dong
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China
- Fei Yang
- The Miller School of Medicine, University of Miami, Miami, Florida, USA
- Jie Wen
- Department of Radiology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Feiyan Zeng
- Department of Radiology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Mengqiu Liu
- Department of Radiology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Shuang Li
- Department of Radiology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Jiangtao Wang
- Cancer Center, Sichuan Academy of Medical Sciences Sichuan Provincial People's Hospital, Chengdu, Sichuan, China
- John Chetley Ford
- The Miller School of Medicine, University of Miami, Miami, Florida, USA
- Yidong Yang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China
- Department of Radiation Oncology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
3
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RLJ, Liu T, Wang T, Yang X. Deep learning in MRI-guided radiation therapy: A systematic review. J Appl Clin Med Phys 2024; 25:e14155. [PMID: 37712893] [PMCID: PMC10860468] [DOI: 10.1002/acm2.14155]
Abstract
Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and the MRI-only treatment planning workflow. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies published on or before December 31, 2022, and categorize the studies into the areas of image segmentation, image synthesis, radiomics, and real-time MRI. Building from the underlying deep learning methods, we discuss their clinical importance and current challenges in facilitating small tumor segmentation, accurate x-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight the recent trends in deep learning, such as the emergence of multi-modal, visual transformer, and diffusion models.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L. J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
4
Boldrini L, D'Aviero A, De Felice F, Desideri I, Grassi R, Greco C, Iorio GC, Nardone V, Piras A, Salvestrini V. Artificial intelligence applied to image-guided radiation therapy (IGRT): a systematic review by the Young Group of the Italian Association of Radiotherapy and Clinical Oncology (yAIRO). Radiol Med 2024; 129:133-151. [PMID: 37740838] [DOI: 10.1007/s11547-023-01708-4]
Abstract
INTRODUCTION The advent of image-guided radiation therapy (IGRT) has recently changed the workflow of radiation treatments by ensuring highly collimated treatments. Artificial intelligence (AI) and radiomics are tools that have shown promising results for diagnosis, treatment optimization and outcome prediction. This review aims to assess the impact of AI and radiomics on modern IGRT modalities in RT. METHODS A PubMed/MEDLINE and Embase systematic review was conducted to investigate the impact of radiomics and AI on modern IGRT modalities. The search strategy was "Radiomics" AND "Cone Beam Computed Tomography"; "Radiomics" AND "Magnetic Resonance guided Radiotherapy"; "Radiomics" AND "on board Magnetic Resonance Radiotherapy"; "Artificial Intelligence" AND "Cone Beam Computed Tomography"; "Artificial Intelligence" AND "Magnetic Resonance guided Radiotherapy"; "Artificial Intelligence" AND "on board Magnetic Resonance Radiotherapy", and only original articles up to 01.11.2022 were considered. RESULTS A total of 402 studies were obtained using the previously mentioned search strategy on PubMed and Embase. The analysis was performed on a total of 84 papers obtained following the complete selection process. The application of radiomics to IGRT was analyzed in 23 papers, while a total of 61 papers focused on the impact of AI on IGRT techniques. DISCUSSION AI and radiomics seem to significantly impact IGRT in all phases of the RT workflow, even if the evidence in the literature is based on retrospective data. Further studies are needed to confirm these tools' potential and provide a stronger correlation with clinical outcomes and gold-standard treatment strategies.
Affiliation(s)
- Luca Boldrini
- UOC Radioterapia Oncologica, Fondazione Policlinico Universitario IRCCS "A. Gemelli", Rome, Italy
- Università Cattolica del Sacro Cuore, Rome, Italy
- Andrea D'Aviero
- Radiation Oncology, Mater Olbia Hospital, Olbia, Sassari, Italy
- Francesca De Felice
- Radiation Oncology, Department of Radiological, Policlinico Umberto I, Rome, Italy
- Oncological and Pathological Sciences, "Sapienza" University of Rome, Rome, Italy
- Isacco Desideri
- Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Roberta Grassi
- Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Carlo Greco
- Department of Radiation Oncology, Università Campus Bio-Medico di Roma, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Valerio Nardone
- Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Antonio Piras
- UO Radioterapia Oncologica, Villa Santa Teresa, Bagheria, Palermo, Italy.
- Viola Salvestrini
- Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Cyberknife Center, Istituto Fiorentino di Cura e Assistenza (IFCA), 50139, Florence, Italy
5
Matsuo K, Nakaura T, Morita K, Uetani H, Nagayama Y, Kidoh M, Hokamura M, Yamashita Y, Shinoda K, Ueda M, Mukasa A, Hirai T. Feasibility study of super-resolution deep learning-based reconstruction using k-space data in brain diffusion-weighted images. Neuroradiology 2023; 65:1619-1629. [PMID: 37673835] [DOI: 10.1007/s00234-023-03212-y]
Abstract
PURPOSE The purpose of this study is to evaluate the influence of super-resolution deep learning-based reconstruction (SR-DLR), which utilizes k-space data, on the quality of images and the quantitation of the apparent diffusion coefficient (ADC) for diffusion-weighted images (DWI) in brain magnetic resonance imaging (MRI). METHODS A retrospective analysis was performed on 34 patients who had undergone DWI using a 3 T MRI system with SR-DLR reconstruction based on k-space data in August 2022. DWI was reconstructed with SR-DLR (Matrix = 684 × 684) and without SR-DLR (Matrix = 228 × 228). Measurements were made of the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) in white matter (WM) and grey matter (GM), and the full width at half maximum (FWHM) of the septum pellucidum. Two radiologists assessed image noise, contrast, artifacts, blur, and the overall quality of three image types using a four-point scale. Quantitative and qualitative scores between images with and without SR-DLR were compared using the Wilcoxon signed-rank test. RESULTS Images with SR-DLR showed significantly higher SNRs and CNRs than those without SR-DLR (p < 0.001). No statistically significant variances were found in the apparent diffusion coefficients (ADCs) in WM and GM between images with and without SR-DLR (ADC in WM, p = 0.945; ADC in GM, p = 0.235). Moreover, the FWHM without SR-DLR was notably lower compared to that with SR-DLR (p < 0.001). CONCLUSION SR-DLR has the potential to augment the quality of DWI in DL MRI scans without significantly impacting ADC quantitation.
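As a reminder of the paired, non-parametric test used above, the following sketch runs a Wilcoxon signed-rank test on synthetic paired SNR values; the numbers are invented for illustration and do not come from the study.

```python
import numpy as np
from scipy.stats import wilcoxon

# Synthetic paired measurements standing in for SNR with and without SR-DLR (34 patients).
rng = np.random.default_rng(1)
snr_without = rng.normal(loc=20.0, scale=2.0, size=34)
snr_with = snr_without + rng.normal(loc=3.0, scale=1.0, size=34)  # assumed improvement

statistic, p_value = wilcoxon(snr_with, snr_without)
print(f"Wilcoxon statistic = {statistic:.1f}, p = {p_value:.4g}")
```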
Affiliation(s)
- Kensei Matsuo
- Department of Central Radiology, Kumamoto University Hospital, Honjo 1-1-1, Kumamoto, 860-8556, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, Honjo 1-1-1, Chuo, Kumamoto, 860-8556, Japan.
- Kosuke Morita
- Department of Central Radiology, Kumamoto University Hospital, Honjo 1-1-1, Kumamoto, 860-8556, Japan
- Hiroyuki Uetani
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, Honjo 1-1-1, Chuo, Kumamoto, 860-8556, Japan
- Yasunori Nagayama
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, Honjo 1-1-1, Chuo, Kumamoto, 860-8556, Japan
- Masafumi Kidoh
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, Honjo 1-1-1, Chuo, Kumamoto, 860-8556, Japan
- Masamichi Hokamura
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, Honjo 1-1-1, Chuo, Kumamoto, 860-8556, Japan
- Yuichi Yamashita
- Canon Medical Systems Corporation, 70-1, Yanagi, Saiwai, Kawasaki, Kanagawa, 212-0015, Japan
- Kensuke Shinoda
- MRI Systems Division, Canon Medical Systems Corporation, 1385 Shimoishigami, Otawara, Tochigi, 324-8550, Japan
- Mitsuharu Ueda
- Department of Neurology, Graduate School of Medical Sciences, Kumamoto University, Honjo 1-1-1, Chuo, Kumamoto, 860-8556, Japan
- Akitake Mukasa
- Department of Neurosurgery, Graduate School of Medical Sciences, Kumamoto University, Honjo 1-1-1, Chuo, Kumamoto, Japan
- Toshinori Hirai
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, Honjo 1-1-1, Chuo, Kumamoto, 860-8556, Japan
6
Long D, McMurdo C, Ferdian E, Mauger CA, Marlevi D, Nash MP, Young AA. Super-resolution 4D flow MRI to quantify aortic regurgitation using computational fluid dynamics and deep learning. Int J Cardiovasc Imaging 2023; 39:1189-1202. [PMID: 36820960] [PMCID: PMC10220149] [DOI: 10.1007/s10554-023-02815-z]
Abstract
Changes in cardiovascular hemodynamics are closely related to the development of aortic regurgitation (AR), a type of valvular heart disease. Metrics derived from blood flows are used to indicate AR onset and evaluate its severity. These metrics can be non-invasively obtained using four-dimensional (4D) flow magnetic resonance imaging (MRI), where accuracy is primarily dependent on spatial resolution. However, insufficient resolution often results from limitations in 4D flow MRI and complex aortic regurgitation hemodynamics. To address this, computational fluid dynamics simulations were transformed into synthetic 4D flow MRI data and used to train a variety of neural networks. These networks generated super-resolution, full-field phase images with an upsample factor of 4. Results showed decreased velocity error, high structural similarity scores, and improved learning capabilities from previous work. Further validation was performed on two sets of in vivo 4D flow MRI data and demonstrated success in de-noising flow images. This approach presents an opportunity to comprehensively analyse AR hemodynamics in a non-invasive manner.
Affiliation(s)
- Derek Long
- Department of Engineering Science, University of Auckland, Auckland, New Zealand
- Cameron McMurdo
- Department of Engineering Science, University of Auckland, Auckland, New Zealand
- Edward Ferdian
- Department of Anatomy and Medical Imaging, University of Auckland, Auckland, New Zealand
- Charlène A. Mauger
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- David Marlevi
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Molecular Medicine and Surgery, Karolinska Institutet, Solna, Sweden
- Martyn P. Nash
- Department of Engineering Science, University of Auckland, Auckland, New Zealand
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Alistair A. Young
- Department of Anatomy and Medical Imaging, University of Auckland, Auckland, New Zealand
- Department of Biomedical Engineering, King’s College London, London, UK
7
Hunt B, Gill GS, Alexander DA, Streeter SS, Gladstone DJ, Russo GA, Zaki BI, Pogue BW, Zhang R. Fast Deformable Image Registration for Real-Time Target Tracking During Radiation Therapy Using Cine MRI and Deep Learning. Int J Radiat Oncol Biol Phys 2023; 115:983-993. [PMID: 36309075] [DOI: 10.1016/j.ijrobp.2022.09.086]
Abstract
PURPOSE We developed a deep learning (DL) model for fast deformable image registration using 2-dimensional sagittal cine magnetic resonance imaging (MRI) acquired during radiation therapy and evaluated its potential for real-time target tracking compared with conventional image registration methods. METHODS AND MATERIALS Our DL model uses a pair of cine MRI images as input and provides a motion vector field (MVF) as output. The MVF is then applied to align the input images. A retrospective study was conducted to train and evaluate our model using cine MRI data from patients undergoing treatment for abdominal and thoracic tumors. For each treatment fraction, MR-linear accelerator delivery log files, tracking videos, and cine image files were analyzed. Individual MRI frames were temporally sampled to construct a large set of image registration pairs used to evaluate multiple methods. The DL model was optimized using 5-fold cross validation, and model outputs (transformed images and MVFs) using test set images were saved for comparison with 3 conventional registration methods (affine, b-spline, and demons). Evaluation metrics were 3-fold: (1) registration error, (2) MVF stability (both spatial and temporal), and (3) average computation time. RESULTS We analyzed >21 hours of cine MRI (>629,000 frames) acquired during 86 treatment fractions from 21 patients. In a test set of 10,320 image registration pairs, DL registration outperformed conventional methods in both registration error (affine, b-spline, demons, DL; root mean square error: 0.067, 0.040, 0.036, 0.032; paired t test demons vs DL: t[20] = 4.2, P < .001) and computation time per frame (51, 1150, 4583, 8 ms). Among deformable methods, spatial stability of resulting MVFs was comparable; however, the DL model had significantly improved temporal consistency. CONCLUSIONS DL-based image registration can leverage large-scale MR cine data sets to outperform conventional registration methods and is a promising solution for real-time deformable motion estimation in radiation therapy.
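The registration output described above is a motion vector field (MVF) that is applied to resample one cine frame onto another. A minimal warping step is sketched below with PyTorch's grid_sample; the toy frame, the uniform 3-pixel shift, and the bilinear interpolation choice are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def warp_with_mvf(image: torch.Tensor, mvf: torch.Tensor) -> torch.Tensor:
    """Warp an (N, 1, H, W) image by an (N, 2, H, W) motion vector field given in pixels."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0).expand(n, -1, -1, -1)
    coords = base + mvf                            # displaced sampling locations in pixel units
    grid_x = 2.0 * coords[:, 0] / (w - 1) - 1.0    # grid_sample expects coordinates in [-1, 1]
    grid_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)   # shape (N, H, W, 2), ordered (x, y)
    return F.grid_sample(image, grid, mode="bilinear", align_corners=True)

frame = torch.rand(1, 1, 64, 64)                   # toy cine frame
mvf = torch.zeros(1, 2, 64, 64)
mvf[:, 0] = 3.0                                    # uniform 3-pixel shift along x
print(warp_with_mvf(frame, mvf).shape)             # torch.Size([1, 1, 64, 64])
```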
Affiliation(s)
- Brady Hunt
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire; Geisel School of Medicine, Dartmouth College, Hanover, New Hampshire; Dartmouth Cancer Center, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire.
- Gobind S Gill
- Dartmouth Cancer Center, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Samuel S Streeter
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire
- David J Gladstone
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire; Geisel School of Medicine, Dartmouth College, Hanover, New Hampshire; Dartmouth Cancer Center, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Gregory A Russo
- Geisel School of Medicine, Dartmouth College, Hanover, New Hampshire; Dartmouth Cancer Center, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Bassem I Zaki
- Geisel School of Medicine, Dartmouth College, Hanover, New Hampshire; Dartmouth Cancer Center, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Brian W Pogue
- Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin
- Rongxiao Zhang
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire; Geisel School of Medicine, Dartmouth College, Hanover, New Hampshire; Dartmouth Cancer Center, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
8
Mannam V, Howard S. Small training dataset convolutional neural networks for application-specific super-resolution microscopy. J Biomed Opt 2023; 28:036501. [PMID: 36925620] [PMCID: PMC10013193] [DOI: 10.1117/1.jbo.28.3.036501]
Abstract
SIGNIFICANCE Machine learning (ML) models based on deep convolutional neural networks have been used to significantly increase microscopy resolution, speed [signal-to-noise ratio (SNR)], and data interpretation. The bottleneck in developing effective ML systems is often the need to acquire large datasets to train the neural network. We demonstrate how adding a "dense encoder-decoder" (DenseED) block can be used to effectively train a neural network that produces super-resolution (SR) images from conventional diffraction-limited (DL) microscopy images using a small training dataset [15 fields of view (FOVs)]. AIM ML can retrieve SR information from a DL image when trained with a massive training dataset. The aim of this work is to demonstrate a neural network that estimates SR images from DL images using modifications that enable training with a small dataset. APPROACH We employ "DenseED" blocks in existing SR ML network architectures. DenseED blocks use a dense layer that concatenates features from the previous convolutional layer to the next convolutional layer. DenseED blocks in fully convolutional networks (FCNs) estimate the SR images when trained with a small training dataset (15 FOVs) of human cells from the Widefield2SIM dataset and of fluorescent-labeled fixed bovine pulmonary artery endothelial cell samples. RESULTS Conventional ML models without DenseED blocks trained on small datasets fail to accurately estimate SR images, while models including DenseED blocks can. The average peak SNR (PSNR) and resolution improvements achieved by networks containing DenseED blocks are ≈3.2 dB and 2×, respectively. We evaluated various configurations of target image generation (e.g., experimentally captured targets and computationally generated targets) used to train FCNs with and without DenseED blocks, and showed that simple FCNs with DenseED blocks outperform simple FCNs without them. CONCLUSIONS DenseED blocks in neural networks allow accurate estimation of SR images even when the ML model is trained with a small training dataset of 15 FOVs. This approach shows that application-specific microscopy platforms can use DenseED blocks to train on smaller datasets, and there is promise for applying this to other imaging modalities such as MRI and x-ray.
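The dense connectivity that the authors add to encoder-decoder networks amounts to concatenating each layer's input with its output along the channel axis. The PyTorch sketch below shows a toy densely connected block in that spirit; the channel counts and depth are hypothetical and do not reproduce the published DenseED architecture.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each conv layer receives the channel-wise concatenation of all preceding feature maps."""
    def __init__(self, in_channels: int, growth: int = 8, n_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            channels += growth  # the concatenated input grows with every layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=1)
print(block(torch.rand(1, 1, 32, 32)).shape)  # torch.Size([1, 25, 32, 32])
```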
Affiliation(s)
- Varun Mannam
- University of Notre Dame, Department of Electrical Engineering, Notre Dame, Indiana, United States
- Scott Howard
- University of Notre Dame, Department of Electrical Engineering, Notre Dame, Indiana, United States
9
Bandwidth Improvement in Ultrasound Image Reconstruction Using Deep Learning Techniques. Healthcare (Basel) 2022; 11:123. [PMID: 36611583] [PMCID: PMC9819580] [DOI: 10.3390/healthcare11010123]
Abstract
Ultrasound (US) imaging is a medical imaging modality that uses the reflection of sound in the range of 2-18 MHz to image internal body structures. In US, the frequency bandwidth (BW) is directly associated with image resolution. BW is a property of the transducer and more bandwidth comes at a higher cost. Thus, methods that can transform strongly bandlimited ultrasound data into broadband data are essential. In this work, we propose a deep learning (DL) technique to improve the image quality for a given bandwidth by learning features provided by broadband data of the same field of view. Therefore, the performance of several DL architectures and conventional state-of-the-art techniques for image quality improvement and artifact removal have been compared on in vitro US datasets. Two training losses have been utilized on three different architectures: a super resolution convolutional neural network (SRCNN), U-Net, and a residual encoder decoder network (REDNet) architecture. The models have been trained to transform low-bandwidth image reconstructions to high-bandwidth image reconstructions, to reduce the artifacts, and make the reconstructions visually more attractive. Experiments were performed for 20%, 40%, and 60% fractional bandwidth on the original images and showed that the improvements obtained are as high as 45.5% in RMSE, and 3.85 dB in PSNR, in datasets with a 20% bandwidth limitation.
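To make the notion of a fractional bandwidth limitation concrete, the sketch below band-limits a toy broadband RF pulse in the frequency domain; the sampling rate, centre frequency, and pulse shape are assumptions chosen for illustration and are unrelated to the in vitro datasets used in the study.

```python
import numpy as np

fs = 40e6                                              # sampling rate: 40 MHz (assumed)
t = np.arange(0, 5e-6, 1 / fs)                         # 5 microseconds of signal
f0 = 5e6                                               # centre frequency: 5 MHz (assumed)
rf = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)   # toy broadband RF pulse

def bandlimit(signal: np.ndarray, centre: float, fractional_bw: float) -> np.ndarray:
    """Zero out spectral content outside centre +/- (fractional_bw / 2) * centre."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    keep = np.abs(freqs - centre) <= 0.5 * fractional_bw * centre
    return np.fft.irfft(spectrum * keep, n=signal.size)

narrowband = bandlimit(rf, f0, fractional_bw=0.20)     # simulate a 20% fractional bandwidth
print(f"broadband peak {np.abs(rf).max():.2f}, narrowband peak {np.abs(narrowband).max():.2f}")
```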
10
Grandinetti J, Gao Y, Gonzalez Y, Deng J, Shen C, Jia X. MR image reconstruction from undersampled data for image-guided radiation therapy using a patient-specific deep manifold image prior. Front Oncol 2022; 12:1013783. [PMID: 36479074] [PMCID: PMC9720169] [DOI: 10.3389/fonc.2022.1013783]
Abstract
Introduction Recent advancements in radiotherapy (RT) have allowed for the integration of a Magnetic Resonance (MR) imaging scanner with a medical linear accelerator to use MR images for image guidance to position tumors against the treatment beam. Undersampling in MR acquisition is desired to accelerate the imaging process, but unavoidably deteriorates the reconstructed image quality. In RT, a high-quality MR image of a patient is available for treatment planning. In light of this unique clinical scenario, we proposed to exploit the patient-specific image prior to facilitate high-quality MR image reconstruction. Methods Utilizing the planning MR image, we established a deep auto-encoder to form a manifold of image patches of the patient. The trained manifold was then incorporated as a regularization to restore MR images of the same patient from undersampled data. We performed a simulation study using a patient case, a real patient study with three liver cancer patient cases, and a phantom experimental study using data acquired on an in-house small animal MR scanner. We compared the performance of the proposed method with those of the Fourier transform method, a tight-frame based Compressive Sensing method, and a deep learning method with a patient-generic manifold as the image prior. Results In the simulation study with 12.5% radial undersampling and 15% increase in noise, our method improved peak-signal-to-noise ratio by 4.46dB and structural similarity index measure by 28% compared to the patient-generic manifold method. In the experimental study, our method outperformed others by producing reconstructions of visually improved image quality.
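In schematic form, the reconstruction described above solves a data-fidelity problem regularized by the patient-specific patch manifold. The expression below is a conceptual sketch in which the symbols (undersampling mask M, Fourier transform F, patch extractor P_p, encoder E, decoder D, and weight lambda) are our shorthand rather than the authors' exact formulation.

```latex
\hat{x} \;=\; \arg\min_{x}\;
\big\lVert M \mathcal{F} x - y \big\rVert_2^2
\;+\; \lambda \sum_{p} \big\lVert P_p x - D\!\left(E\!\left(P_p x\right)\right) \big\rVert_2^2
```

The second term is small only when every patch of the reconstruction lies close to the manifold learned from the planning MR image, which is how the prior steers the undersampled reconstruction.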
Affiliation(s)
- Xun Jia
- Innovative Technology of Radiotherapy Computations and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States
11
Li BM, Castorina LV, Valdés Hernández MDC, Clancy U, Wiseman SJ, Sakka E, Storkey AJ, Jaime Garcia D, Cheng Y, Doubal F, Thrippleton MT, Stringer M, Wardlaw JM. Deep attention super-resolution of brain magnetic resonance images acquired under clinical protocols. Front Comput Neurosci 2022; 16:887633. [PMID: 36093418] [PMCID: PMC9458316] [DOI: 10.3389/fncom.2022.887633]
Abstract
Vast quantities of Magnetic Resonance Images (MRI) are routinely acquired in clinical practice but, to speed up acquisition, these scans are typically of a quality that is sufficient for clinical diagnosis but sub-optimal for large-scale precision medicine, computational diagnostics, and large-scale neuroimaging collaborative research. Here, we present a critic-guided framework to upsample low-resolution (often 2D) MRI full scans to help overcome these limitations. We incorporate feature-importance and self-attention methods into our model to improve the interpretability of this study. We evaluate our framework on paired low- and high-resolution brain MRI structural full scans (i.e., T1-, T2-weighted, and FLAIR sequences are simultaneously input) obtained in clinical and research settings from scanners manufactured by Siemens, Phillips, and GE. We show that the upsampled MRIs are qualitatively faithful to the ground-truth high-quality scans (PSNR = 35.39; MAE = 3.78E−3; NMSE = 4.32E−10; SSIM = 0.9852; mean normal-appearing gray/white matter ratio intensity differences ranging from 0.0363 to 0.0784 for FLAIR, from 0.0010 to 0.0138 for T1-weighted and from 0.0156 to 0.074 for T2-weighted sequences). The automatic raw segmentation of tissues and lesions using the super-resolved images has fewer false positives and higher accuracy than those obtained from interpolated images in protocols represented with more than three sets in the training sample, making our approach a strong candidate for practical application in clinical and collaborative research.
Affiliation(s)
- Bryan M. Li
- School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Maria del C. Valdés Hernández
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- *Correspondence: Maria del C. Valdés Hernández
- Una Clancy
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Stroke Clinic, National Health Service Lothian, Edinburgh, United Kingdom
- Stewart J. Wiseman
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Eleni Sakka
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Amos J. Storkey
- School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Daniela Jaime Garcia
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Yajun Cheng
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Fergus Doubal
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Stroke Clinic, National Health Service Lothian, Edinburgh, United Kingdom
- Michael Stringer
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Joanna M. Wardlaw
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Stroke Clinic, National Health Service Lothian, Edinburgh, United Kingdom
12
Li G, Wu X, Ma X. Artificial intelligence in radiotherapy. Semin Cancer Biol 2022; 86:160-171. [PMID: 35998809] [DOI: 10.1016/j.semcancer.2022.08.005]
Abstract
Radiotherapy is a discipline closely integrated with computer science. Artificial intelligence (AI) has developed rapidly over the past few years. With the explosive growth of medical big data, AI promises to revolutionize the field of radiotherapy through highly automated workflow, enhanced quality assurance, improved regional balances of expert experiences, and individualized treatment guided by multi-omics. In addition to independent researchers, the increasing number of large databases, biobanks, and open challenges significantly facilitated AI studies on radiation oncology. This article reviews the latest research, clinical applications, and challenges of AI in each part of radiotherapy including image processing, contouring, planning, quality assurance, motion management, and outcome prediction. By summarizing cutting-edge findings and challenges, we aim to inspire researchers to explore more future possibilities and accelerate the arrival of AI radiotherapy.
Affiliation(s)
- Guangqi Li
- Division of Biotherapy, Cancer Center, West China Hospital and State Key Laboratory of Biotherapy, Sichuan University, No. 37 GuoXue Alley, Chengdu 610041, China
- Xin Wu
- Head & Neck Oncology ward, Division of Radiotherapy Oncology, Cancer Center, West China Hospital, Sichuan University, No. 37 GuoXue Alley, Chengdu 610041, China
- Xuelei Ma
- Division of Biotherapy, Cancer Center, West China Hospital and State Key Laboratory of Biotherapy, Sichuan University, No. 37 GuoXue Alley, Chengdu 610041, China.
13
Yaqub M, Jinchao F, Arshid K, Ahmed S, Zhang W, Nawaz MZ, Mahmood T. Deep Learning-Based Image Reconstruction for Different Medical Imaging Modalities. Comput Math Methods Med 2022; 2022:8750648. [PMID: 35756423] [PMCID: PMC9225884] [DOI: 10.1155/2022/8750648]
Abstract
Image reconstruction in magnetic resonance imaging (MRI) and computed tomography (CT) is a mathematical process that generates images from measurements acquired at many different angles around the patient. Image reconstruction has a fundamental impact on image quality. In recent years, the literature has focused on deep learning and its applications in medical imaging, particularly image reconstruction. Due to the performance of deep learning models in a wide variety of vision applications, a considerable amount of work has recently been carried out on image reconstruction in medical imaging. MRI and CT remain the most scientifically appropriate imaging modalities for identifying and diagnosing different diseases in this rapidly advancing age of technology. This study presents a number of deep learning image reconstruction approaches and a comprehensive review of the most widely used databases. We also discuss the challenges and promising future directions for medical image reconstruction.
Affiliation(s)
- Muhammad Yaqub
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Feng Jinchao
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Kaleem Arshid
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Shahzad Ahmed
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Wenqian Zhang
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Muhammad Zubair Nawaz
- College of Science and Shanghai Institute of Intelligent Electronics and Systems, Donghua University, 24105 Songjiang District, Shanghai, China
- Tariq Mahmood
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Division of Science and Technology, University of Education, Lahore, Pakistan
14
Xie H, Lei Y, Wang T, Roper J, Dhabaan AH, Bradley JD, Liu T, Mao H, Yang X. Synthesizing high-resolution magnetic resonance imaging using parallel cycle-consistent generative adversarial networks for fast magnetic resonance imaging. Med Phys 2022; 49:357-369. [PMID: 34821395] [DOI: 10.1002/mp.15380]
Abstract
PURPOSE The common practice in acquiring magnetic resonance (MR) images is to obtain two-dimensional (2D) slices at coarse locations while keeping high in-plane resolution, in order to ensure sufficient body coverage while shortening the MR scan time. The aim of this study is to propose a novel method to generate high-resolution (HR) MR images from low-resolution MR images along the longitudinal direction. In order to address the difficulty of collecting paired low- and high-resolution MR images in clinical settings and to gain the advantage of parallel cycle-consistent generative adversarial networks (CycleGANs) in synthesizing realistic medical images, we developed a parallel CycleGAN-based method using a self-supervised strategy. METHODS AND MATERIALS The proposed workflow consists of two CycleGANs trained in parallel to independently predict the HR MR images in the two planes along the directions orthogonal to the longitudinal MR scan direction. The final synthetic HR MR images are then generated by fusing the two predicted images. MR images, including T1-weighted (T1), contrast-enhanced T1-weighted (T1CE), T2-weighted (T2), and T2 Fluid Attenuated Inversion Recovery (FLAIR), of the multimodal brain tumor segmentation challenge 2020 (BraTS2020) dataset were processed to evaluate the proposed workflow along the cranial-caudal (CC), lateral, and anterior-posterior directions. Institutionally collected MR images were also processed for evaluation of the proposed method. The performance of the proposed method was investigated via both qualitative and quantitative evaluations. Metrics of normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), edge keeping index (EKI), structural similarity index measure (SSIM), information fidelity criterion (IFC), and visual information fidelity in pixel domain (VIFP) were calculated. RESULTS It is shown that the proposed method can generate HR MR images visually indistinguishable from the ground truth in the investigations on the BraTS2020 dataset. In addition, the intensity profiles, difference images and SSIM maps confirm the feasibility of the proposed method for synthesizing HR MR images. Quantitative evaluations on the BraTS2020 dataset show that the calculated metrics of synthetic HR MR images are all enhanced for the T1, T1CE, T2, and FLAIR images. The enhancements in the numerical metrics over the low-resolution and bi-cubic interpolated MR images, as well as those generated with a comparative deep learning method, are statistically significant. Qualitative evaluation of the synthetic HR MR images of the clinically collected dataset also confirms the feasibility of the proposed method. CONCLUSIONS The proposed method is feasible for synthesizing HR MR images using self-supervised parallel CycleGANs, which can be expected to shorten MR acquisition time in clinical practice.
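The final fusion of the two orthogonal-plane predictions into one HR volume can be as simple as voxel-wise averaging on a common grid, as the numpy sketch below illustrates; simple averaging is an assumption made here for clarity, and the paper's fusion operator may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical per-plane SR predictions, already resampled onto the same isotropic grid.
prediction_plane_a = rng.random((128, 128, 128)).astype(np.float32)
prediction_plane_b = rng.random((128, 128, 128)).astype(np.float32)

# Voxel-wise fusion of the two orthogonal-plane predictions into a single HR volume.
fused_hr_volume = 0.5 * (prediction_plane_a + prediction_plane_b)
print(fused_hr_volume.shape, fused_hr_volume.dtype)
```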
Affiliation(s)
- Huiqiao Xie
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Anees H Dhabaan
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jeffrey D Bradley
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Hui Mao
- Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
15
Lipin M, Bennett J, Ying GS, Yu Y, Ashtari M. Improving the Quantification of the Lateral Geniculate Nucleus in Magnetic Resonance Imaging Using a Novel 3D-Edge Enhancement Technique. Front Comput Neurosci 2021; 15:708866. [PMID: 34924983] [PMCID: PMC8677828] [DOI: 10.3389/fncom.2021.708866]
Abstract
The lateral geniculate nucleus (LGN) is a small, inhomogeneous structure that relays major sensory inputs from the retina to the visual cortex. LGN morphology has been intensively studied due to various retinal diseases, as well as in the context of normal brain development. However, many of the methods used for LGN structural evaluations have not adequately addressed the challenges presented by the suboptimal routine MRI imaging of this structure. Here, we propose a novel method of edge enhancement that allows for high reliability and accuracy with regard to LGN morphometry, using routine 3D-MRI imaging protocols. This new algorithm is based on modeling a small brain structure as a polyhedron with its faces, edges, and vertices fitted with one plane, the intersection of two planes, and the intersection of three planes, respectively. This algorithm dramatically increases the contrast-to-noise ratio between the LGN and its surrounding structures as well as doubling the original spatial resolution. To show the algorithm efficacy, two raters (MA and ML) measured LGN volumes bilaterally in 19 subjects using the edge-enhanced LGN extracted areas from the 3D-T1 weighted images. The averages of the left and right LGN volumes from the two raters were 175 ± 8 and 174 ± 9 mm3, respectively. The intra-class correlations between raters were 0.74 for the left and 0.81 for the right LGN volumes. The high contrast edge-enhanced LGN images presented here, from a 7-min routine 3T-MRI acquisition, is qualitatively comparable to previously reported LGN images that were acquired using a proton density sequence with 30–40 averages and 1.5-h of acquisition time. The proposed edge-enhancement algorithm is not limited only to the LGN, but can significantly improve the contrast-to-noise ratio of any small deep-seated gray matter brain structure that is prone to high-levels of noise and partial volume effects, and can also increase their morphometric accuracy and reliability. An immensely useful feature of the proposed algorithm is that it can be used retrospectively on noisy and low contrast 3D brain images previously acquired as part of any routine clinical MRI visit.
Affiliation(s)
- Mikhail Lipin
- Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Jean Bennett
- Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Gui-Shuang Ying
- Center for Preventative Ophthalmology and Biostatistics, Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Yinxi Yu
- Center for Preventative Ophthalmology and Biostatistics, Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Manzar Ashtari
- Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
16
Zhang YY, Zhao H, Lin JY, Wu SN, Liu XW, Zhang HD, Shao Y, Yang WF. Artificial Intelligence to Detect Meibomian Gland Dysfunction From in-vivo Laser Confocal Microscopy. Front Med (Lausanne) 2021; 8:774344. [PMID: 34901091] [PMCID: PMC8655877] [DOI: 10.3389/fmed.2021.774344]
Abstract
Background: In recent years, deep learning has been widely used in a variety of ophthalmic diseases. As a common ophthalmic disease, meibomian gland dysfunction (MGD) has a unique phenotype in in-vivo laser confocal microscope imaging (VLCMI). The purpose of our study was to investigate a deep learning algorithm to differentiate and classify obstructive MGD (OMGD), atrophic MGD (AMGD) and normal groups. Methods: In this study, a multi-layer deep convolution neural network (CNN) was trained using VLCMI from OMGD, AMGD and healthy subjects as verified by medical experts. The automatic differential diagnosis of OMGD, AMGD and healthy people was tested by comparing its image-based identification of each group with the medical expert diagnosis. The CNN was trained and validated with 4,985 and 1,663 VLCMI images, respectively. By using established enhancement techniques, 1,663 untrained VLCMI images were tested. Results: In this study, we included 2,766 healthy control VLCMIs, 2,744 from OMGD and 2,801 from AMGD. Of the three models, differential diagnostic accuracy of the DenseNet169 CNN was highest at over 97%. The sensitivity and specificity of the DenseNet169 model for OMGD were 88.8 and 95.4%, respectively; and for AMGD 89.4 and 98.4%, respectively. Conclusion: This study described a deep learning algorithm to automatically check and classify VLCMI images of MGD. By optimizing the algorithm, the classifier model displayed excellent accuracy. With further development, this model may become an effective tool for the differential diagnosis of MGD.
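The reported per-class sensitivity and specificity follow directly from a one-vs-rest confusion matrix, as the short sketch below shows; the class encoding and the synthetic labels are assumptions for illustration only.

```python
import numpy as np

def sensitivity_specificity(y_true: np.ndarray, y_pred: np.ndarray, positive_class: int):
    """One-vs-rest sensitivity and specificity for a single class."""
    tp = np.sum((y_true == positive_class) & (y_pred == positive_class))
    fn = np.sum((y_true == positive_class) & (y_pred != positive_class))
    tn = np.sum((y_true != positive_class) & (y_pred != positive_class))
    fp = np.sum((y_true != positive_class) & (y_pred == positive_class))
    return tp / (tp + fn), tn / (tn + fp)

# Assumed encoding: 0 = normal, 1 = OMGD, 2 = AMGD; labels below are synthetic.
rng = np.random.default_rng(3)
y_true = rng.integers(0, 3, size=500)
y_pred = np.where(rng.random(500) < 0.9, y_true, rng.integers(0, 3, size=500))
sens, spec = sensitivity_specificity(y_true, y_pred, positive_class=1)
print(f"sensitivity = {sens:.3f}, specificity = {spec:.3f}")
```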
Affiliation(s)
- Ye-Ye Zhang
- Department of Electronic Engineering, School of Science, Hainan University, Haikou, China
- Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
- Hui Zhao
- Department of Ophthalmology, Shanghai First People's Hospital, Shanghai Jiao Tong University, National Clinical Research Center for Eye Diseases, Shanghai, China
- Jin-Yan Lin
- Research Center for Advanced Optics and Photoelectronics, Department of Physics, College of Science, Shantou University, Shantou, China
- Shi-Nan Wu
- Jiangxi Centre of National Ophthalmology Clinical Research Center, Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Xi-Wang Liu
- Research Center for Advanced Optics and Photoelectronics, Department of Physics, College of Science, Shantou University, Shantou, China
- Department of Mathematics, College of Science, Shantou University, Shantou, China
- Hong-Dan Zhang
- Research Center for Advanced Optics and Photoelectronics, Department of Physics, College of Science, Shantou University, Shantou, China
- Department of Mathematics, College of Science, Shantou University, Shantou, China
- Yi Shao
- Jiangxi Centre of National Ophthalmology Clinical Research Center, Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Wei-Feng Yang
- Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
- Research Center for Advanced Optics and Photoelectronics, Department of Physics, College of Science, Shantou University, Shantou, China
- Department of Mathematics, College of Science, Shantou University, Shantou, China
17
Chun J, Park JC, Olberg S, Zhang Y, Nguyen D, Wang J, Kim JS, Jiang S. Intentional deep overfit learning (IDOL): A novel deep learning strategy for adaptive radiation therapy. Med Phys 2021; 49:488-496. [PMID: 34791672] [DOI: 10.1002/mp.15352]
Abstract
PURPOSE Applications of deep learning (DL) are essential to realizing an effective adaptive radiotherapy (ART) workflow. Despite the promise demonstrated by DL approaches in several critical ART tasks, there remain unsolved challenges to achieve satisfactory generalizability of a trained model in a clinical setting. Foremost among these is the difficulty of collecting a task-specific training dataset with high-quality, consistent annotations for supervised learning applications. In this study, we propose a tailored DL framework for patient-specific performance that leverages the behavior of a model intentionally overfitted to a patient-specific training dataset augmented from the prior information available in an ART workflow, an approach we term Intentional Deep Overfit Learning (IDOL). METHODS Implementing the IDOL framework in any task in radiotherapy consists of two training stages: (1) training a generalized model with a diverse training dataset of N patients, just as in the conventional DL approach, and (2) intentionally overfitting this general model to a small training dataset specific to the patient of interest (N + 1), generated through perturbations and augmentations of the available task- and patient-specific prior information, to establish a personalized IDOL model. The IDOL framework itself is task-agnostic and is, thus, widely applicable to many components of the ART workflow, three of which we use as a proof of concept here: the autocontouring task on replanning CTs for traditional ART, the MRI super-resolution (SR) task for MRI-guided ART, and the synthetic CT (sCT) reconstruction task for MRI-only ART. RESULTS In the replanning CT autocontouring task, the accuracy measured by the Dice similarity coefficient improves from 0.847 with the general model to 0.935 by adopting the IDOL model. In the case of MRI SR, the mean absolute error (MAE) is improved by 40% using the IDOL framework over the conventional model. Finally, in the sCT reconstruction task, the MAE is reduced from 68 to 22 HU by utilizing the IDOL framework. CONCLUSIONS In this study, we propose a novel IDOL framework for ART and demonstrate its feasibility using three ART tasks. We expect the IDOL framework to be especially useful in creating personally tailored models in situations with limited availability of training data but existing prior information, which is usually true in the medical setting in general and is especially true in ART.
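The two-stage recipe above (train a general model, then deliberately overfit a copy of it to a small, augmented patient-specific set) can be written compactly. The PyTorch sketch below uses toy tensors, a toy network, and arbitrary epoch counts and learning rates, so it illustrates the training schedule rather than the authors' implementation.

```python
import copy
import torch
import torch.nn as nn

def train(model: nn.Module, dataset, epochs: int, lr: float) -> nn.Module:
    """Simple supervised training loop with an L1 loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for x, y in dataset:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    return model

# Stage 1: generalized model trained on a diverse multi-patient dataset (toy tensors here).
net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
cohort_data = [(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)) for _ in range(8)]
general_model = train(net, cohort_data, epochs=2, lr=1e-3)

# Stage 2: intentionally overfit a copy to perturbed/augmented prior data of patient N + 1.
patient_prior = [(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)) for _ in range(2)]
idol_model = train(copy.deepcopy(general_model), patient_prior, epochs=20, lr=1e-4)
print(sum(p.numel() for p in idol_model.parameters()))
```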
Collapse
Affiliation(s)
- Jaehee Chun
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
| | - Justin C Park
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
| | - Sven Olberg
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
| | - You Zhang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
| | - Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
| | - Jing Wang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
| | - Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
| | - Steve Jiang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
| |
Collapse
|
18
|
Liu G, Cao Z, Xu Q, Zhang Q, Yang F, Xie X, Hao J, Shi Y, Bernhardt BC, He Y, Shi F, Lu G, Zhang Z. Recycling diagnostic MRI for empowering brain morphometric research - Critical & practical assessment on learning-based image super-resolution. Neuroimage 2021; 245:118687. [PMID: 34732323 DOI: 10.1016/j.neuroimage.2021.118687] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2021] [Revised: 10/17/2021] [Accepted: 10/27/2021] [Indexed: 10/19/2022] Open
Abstract
Preliminary studies have shown the feasibility of deep learning (DL)-based super-resolution (SR) techniques for reconstructing thick-slice/gap diagnostic MR images into high-resolution isotropic data, which would be of great significance for the brain research field if the vast amount of diagnostic MRI data could be put to use in brain morphometric studies. However, little evidence has addressed the practicability of this strategy, owing to the lack of large-sample, real-world data for constructing DL models. In this work, we employed a large cohort (n = 2052) of data comprising both low through-plane-resolution diagnostic and high-resolution isotropic brain MR images from the same subjects. By leveraging a series of SR approaches, including a proposed novel DL algorithm, the Structure Constrained Super Resolution Network (SCSRN), the diagnostic images were transformed into high-resolution isotropic data meeting the criteria of brain research in voxel-based and surface-based morphometric analyses. We comprehensively assessed image quality and the practicability of the reconstructed data in a variety of morphometric analysis scenarios, and further compared the performance of the SR approaches against the ground-truth high-resolution isotropic data. The results showed that (i) DL-based SR algorithms generally improve the quality of diagnostic images and render morphometric analysis more accurate, with the novel SCSRN approach performing best; (ii) accuracies vary across brain structures and methods; and (iii) performance gains were higher for voxel-based than for surface-based approaches. This study supports the view that DL-based image super-resolution can potentially recycle the huge amount of routine diagnostic brain MRI lying dormant in archives, turning it into useful data for neurometric research.
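Where such reconstructed volumes are benchmarked against ground-truth isotropic acquisitions, the image-quality comparison reduces to standard metrics. A minimal sketch, assuming NumPy arrays and scikit-image; array names are placeholders rather than the study's pipeline.

# Illustrative check of an SR output against a ground-truth isotropic volume,
# in the spirit of the image-quality assessment described above.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def assess_sr_volume(sr_vol: np.ndarray, gt_vol: np.ndarray) -> dict:
    """Compare a reconstructed isotropic volume with its ground truth."""
    data_range = float(gt_vol.max() - gt_vol.min())
    return {
        "psnr": peak_signal_noise_ratio(gt_vol, sr_vol, data_range=data_range),
        "ssim": structural_similarity(gt_vol, sr_vol, data_range=data_range),
    }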
Collapse
Affiliation(s)
- Gaoping Liu
- Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
| | - Zehong Cao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Qiang Xu
- Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
| | - Qirui Zhang
- Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
| | - Fang Yang
- Department of Neurology, Jinling Hospital, Nanjing University School of Medicine, Nanjing 210002, China
| | - Xinyu Xie
- Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
| | - Jingru Hao
- Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
| | - Yinghuan Shi
- Department of Computer Science and Technology, Nanjing University, Nanjing 210046, China
| | - Boris C Bernhardt
- Multimodal Imaging and Connectome Analysis Laboratory, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada
| | - Yichu He
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.
| | - Guangming Lu
- Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China; State Key Laboratory of Analytical Chemistry for Life Science, Nanjing University, Nanjing 210093, China.
| | - Zhiqiang Zhang
- Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China; State Key Laboratory of Analytical Chemistry for Life Science, Nanjing University, Nanjing 210093, China.
| |
Collapse
|
19
|
Huang B, Xiao H, Liu W, Zhang Y, Wu H, Wang W, Yang Y, Yang Y, Miller GW, Li T, Cai J. MRI super-resolution via realistic downsampling with adversarial learning. Phys Med Biol 2021; 66. [PMID: 34474407 DOI: 10.1088/1361-6560/ac232e] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2021] [Accepted: 09/02/2021] [Indexed: 11/12/2022]
Abstract
Many deep learning (DL) frameworks have demonstrated state-of-the-art performance in the super-resolution (SR) task of magnetic resonance imaging, but most of this performance has been achieved with simulated low-resolution (LR) images rather than LR images from real acquisition. Due to the limited generalizability of the SR network, enhancement is not guaranteed for real LR images because of the unreality of the training LR images. In this study, we proposed a DL-based SR framework with an emphasis on data construction to achieve better performance on real LR MR images. The framework comprised two steps: (a) downsampling training using a generative adversarial network (GAN) to construct more realistic and perfectly matched LR/high-resolution (HR) pairs. The downsampling GAN input was real LR and HR images. The generator translated the HR images to LR images and the discriminator distinguished the patch-level difference between the synthetic and real LR images. (b) SR training was performed using an enhanced deep super-resolution network (EDSR). In the controlled experiments, three EDSRs were trained using our proposed method, Gaussian blur, and k-space zero-filling. As for the data, liver MR images were obtained from 24 patients using breath-hold serial LR and HR scans (only HR images were used in the conventional methods). The k-space zero-filling group delivered almost zero enhancement on the real LR images and the Gaussian group produced a considerable number of artifacts. The proposed method exhibited significantly better resolution enhancement and fewer artifacts compared with the other two networks. Our method outperformed the Gaussian method by an improvement of 0.111 ± 0.016 in the structural similarity index and 2.76 ± 0.98 dB in the peak signal-to-noise ratio. The blind/reference-less image spatial quality evaluator metrics of the conventional Gaussian method and the proposed method were 46.6 ± 4.2 and 34.1 ± 2.4, respectively.
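Step (a), adversarial training of a realistic downsampler, follows the usual GAN update pattern. Below is a minimal sketch of one such update step, assuming a PyTorch setup; the generator, discriminator, and optimizers are placeholders rather than the published architecture.

# One downsampling-GAN update in the spirit of step (a): a generator turns HR
# images into synthetic LR images; a patch discriminator separates them from
# real LR acquisitions. Assumed PyTorch workflow, illustrative only.
import torch
import torch.nn as nn

def gan_step(gen, disc, hr, real_lr, opt_g, opt_d, adv=nn.BCEWithLogitsLoss()):
    # Discriminator: real LR patches -> 1, synthetic LR patches -> 0.
    real_logits = disc(real_lr)
    fake_logits = disc(gen(hr).detach())
    d_loss = (adv(real_logits, torch.ones_like(real_logits))
              + adv(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator with its synthetic LR images.
    fake_logits = disc(gen(hr))
    g_loss = adv(fake_logits, torch.ones_like(fake_logits))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()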
Collapse
Affiliation(s)
- Bangyan Huang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, People's Republic of China
| | - Haonan Xiao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, People's Republic of China
| | - Weiwei Liu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
| | - Yibao Zhang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
| | - Hao Wu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
| | - Weihu Wang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
| | - Yunhuan Yang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, People's Republic of China
| | - Yidong Yang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, People's Republic of China
| | - G Wilson Miller
- Department of Radiology and Medical Imaging, The University of Virginia, Charlottesville, VA, United States of America
| | - Tian Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, People's Republic of China
| | - Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, People's Republic of China
| |
Collapse
|
20
|
Deep Learning-Based Image Feature with Arthroscopy-Aided Early Diagnosis and Treatment of Meniscus Injury of Knee Joint. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:2254594. [PMID: 34567478 PMCID: PMC8463205 DOI: 10.1155/2021/2254594] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Revised: 08/27/2021] [Accepted: 08/30/2021] [Indexed: 12/11/2022]
Abstract
The aim of this study is to explore the clinical effect of deep learning-based MRI-assisted arthroscopy in the early treatment of knee meniscus sports injury. Based on convolutional neural network algorithm, Adam algorithm was introduced to optimize it, and the magnetic resonance imaging (MRI) image super-resolution reconstruction model (SRCNN) was established. Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were compared between SRCNN and other algorithms. Sixty patients with meniscus injury of knee joint were studied. Arthroscopic surgery was performed according to the patients' actual type of injury, and knee scores were evaluated for all patients. Then, postoperative scores and MRI results were analyzed. The results showed that the PSNR and SSIM values of the SRCNN algorithm were (42.19 ± 4.37) dB and 0.9951, respectively, which were significantly higher than those of other algorithms (P < 0.05). Among patients with meniscus injury, 17 cases (28.33%) were treated with meniscus suture, 39 cases (65.00%) underwent secondary resection, 3 cases (5.00%) underwent partial resection, and 1 case (1.67%) underwent full resection. After meniscus suture, secondary resection, partial resection, and total resection, the knee function scores of patients after treatment were (83.17 ± 8.63), (80.06 ± 7.96), (84.34 ± 7.74), and (85.52 ± 5.97), respectively. There was no great difference in knee function scores after different methods of treatment (P > 0.05), and there were considerable differences compared with those before treatment (P < 0.01). Compared with the results of arthroscopy, there was no significant difference in the grading of meniscus injury by MRI (P > 0.05). To sum up, the SRCNN algorithm based on the deep convolutional network algorithm improved the MRI image quality and the diagnosis of knee meniscus injuries. Arthroscopic knee surgery had good results and had great clinical application and promotion value.
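The SRCNN referenced above is, in its classic form, a three-layer convolutional network applied to a bicubically upsampled input and trained with Adam. A minimal sketch, assuming PyTorch; the layer sizes follow the original SRCNN design and are an assumption about this study's exact configuration.

# Classic three-layer SRCNN (feature extraction, non-linear mapping,
# reconstruction), optimized with Adam as described above. Illustrative only.
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self, channels: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Input: bicubically upsampled LR image; output: SR estimate.
        return self.body(x)

# optimizer = torch.optim.Adam(SRCNN().parameters(), lr=1e-4)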
Collapse
|
21
|
Chun J, Lewis B, Ji Z, Shin JI, Park JC, Kim JS, Kim T. Evaluation of super-resolution on 50 pancreatic cancer patients with real-time cine MRI from 0.35T MRgRT. Biomed Phys Eng Express 2021; 7:055020. [PMID: 34375963 DOI: 10.1088/2057-1976/ac1c51] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2021] [Accepted: 08/10/2021] [Indexed: 12/25/2022]
Abstract
MR-guided radiotherapy (MRgRT) systems provide excellent soft tissue imaging immediately prior to and in real time during radiation delivery for cancer treatment. However, 2D cine MRI often has limited spatial resolution due to high temporal resolution. This work applies a super-resolution machine learning framework to 3.5 mm pixel edge length, low resolution (LR), sagittal 2D cine MRI images acquired on an MRgRT system to generate 0.9 mm pixel edge length, super resolution (SR), images originally acquired at 4 frames per second (FPS). LR images were collected from 50 pancreatic cancer patients treated on a ViewRay MR-LINAC. SR images were evaluated using three methods. 1) The first method utilized intrinsic image quality metrics for evaluation. 2) The second used relative metrics including edge detection and structural similarity index (SSIM). 3) Finally, automatically generated tumor contours were created on both low resolution and super resolution images to evaluate target delineation and compared using Dice and SSIM. Intrinsic image quality metrics all had statistically significant improvements for SR images versus LR images, with mean (±1 SD) BRISQUE scores of 29.65 ± 2.98 and 42.48 ± 0.98 for SR and LR, respectively. SR images showed good agreement with LR images in SSIM evaluation, indicating there was not significant distortion of the images. Comparison of LR and SR images with paired high resolution (HR) 3D images showed that SR images had a mean (±1 SD) SSIM value of 0.633 ± 0.063 and LR a value of 0.587 ± 0.067 (p ≪ 0.05). Contours generated on SR images were also more robust to noise addition than those generated on LR images. This study shows that super-resolution with a machine learning framework can generate high spatial resolution images from 4 FPS low spatial resolution cine MRI acquired on the ViewRay MR-LINAC while maintaining tumor contour quality and without significant acquisition or post-processing delay.
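For evaluation method 3), contour agreement is scored with the Dice similarity coefficient. A minimal NumPy sketch, with hypothetical boolean mask inputs; this is a generic implementation, not the study's evaluation code.

# Dice similarity coefficient between two binary tumor-contour masks.
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); returns 1.0 for two empty masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom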
Collapse
Affiliation(s)
- Jaehee Chun
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
| | - Benjamin Lewis
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO 63110, United States of America
| | - Zhen Ji
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO 63110, United States of America
| | - Jae-Ik Shin
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
| | - Justin C Park
- Department of Radiation Oncology, University of Texas Southwestern, Dallas, TX 75390, United States of America
| | - Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
| | - Taeho Kim
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO 63110, United States of America
| |
Collapse
|
22
|
Hadjiiski L, Samala R, Chan HP. Image Processing Analytics: Enhancements and Segmentation. Mol Imaging 2021. [DOI: 10.1016/b978-0-12-816386-3.00057-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022] Open
|
23
|
Nie X, Saleh Z, Kadbi M, Zakian K, Deasy J, Rimner A, Li G. A super-resolution framework for the reconstruction of T2-weighted (T2w) time-resolved (TR) 4DMRI using T1w TR-4DMRI as the guidance. Med Phys 2020; 47:3091-3102. [PMID: 32166757 DOI: 10.1002/mp.14136] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Revised: 01/30/2020] [Accepted: 03/05/2020] [Indexed: 12/25/2022] Open
Abstract
PURPOSE The purpose of this study was to develop a T2-weighted (T2w) time-resolved (TR) four-dimensional magnetic resonance imaging (4DMRI) reconstruction technique with higher soft-tissue contrast for multiple breathing cycle motion assessment by building a super-resolution (SR) framework using the T1w TR-4DMRI reconstruction as guidance. METHODS The multi-breath T1w TR-4DMRI was reconstructed by deforming a high-resolution (HR: 2 × 2 × 2 mm³) volumetric breath-hold (BH, 20 s) three-dimensional magnetic resonance imaging (3DMRI) image to a series of low-resolution (LR: 5 × 5 × 5 mm³) 3D cine images at a 2 Hz frame rate in free-breathing (FB, 40 s) using an enhanced Demons algorithm, namely [T1BH→FB] reconstruction. Within the same imaging session, respiratory-correlated (RC) T2w 4DMRI (2 × 2 × 2 mm³) was acquired based on an internal navigator to gain HR T2w (T2HR) in three states (full exhalation and mid and full inhalation) in ~5 min. Minor binning artifacts in the RC-4DMRI were automatically identified based on voxel intensity correlation (VIC) between consecutive slices as outliers (VIC < VICmean − σ) and corrected by deforming the artifact slices to interpolated slices from the adjacent slices iteratively until no outliers were identified. A T2HR image with minimal deformation (<1 cm at the diaphragm) from the T1BH image was selected for multi-modal B-Spline deformable image registration (DIR) to establish the T2HR-T1BH voxel correspondence. Two approaches to reconstruct T2w TR-4DMRI were investigated: (A) T2HR→[T1BH→FB]: to deform T2w HR to T1w BH only as T1w TR-4DMRI was reconstructed, and combine the two displacement vector fields (DVFs) to reconstruct T2w TR-4DMRI, and (B) [T2HR←T1BH]→FB: to deform T1w BH to T2w HR first and apply the deformed T1w BH to reconstruct T2w TR-4DMRI. The reconstruction times were similar, 8-12 min per volume. To validate the two methods, T2w- and T1w-mapped 4D XCAT digital phantoms were utilized with three synthetic spherical tumors (ϕ = 2.0, 3.0, and 4.0 cm) in the lower or mid lobes as the ground truth to evaluate the tumor location (the center of mass, COM), size (volume ratio, %V), and shape (Dice index). Six lung cancer patients were scanned under an IRB-approved protocol and the T2w TR-4DMRI images reconstructed from the two methods were compared based on the preservation of the three tumor characteristics. The local tumor-contained image quality was also characterized using the VIC and structural similarity (SSIM) indexes. RESULTS In the 4D digital phantom, excellent tumor alignment after T2HR-T1HR DIR is achieved: ∆COM = 0.8 ± 0.5 mm, %V = 1.06 ± 0.02, and Dice = 0.91 ± 0.03, in both deformation directions using the DIR-target image as the reference. In patients, binning artifacts are corrected with improved image quality: average VIC increases from 0.92 ± 0.03 to 0.95 ± 0.01. Both T2w TR-4DMRI reconstruction methods produce similar tumor alignment errors, ∆COM = 2.9 ± 0.6 mm. However, method B ([T2HR←T1BH]→FB) produces superior results in preserving more T2w tumor features, with a higher %V = 0.99 ± 0.03, Dice = 0.81 ± 0.06, VIC = 0.85 ± 0.06, and SSIM = 0.65 ± 0.10 in the T2w TR-4DMRI images. CONCLUSIONS This study has demonstrated the feasibility of T2w TR-4DMRI reconstruction with high soft-tissue contrast and adequately preserved tumor position, size, and shape in multiple breathing cycles. The T2w-centric DIR (method B) produces a superior solution for the SR-based framework of T2w TR-4DMRI reconstruction with highly preserved tumor characteristics and local image features, which are useful for tumor delineation and motion management in radiation therapy.
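Both reconstruction routes above hinge on applying and chaining displacement vector fields (DVFs). A minimal sketch of backward warping and DVF composition, assuming NumPy/SciPy and voxel-unit displacements; this illustrates the general operation, not the study's registration code.

# Backward-warp a 3D volume with a DVF and compose two DVFs, as needed when
# chaining deformations such as T2HR -> T1BH -> FB described above.
import numpy as np
from scipy.ndimage import map_coordinates

def warp(volume: np.ndarray, dvf: np.ndarray, order: int = 1) -> np.ndarray:
    """Backward warping: out(x) = volume(x + dvf(x)); dvf shape is (3, Z, Y, X)."""
    grid = np.indices(volume.shape, dtype=np.float64)
    return map_coordinates(volume, grid + dvf, order=order, mode="nearest")

def compose(dvf_first: np.ndarray, dvf_second: np.ndarray) -> np.ndarray:
    """Displacement of 'apply dvf_first, then dvf_second':
    w(x) = dvf_second(x) + dvf_first(x + dvf_second(x))."""
    resampled = np.stack([warp(component, dvf_second) for component in dvf_first])
    return dvf_second + resampled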
Collapse
Affiliation(s)
- Xingyu Nie
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Ziad Saleh
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Mo Kadbi
- Philips Healthcare, MR Therapy, Cleveland, OH, USA
| | - Kristen Zakian
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Joseph Deasy
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Andreas Rimner
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Guang Li
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| |
Collapse
|
24
|
Chong JJR. Deep-Learning Super-Resolution MRI: Getting Something From Nothing. J Magn Reson Imaging 2019; 51:1140-1141. [PMID: 31587413 DOI: 10.1002/jmri.26939] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2019] [Accepted: 09/09/2019] [Indexed: 01/01/2023] Open
Affiliation(s)
- Jaron J R Chong
- The Department of Radiology, McGill University, Montreal, Québec, Canada
| |
Collapse
|
25
|
Kim T, Park JC, Gach HM, Chun J, Mutic S. Technical Note: Real‐time 3D MRI in the presence of motion for MRI‐guided radiotherapy: 3D Dynamic keyhole imaging with super‐resolution. Med Phys 2019; 46:4631-4638. [DOI: 10.1002/mp.13748] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2019] [Revised: 06/21/2019] [Accepted: 07/22/2019] [Indexed: 12/11/2022] Open
Affiliation(s)
- Taeho Kim
- Department of Radiation Oncology, Washington University School of Medicine, St Louis, MO 63110, USA
| | - Justin C. Park
- Department of Radiation Oncology, Washington University School of Medicine, St Louis, MO 63110, USA
| | - H. Michael Gach
- Department of Radiation Oncology, Washington University School of Medicine, St Louis, MO 63110, USA
- Department of Radiology and Biomedical Engineering, Washington University in St. Louis, St Louis, MO 63110, USA
| | - Jaehee Chun
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, South Korea
| | - Sasa Mutic
- Department of Radiation Oncology, Washington University School of Medicine, St Louis, MO 63110, USA
| |
Collapse
|