1
Zhang Y, Wang X, Zhu P, Lu X, Xiao J, Zhou W, Li Z, Peng X. PRN: progressive reasoning network and its image completion applications. Sci Rep 2024; 14:23519. PMID: 39384878; PMCID: PMC11464508; DOI: 10.1038/s41598-024-72368-1.
Abstract
Ancient murals embody profound historical, cultural, scientific, and artistic values, yet many are afflicted with challenges such as pigment shedding or missing parts. While deep learning-based completion techniques have yielded remarkable results in restoring natural images, their application to damaged murals has been unsatisfactory due to data shifts and limited modeling efficacy. This paper proposes a novel progressive reasoning network designed specifically for mural image completion, inspired by the mural painting process. The proposed network comprises three key modules: a luminance reasoning module, a sketch reasoning module, and a color fusion module. The first two modules are based on the double-codec framework and are designed to infer the missing areas' luminance and sketch information. The final module then utilizes a paired-associate learning approach to reconstruct the color image. This network utilizes two parallel, complementary pathways to estimate the luminance and sketch maps of a damaged mural. Subsequently, these two maps are combined to synthesize a complete color image. Experimental results indicate that the proposed network excels in restoring clearer structures and more vivid colors, surpassing current state-of-the-art methods in both quantitative and qualitative assessments for repairing damaged images. Our code and results will be publicly accessible at https://github.com/albestobe/PRN.
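A minimal PyTorch-style sketch may help make the described data flow concrete: two parallel branches infer luminance and sketch maps, which a fusion module combines into a color image. Everything here (the simple codec stand-in, channel counts, class names) is a hypothetical illustration, not the authors' released implementation, which is linked in the abstract.

```python
import torch
import torch.nn as nn

def codec(in_ch, out_ch):
    # Stand-in for the paper's double-codec (encoder-decoder) branch.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )

class ProgressiveReasoningSketch(nn.Module):
    """Two parallel branches estimate luminance and sketch maps from a
    damaged mural plus its mask; a fusion module synthesizes the color image."""
    def __init__(self):
        super().__init__()
        self.luminance = codec(4, 1)   # RGB + mask -> luminance map
        self.sketch = codec(4, 1)      # RGB + mask -> sketch (edge) map
        self.fusion = codec(2, 3)      # luminance + sketch -> RGB image

    def forward(self, damaged_rgb, mask):
        x = torch.cat([damaged_rgb, mask], dim=1)
        lum = self.luminance(x)
        skt = self.sketch(x)
        return self.fusion(torch.cat([lum, skt], dim=1))

net = ProgressiveReasoningSketch()
out = net(torch.rand(1, 3, 64, 64), torch.ones(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```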
Affiliation(s)
- Yongqin Zhang: School of Archaeology and Cultural Heritage, Zhengzhou University, Zhengzhou 450001, China
- Xiaoyu Wang: School of Information Science and Technology, Northwest University, Xi'an 710127, China
- Panpan Zhu: School of Information Science and Technology, Northwest University, Xi'an 710127, China
- Xuan Lu: Information and Data Department, Shaanxi History Museum, Xi'an 710061, China
- Jinsheng Xiao: Electronic Information School, Wuhan University, Wuhan 430072, China
- Wei Zhou: School of Information Science and Technology, Northwest University, Xi'an 710127, China
- Zhan Li: School of Information Science and Technology, Northwest University, Xi'an 710127, China
- Xianlin Peng: Art School, Northwest University, Xi'an 710127, China
2
Iqbal MS, Belal Bin Heyat M, Parveen S, Ammar Bin Hayat M, Roshanzamir M, Alizadehsani R, Akhtar F, Sayeed E, Hussain S, Hussein HS, Sawan M. Progress and trends in neurological disorders research based on deep learning. Comput Med Imaging Graph 2024; 116:102400. PMID: 38851079; DOI: 10.1016/j.compmedimag.2024.102400.
Abstract
In recent years, deep learning (DL) has emerged as a powerful tool in clinical imaging, offering unprecedented opportunities for the diagnosis and treatment of neurological disorders (NDs). This comprehensive review explores the multifaceted role of DL techniques in leveraging vast datasets to advance our understanding of NDs and improve clinical outcomes. Beginning with a systematic literature review, we delve into the utilization of DL, particularly focusing on multimodal neuroimaging data analysis, a domain that has witnessed rapid progress and garnered significant scientific interest. Our study categorizes and critically analyses numerous DL models, including Convolutional Neural Networks (CNNs), LSTM-CNN, GAN, and VGG, to understand their performance across different types of neurological diseases. Through this analysis, we identify key benchmarks and datasets utilized in training and testing DL models, shedding light on the challenges and opportunities in clinical neuroimaging research. Moreover, we discuss the effectiveness of DL in real-world clinical scenarios, emphasizing its potential to revolutionize ND diagnosis and therapy. By synthesizing the existing literature and describing future directions, this review not only provides insights into the current state of DL applications in ND analysis but also paves the way for the development of more efficient and accessible DL techniques. Finally, our findings underscore the transformative impact of DL in reshaping the landscape of clinical neuroimaging, offering hope for enhanced patient care and groundbreaking discoveries in the field of neurology. This review is beneficial for neuropathologists and new researchers in this field.
Affiliation(s)
- Muhammad Shahid Iqbal: Department of Computer Science and Information Technology, Women University of Azad Jammu & Kashmir, Bagh, Pakistan
- Md Belal Bin Heyat: CenBRAIN Neurotech Center of Excellence, School of Engineering, Westlake University, Hangzhou, Zhejiang, China
- Saba Parveen: College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518060, China
- Mohamad Roshanzamir: Department of Computer Engineering, Faculty of Engineering, Fasa University, Fasa, Iran
- Roohallah Alizadehsani: Institute for Intelligent Systems Research and Innovation, Deakin University, VIC 3216, Australia
- Faijan Akhtar: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Eram Sayeed: Kisan Inter College, Dhaurahara, Kushinagar, India
- Sadiq Hussain: Department of Examination, Dibrugarh University, Assam 786004, India
- Hany S Hussein: Electrical Engineering Department, Faculty of Engineering, King Khalid University, Abha 61411, Saudi Arabia; Electrical Engineering Department, Faculty of Engineering, Aswan University, Aswan 81528, Egypt
- Mohamad Sawan: CenBRAIN Neurotech Center of Excellence, School of Engineering, Westlake University, Hangzhou, Zhejiang, China
3
Kang SK, Heo M, Chung JY, Kim D, Shin SA, Choi H, Chung A, Ha JM, Kim H, Lee JS. Clinical Performance Evaluation of an Artificial Intelligence-Powered Amyloid Brain PET Quantification Method. Nucl Med Mol Imaging 2024; 58:246-254. PMID: 38932756; PMCID: PMC11196433; DOI: 10.1007/s13139-024-00861-6.
Abstract
Purpose This study assesses the clinical performance of BTXBrain-Amyloid, an artificial intelligence-powered software for quantifying amyloid uptake in brain PET images. Methods 150 amyloid brain PET images were visually assessed by experts and categorized as negative or positive. The standardized uptake value ratio (SUVR) was calculated with cerebellum grey matter as the reference region, and receiver operating characteristic (ROC) and precision-recall (PR) analyses for BTXBrain-Amyloid were conducted. For comparison, the same image processing and analysis were performed using the Statistical Parametric Mapping (SPM) program. In addition, to evaluate the spatial normalization (SN) performance, mutual information (MI) between the MRI template and spatially normalized PET images was calculated, and SPM group analysis was conducted. Results Both the BTXBrain and SPM methods discriminated between negative and positive groups. However, BTXBrain exhibited a lower SUVR standard deviation (0.06 and 0.21 for negative and positive, respectively) than the SPM method (0.11 and 0.25). In ROC analysis, BTXBrain had an AUC of 0.979, compared to 0.959 for SPM, while PR curves showed an AUC of 0.983 for BTXBrain and 0.949 for SPM. At the optimal cut-off, the sensitivity and specificity were 0.983 and 0.921 for BTXBrain and 0.917 and 0.921 for SPM12, respectively. MI evaluation also favored BTXBrain (0.848 vs. 0.823), indicating improved SN. In SPM group analysis, BTXBrain exhibited higher sensitivity in detecting basal ganglia differences between negative and positive groups. Conclusion BTXBrain-Amyloid outperformed SPM in this clinical performance evaluation, also demonstrating superior SN and improved detection of deep brain differences. These results suggest the potential of BTXBrain-Amyloid as a valuable tool for clinical amyloid PET image evaluation.
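The SUVR computation described in the Methods is straightforward to express in code. Below is a minimal NumPy sketch, assuming a PET volume already co-registered with binary masks for the target region and the cerebellum grey-matter reference; the arrays and the cut-off value are illustrative only.

```python
import numpy as np

def suvr(pet, target_mask, cerebellum_gm_mask):
    """Standardized uptake value ratio: mean target uptake divided by
    mean uptake in the cerebellum grey-matter reference region."""
    return pet[target_mask].mean() / pet[cerebellum_gm_mask].mean()

# Random data standing in for a co-registered PET volume and masks.
pet = np.random.rand(128, 128, 80)
target = np.zeros_like(pet, dtype=bool); target[40:60, 40:60, 30:40] = True
reference = np.zeros_like(pet, dtype=bool); reference[50:70, 80:100, 10:20] = True

ratio = suvr(pet, target, reference)
amyloid_positive = ratio > 1.1  # cut-off value is illustrative, not the study's
print(ratio, amyloid_positive)
```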
Affiliation(s)
- Seung Kwan Kang: Brightonix Imaging Inc., Seoul, Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea
- Mina Heo: Department of Neurology, College of Medicine, Chosun University and Chosun University Hospital, 365 Pilmun-Daero, Dong-Gu, Gwangju, South Korea
- Ji Yeon Chung: Department of Neurology, College of Medicine, Chosun University and Chosun University Hospital, 365 Pilmun-Daero, Dong-Gu, Gwangju, South Korea
- Daewoon Kim: Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Korea; Artificial Intelligence Institute, Seoul National University, Seoul, Korea
- Hongyoon Choi: Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-Ro, Jongno-Gu, Seoul 03080, Korea
- Ari Chung: Department of Nuclear Medicine, College of Medicine, Chosun University and Chosun University Hospital, Gwangju, Korea
- Jung-Min Ha: Department of Nuclear Medicine, College of Medicine, Chosun University and Chosun University Hospital, Gwangju, Korea
- Hoowon Kim: Department of Neurology, College of Medicine, Chosun University and Chosun University Hospital, 365 Pilmun-Daero, Dong-Gu, Gwangju, South Korea
- Jae Sung Lee: Brightonix Imaging Inc., Seoul, Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea; Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Korea; Artificial Intelligence Institute, Seoul National University, Seoul, Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-Ro, Jongno-Gu, Seoul 03080, Korea
4
Nam JG, Kang SK, Choi H, Hong W, Park J, Goo JM, Lee JS, Park CM. Sixty-four-fold data reduction of chest radiographs using a super-resolution convolutional neural network. Br J Radiol 2024; 97:632-639. PMID: 38265235; PMCID: PMC11027241; DOI: 10.1093/bjr/tqae006.
Abstract
OBJECTIVES To develop and validate a super-resolution (SR) algorithm generating clinically feasible chest radiographs from 64-fold reduced data. METHODS An SR convolutional neural network was trained to produce original-resolution images (output) from 64-fold reduced images (input) using 128 × 128 patches (n = 127 030). For validation, 112 radiographs, including those with pneumothorax (n = 17), nodules (n = 20), consolidations (n = 18), and ground-glass opacity (GGO; n = 16), were collected. Three image sets were prepared: the original images and those reconstructed from 64-fold reduced data using SR and conventional linear interpolation (LI). The mean squared error (MSE) was calculated to measure the similarity between the reconstructed and original images, and image noise was quantified. Three thoracic radiologists evaluated the quality of each image and decided whether any abnormalities were present. RESULTS The SR images were more similar to the original images than the LI-reconstructed images (MSE: 9269 ± 1015 vs. 9429 ± 1057; P = .02). The SR images showed lower measured noise and were rated as less noisy by the three radiologists than both the original and LI-reconstructed images (Ps < .01). The radiologists' pooled sensitivity with the SR-reconstructed images was not significantly different from that with the original images for detecting pneumothorax (SR vs. original, 90.2% [46/51] vs. 96.1% [49/51]; P = .19), nodule (90.0% [54/60] vs. 85.0% [51/60]; P = .26), consolidation (100% [54/54] vs. 96.3% [52/54]; P = .50), and GGO (91.7% [44/48] vs. 95.8% [46/48]; P = .69). CONCLUSIONS SR-reconstructed chest radiographs using 64-fold reduced data showed a lower noise level than the original images, with equivalent sensitivity for detecting major abnormalities. ADVANCES IN KNOWLEDGE This is the first study applying super-resolution to data reduction of chest radiographs.
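For context, 64-fold data reduction corresponds to downsampling each axis of a 2D radiograph by a factor of 8. A short sketch of the conventional linear-interpolation (LI) baseline and the MSE similarity measure used above, with a random array standing in for a radiograph; the SR network itself is not reproduced here.

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
original = rng.random((2048, 2048)).astype(np.float32)  # stand-in radiograph

reduced = zoom(original, 1 / 8, order=1)   # 8x per axis -> 64-fold fewer pixels
li_recon = zoom(reduced, 8, order=1)       # linear-interpolation baseline

mse = np.mean((li_recon - original) ** 2)  # similarity to the original image
print(reduced.shape, li_recon.shape, mse)
```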
Affiliation(s)
- Ju Gang Nam: Department of Radiology, Seoul National University Hospital and College of Medicine, Seoul 03080, Republic of Korea; Artificial Intelligence Collaborative Network, Seoul National University Hospital, Seoul 03080, Republic of Korea
- Hyewon Choi: Department of Radiology, Chung-Ang University Hospital and College of Medicine, Seoul 06973, Republic of Korea
- Wonju Hong: Department of Radiology, Hallym University Sacred Heart Hospital, Anyang 14068, Republic of Korea
- Jongsoo Park: Department of Radiology, Yeungnam University Medical Center, Daegu 42415, Republic of Korea
- Jin Mo Goo: Department of Radiology, Seoul National University Hospital and College of Medicine, Seoul 03080, Republic of Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03080, Republic of Korea
- Jae Sung Lee: Brightonix Imaging Inc, Seoul 04782, Republic of Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03080, Republic of Korea; Department of Nuclear Medicine, Seoul National University Hospital and College of Medicine, Seoul 03080, Republic of Korea
- Chang Min Park: Department of Radiology, Seoul National University Hospital and College of Medicine, Seoul 03080, Republic of Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03080, Republic of Korea; Institute of Medical and Biological Engineering, Seoul National University Medical Research Center, Seoul 03080, Republic of Korea
5
Hossain MB, Shinde RK, Oh S, Kwon KC, Kim N. A Systematic Review and Identification of the Challenges of Deep Learning Techniques for Undersampled Magnetic Resonance Image Reconstruction. Sensors (Basel) 2024; 24:753. PMID: 38339469; PMCID: PMC10856856; DOI: 10.3390/s24030753.
Abstract
Deep learning (DL) in magnetic resonance imaging (MRI) shows excellent performance in image reconstruction from undersampled k-space data. Artifact-free and high-quality MRI reconstruction is essential for ensuring accurate diagnosis, supporting clinical decision-making, enhancing patient safety, facilitating efficient workflows, and contributing to the validity of research studies and clinical trials. Recently, deep learning has demonstrated several advantages over conventional MRI reconstruction methods. Conventional methods rely on manual feature engineering to capture complex patterns and are usually computationally demanding due to their iterative nature. Conversely, DL methods use neural networks with hundreds of thousands of parameters and automatically learn relevant features and representations directly from the data. Nevertheless, there are some limitations of DL-based techniques for MRI reconstruction tasks, such as the need for large, labeled datasets, the possibility of overfitting, and the complexity of model training. Researchers are striving to develop DL models that are more efficient, adaptable, and capable of providing valuable information for medical practitioners. We provide a comprehensive overview of the current developments and clinical uses by focusing on state-of-the-art DL architectures and tools used in MRI reconstruction. This study has three objectives. Our main objective is to describe how various DL designs have changed over time and to discuss cutting-edge strategies, including their advantages and disadvantages. To this end, data pre- and post-processing approaches are assessed using publicly available MRI datasets and source codes. Secondly, this work aims to provide an extensive overview of the ongoing research on transformers and deep convolutional neural networks for rapid MRI reconstruction. Thirdly, we discuss several network training strategies, such as supervised, unsupervised, transfer learning, and federated learning, for rapid and efficient MRI reconstruction. Consequently, this article provides significant resources for future improvement of MRI data pre-processing and fast image reconstruction.
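As background for the reconstruction task this review surveys, the following NumPy sketch shows retrospective Cartesian undersampling of k-space and the zero-filled inverse-FFT baseline that DL methods aim to improve; the sampling pattern and synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((256, 256))                # stand-in for a fully sampled slice
kspace = np.fft.fftshift(np.fft.fft2(image))  # fully sampled k-space

# Keep every 4th phase-encode line plus a fully sampled low-frequency band.
mask = np.zeros((256, 256), dtype=bool)
mask[::4, :] = True
mask[112:144, :] = True                       # central 32 lines (autocalibration)

undersampled = np.where(mask, kspace, 0)
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
print(f"sampled fraction: {mask.mean():.2f}")
# zero_filled is the aliased baseline a DL model would learn to de-artifact.
```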
Affiliation(s)
- Md. Biddut Hossain: School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
- Rupali Kiran Shinde: School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
- Sukhoon Oh: Research Equipment Operation Department, Korea Basic Science Institute, Cheongju-si 28119, Chungcheongbuk-do, Republic of Korea
- Ki-Chul Kwon: School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
- Nam Kim: School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
6
Xu K, Li T, Khan MS, Gao R, Antic SL, Huo Y, Sandler KL, Maldonado F, Landman BA. Body composition assessment with limited field-of-view computed tomography: A semantic image extension perspective. Med Image Anal 2023; 88:102852. PMID: 37276799; PMCID: PMC10527087; DOI: 10.1016/j.media.2023.102852.
Abstract
Field-of-view (FOV) tissue truncation beyond the lungs is common in routine lung screening computed tomography (CT). This poses limitations for opportunistic CT-based body composition (BC) assessment, as key anatomical structures are missing. Traditionally, extending the FOV of CT is considered a CT reconstruction problem using limited data. However, this approach relies on projection domain data, which might not be available in practice. In this work, we formulate the problem from the semantic image extension perspective, which only requires image data as inputs. The proposed two-stage method identifies a new FOV border based on the estimated extent of the complete body and imputes missing tissues in the truncated region. The training samples are simulated using CT slices with the complete body in the FOV, making the model development self-supervised. We evaluate the validity of the proposed method in automatic BC assessment using lung screening CT with limited FOV. The proposed method effectively restores the missing tissues and reduces the BC assessment error introduced by FOV tissue truncation. In BC assessment for large-scale lung screening CT datasets, this correction improves both the intra-subject consistency and the correlation with anthropometric approximations. The developed method is available at https://github.com/MASILab/S-EFOV.
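The self-supervised training setup described above (simulating truncation from slices with the complete body in the FOV) can be sketched as follows; the circular FOV mask, its radius, and the air fill value are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def simulate_fov_truncation(ct_slice, fov_radius_px):
    """Create a (truncated input, complete target) training pair by applying
    a circular field-of-view mask to a slice with the complete body in FOV."""
    h, w = ct_slice.shape
    yy, xx = np.ogrid[:h, :w]
    inside = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= fov_radius_px ** 2
    truncated = np.where(inside, ct_slice, -1024)  # air HU outside the FOV
    return truncated, inside, ct_slice

complete = np.random.randint(-1000, 1000, size=(512, 512)).astype(np.int16)
x, fov_mask, target = simulate_fov_truncation(complete, fov_radius_px=180)
```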
Affiliation(s)
- Kaiwen Xu: Vanderbilt University, 2301 Vanderbilt Place, Nashville, 37235, United States
- Thomas Li: Vanderbilt University, 2301 Vanderbilt Place, Nashville, 37235, United States
- Mirza S Khan: Vanderbilt University Medical Center, 1211 Medical Center Drive, Nashville, 37232, United States
- Riqiang Gao: Vanderbilt University, 2301 Vanderbilt Place, Nashville, 37235, United States
- Sanja L Antic: Vanderbilt University Medical Center, 1211 Medical Center Drive, Nashville, 37232, United States
- Yuankai Huo: Vanderbilt University, 2301 Vanderbilt Place, Nashville, 37235, United States
- Kim L Sandler: Vanderbilt University Medical Center, 1211 Medical Center Drive, Nashville, 37232, United States
- Fabien Maldonado: Vanderbilt University Medical Center, 1211 Medical Center Drive, Nashville, 37232, United States
- Bennett A Landman: Vanderbilt University, 2301 Vanderbilt Place, Nashville, 37235, United States; Vanderbilt University Medical Center, 1211 Medical Center Drive, Nashville, 37232, United States
7
Elliott ML, Hanford LC, Hamadeh A, Hilbert T, Kober T, Dickerson BC, Mair RW, Eldaief MC, Buckner RL. Brain morphometry in older adults with and without dementia using extremely rapid structural scans. Neuroimage 2023; 276:120173. PMID: 37201641; PMCID: PMC10330834; DOI: 10.1016/j.neuroimage.2023.120173.
Abstract
T1-weighted structural MRI is widely used to measure brain morphometry (e.g., cortical thickness and subcortical volumes). Accelerated scans as fast as one minute or less are now available but it is unclear if they are adequate for quantitative morphometry. Here we compared the measurement properties of a widely adopted 1.0 mm resolution scan from the Alzheimer's Disease Neuroimaging Initiative (ADNI = 5'12'') with two variants of highly accelerated 1.0 mm scans (compressed-sensing, CSx6 = 1'12''; and wave-controlled aliasing in parallel imaging, WAVEx9 = 1'09'') in a test-retest study of 37 older adults aged 54 to 86 (including 19 individuals diagnosed with a neurodegenerative dementia). Rapid scans produced highly reliable morphometric measures that largely matched the quality of morphometrics derived from the ADNI scan. Regions of lower reliability and relative divergence between ADNI and rapid scan alternatives tended to occur in midline regions and regions with susceptibility-induced artifacts. Critically, the rapid scans yielded morphometric measures similar to the ADNI scan in regions of high atrophy. The results converge to suggest that, for many current uses, extremely rapid scans can replace longer scans. As a final test, we explored the possibility of a 0'49'' 1.2 mm CSx6 structural scan, which also showed promise. Rapid structural scans may benefit MRI studies by shortening the scan session and reducing cost, minimizing opportunity for movement, creating room for additional scan sequences, and allowing for the repetition of structural scans to increase precision of the estimates.
Affiliation(s)
- Maxwell L Elliott: Department of Psychology, Center for Brain Science, Harvard University, 52 Oxford Street, Northwest Laboratory 280.10, Cambridge, MA 02138, USA
- Lindsay C Hanford: Department of Psychology, Center for Brain Science, Harvard University, 52 Oxford Street, Northwest Laboratory 280.10, Cambridge, MA 02138, USA
- Aya Hamadeh: Baylor College of Medicine, Houston, TX 77030, USA
- Tom Hilbert: Advanced Clinical Imaging Technology, Siemens Healthineers International AG, Lausanne, Switzerland; Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; LTS5, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Tobias Kober: Advanced Clinical Imaging Technology, Siemens Healthineers International AG, Lausanne, Switzerland; Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; LTS5, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Bradford C Dickerson: Frontotemporal Disorders Unit, Massachusetts General Hospital, USA; Alzheimer's Disease Research Center, Massachusetts General Hospital, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, USA; Department of Neurology, Massachusetts General Hospital & Harvard Medical School, USA; Department of Psychiatry, Massachusetts General Hospital & Harvard Medical School, Charlestown, MA 02129, USA
- Ross W Mair: Department of Psychology, Center for Brain Science, Harvard University, 52 Oxford Street, Northwest Laboratory 280.10, Cambridge, MA 02138, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, USA
- Mark C Eldaief: Frontotemporal Disorders Unit, Massachusetts General Hospital, USA; Alzheimer's Disease Research Center, Massachusetts General Hospital, USA; Department of Neurology, Massachusetts General Hospital & Harvard Medical School, USA; Department of Psychiatry, Massachusetts General Hospital & Harvard Medical School, Charlestown, MA 02129, USA
- Randy L Buckner: Department of Psychology, Center for Brain Science, Harvard University, 52 Oxford Street, Northwest Laboratory 280.10, Cambridge, MA 02138, USA; Alzheimer's Disease Research Center, Massachusetts General Hospital, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, USA; Department of Psychiatry, Massachusetts General Hospital & Harvard Medical School, Charlestown, MA 02129, USA
8
Kim KM, Lee MS, Suh MS, Cheon GJ, Lee JS. Voxel-Based Internal Dosimetry for 177Lu-Labeled Radiopharmaceutical Therapy Using Deep Residual Learning. Nucl Med Mol Imaging 2023; 57:94-102. PMID: 36998593; PMCID: PMC10043146; DOI: 10.1007/s13139-022-00769-z.
Abstract
Purpose In this study, we propose a deep learning (DL)-based voxel-based dosimetry method in which dose maps acquired using the multiple voxel S-value (VSV) approach were used for residual learning. Methods Twenty-two SPECT/CT datasets from seven patients who underwent 177Lu-DOTATATE treatment were used in this study. The dose maps generated from Monte Carlo (MC) simulations were used as the reference approach and as the target images for network training. The multiple-VSV approach was used for residual learning and compared with the dose maps generated by deep learning. The conventional 3D U-Net network was modified for residual learning. The absorbed doses in the organs were calculated as the mass-weighted average over the volume of interest (VOI). Results The DL approach provided a slightly more accurate estimation than the multiple-VSV approach, but the difference was not statistically significant. The single-VSV approach yielded a relatively inaccurate estimation. No significant difference was noted between the multiple-VSV and DL approaches on the dose maps; however, the difference was prominent in the error maps. The multiple-VSV and DL approaches showed a similar correlation. In contrast, the multiple-VSV approach underestimated doses in the low-dose range, but this underestimation was corrected when the DL approach was applied. Conclusion Dose estimation using the deep learning-based approach was approximately equal to that of the MC simulation. Accordingly, the proposed deep learning network is useful for accurate and fast dosimetry after radiation therapy using 177Lu-labeled radiopharmaceuticals.
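For context, the voxel S-value approach computes a dose map as a 3D convolution of the time-integrated activity map with an S-value kernel, and the organ dose as a mass-weighted average over the VOI. A minimal sketch with synthetic arrays follows; the kernel values are placeholders, not actual 177Lu S-values.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(2)
activity = rng.random((64, 64, 64))       # time-integrated activity (arbitrary units)
svalue_kernel = rng.random((9, 9, 9))     # placeholder voxel S-value kernel
svalue_kernel /= svalue_kernel.sum()

# VSV dose estimate: convolve activity with the S-value kernel.
dose_map = fftconvolve(activity, svalue_kernel, mode="same")

voi = np.zeros_like(dose_map, dtype=bool); voi[20:40, 20:40, 20:40] = True
voxel_mass = np.full(dose_map.shape, 1.0)  # uniform density assumed here
organ_dose = np.average(dose_map[voi], weights=voxel_mass[voi])
print(organ_dose)
```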
Affiliation(s)
- Keon Min Kim: Interdisciplinary Program in Bioengineering, Seoul National University Graduate School, Seoul 03080, South Korea; Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul 03080, South Korea; Artificial Intelligence Institute, Seoul National University, Seoul 08826, South Korea
- Min Sun Lee: Environmental Radioactivity Assessment Team, Nuclear Emergency & Environmental Protection Division, Korea Atomic Energy Research Institute, Daejeon 34057, Korea
- Min Seok Suh: Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul 03080, South Korea
- Gi Jeong Cheon: Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul 03080, South Korea
- Jae Sung Lee: Interdisciplinary Program in Bioengineering, Seoul National University Graduate School, Seoul 03080, South Korea; Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul 03080, South Korea; Artificial Intelligence Institute, Seoul National University, Seoul 08826, South Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 03080, South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul 03080, South Korea
9
Zeng Q, Feng Z, Zhu Y, Zhang Y, Shu X, Wu A, Luo L, Cao Y, Xiong J, Li H, Zhou F, Jie Z, Tu Y, Li Z. Deep learning model for diagnosing early gastric cancer using preoperative computed tomography images. Front Oncol 2022; 12:1065934. PMID: 36531076; PMCID: PMC9748811; DOI: 10.3389/fonc.2022.1065934.
Abstract
BACKGROUND Early gastric cancer (EGC) is defined as a lesion restricted to the mucosa or submucosa, independent of size or evidence of regional lymph node metastases. Although computed tomography (CT) is the main technique for determining the stage of gastric cancer (GC), the accuracy of CT in determining the depth of tumor invasion in EGC remains unsatisfactory for radiologists. In this research, we attempted to construct an AI model to discriminate EGC in portal venous phase CT images. METHODS We retrospectively collected 658 GC patients from the First Affiliated Hospital of Nanchang University and divided them into training and internal validation cohorts at a ratio of 8:2. As the external validation cohort, 93 GC patients were recruited from the Second Affiliated Hospital of Soochow University. We developed several prediction models based on various convolutional neural networks and compared their predictive performance. RESULTS The deep learning model based on the ResNet101 neural network showed sufficient discrimination of EGC. In the two validation cohorts, the areas under the curves (AUCs) for the receiver operating characteristic (ROC) curves were 0.993 (95% CI: 0.984-1.000) and 0.968 (95% CI: 0.935-1.000), respectively, and the accuracies were 0.946 and 0.914. Additionally, the deep learning model could also differentiate between mucosa and submucosa tumors of EGC. CONCLUSIONS These results suggest that deep learning classifiers have the potential to be used as a screening tool for EGC, which is crucial in the individualized treatment of EGC patients.
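A minimal PyTorch sketch of the kind of ResNet101-based binary classifier described above, assuming a recent torchvision; the preprocessing, hyperparameters, and data pipeline here are illustrative assumptions, not the authors' training details.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone, re-headed for a 2-class problem (EGC vs. not).
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of CT patches.
images = torch.rand(4, 3, 224, 224)   # portal venous phase slices, 3-channel
labels = torch.tensor([0, 1, 0, 1])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(loss.item())
```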
Affiliation(s)
- Qingwen Zeng: Department of Gastrointestinal Surgery, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China; Institute of Digestive Surgery, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China; Medical Innovation Center, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Zongfeng Feng: Department of Gastrointestinal Surgery, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China; Institute of Digestive Surgery, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China
- Yanyan Zhu: Department of Radiology, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China
- Yang Zhang: Department of Gastrointestinal Surgery, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China; Institute of Digestive Surgery, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China
- Xufeng Shu: Department of Gastrointestinal Surgery, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China
- Ahao Wu: Department of Gastrointestinal Surgery, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China
- Lianghua Luo: Department of Gastrointestinal Surgery, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China
- Yi Cao: Department of Gastrointestinal Surgery, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China
- Jianbo Xiong: Department of Gastrointestinal Surgery, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China
- Hong Li: Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou, Jiangsu, China
- Fuqing Zhou: Department of Radiology, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China
- Zhigang Jie: Department of Gastrointestinal Surgery, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China; Institute of Digestive Surgery, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China
- Yi Tu: Department of Pathology, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China
- Zhengrong Li: Department of Gastrointestinal Surgery, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China; Institute of Digestive Surgery, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China
10
Dong J, Zhang Y, Meng Y, Yang T, Ma W, Wu H. Segmentation Algorithm of Magnetic Resonance Imaging Glioma under Fully Convolutional Densely Connected Convolutional Networks. Stem Cells Int 2022; 2022:8619690. PMID: 36299467; PMCID: PMC9592238; DOI: 10.1155/2022/8619690.
Abstract
This work focused on the application value of a magnetic resonance imaging (MRI) image segmentation algorithm based on a fully convolutional DenseNet neural network (FCDNN) in glioma diagnosis. In this work, based on the fully convolutional DenseNet algorithm, a new automatic MRI image semantic segmentation method, the cerebral glioma semantic segmentation network (CGSSNet), was established and applied to glioma MRI image segmentation using the BraTS public dataset as research data. Under the same conditions, the Dice similarity coefficient (DSC), sensitivity, and Hausdorff distance (HD) of this algorithm were compared with those of other algorithms in MRI image processing. The results showed that the CGSSNet segmentation algorithm significantly improved the segmentation accuracy of glioma MRI images. In addition, its DSC, sensitivity, and HD values for glioma MRI images were 0.937, 0.811, and 1.201, respectively. Across different numbers of iterations, the DSC, sensitivity, and HD values of the CGSSNet segmentation algorithm were significantly better than those of other algorithms. This showed that the CGSSNet model based on DenseNet can improve the segmentation accuracy of glioma MRI images and has potential application value in clinical practice.
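The two reported overlap metrics can be computed from binary label maps as follows; a minimal NumPy/SciPy sketch in which, for simplicity, the Hausdorff distance is taken over all foreground voxels rather than extracted surface points.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum())

def hausdorff(pred, truth):
    """Symmetric Hausdorff distance over foreground voxel coordinates."""
    p = np.argwhere(pred)
    t = np.argwhere(truth)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
truth = np.zeros((64, 64), bool); truth[22:42, 20:40] = True
print(dice(pred, truth), hausdorff(pred, truth))
```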
Affiliation(s)
- Jie Dong: School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450045, China
- Yueying Zhang: School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450045, China
- Yun Meng: Department of Magnetic Resonance, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Tingxiao Yang: School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450045, China
- Wei Ma: School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450045, China
- Huixin Wu: School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450045, China
11
Pettit RW, Marlatt BB, Corr SJ, Havelka J, Rana A. nnU-Net Deep Learning Method for Segmenting Parenchyma and Determining Liver Volume From Computed Tomography Images. Ann Surg Open 2022; 3:e155. PMID: 36275876; PMCID: PMC9585534; DOI: 10.1097/as9.0000000000000155.
Abstract
Background Recipient-donor matching in liver transplantation can require precise estimations of liver volume. Currently utilized demographic-based organ volume estimates are imprecise and nonspecific. Manual organ annotation from medical imaging is effective; however, this process is cumbersome, often taking an undesirable length of time to complete. Additionally, manual organ segmentation and volume measurement incur additional direct costs to payers for either a clinician or trained technician to complete. Deep learning-based automatic image segmentation tools are well positioned to address this clinical need. Objectives To build a deep learning model that could accurately estimate liver volumes and create 3D organ renderings from computed tomography (CT) medical images. Methods We trained a nnU-Net deep learning model to identify liver borders in images of the abdominal cavity. We used 151 publicly available CT scans. For each CT scan, a board-certified radiologist annotated the liver margins (ground truth annotations). We split our image dataset into training, validation, and test sets. We trained our nnU-Net model on these data to identify liver borders in 3D voxels and integrated these to reconstruct a total organ volume estimate. Results The nnU-Net model accurately identified the border of the liver with a mean overlap accuracy of 97.5% compared with ground truth annotations. Our calculated volume estimates achieved a mean percent error of 1.92% ± 1.54% on the test set. Conclusions Precise volume estimation of livers from CT scans is accurate using a nnU-Net deep learning architecture. Appropriately deployed, a nnU-Net algorithm is accurate and quick, making it suitable for incorporation into the pretransplant clinical decision-making workflow.
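Once the parenchyma is segmented, the volume integration step is simple: count labeled voxels and multiply by the per-voxel volume given by the scan spacing. A short sketch with illustrative spacing values and a stand-in mask follows.

```python
import numpy as np

def liver_volume_ml(mask, spacing_mm):
    """Total organ volume from a binary segmentation mask.

    spacing_mm: (z, y, x) voxel dimensions in millimetres.
    """
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

mask = np.zeros((200, 512, 512), dtype=bool)
mask[60:140, 150:350, 100:300] = True              # stand-in network output
print(liver_volume_ml(mask, spacing_mm=(2.5, 0.8, 0.8)))

# Percent error against a reference volume, as reported in the paper:
estimated, truth = 1510.0, 1480.0
print(100 * abs(estimated - truth) / truth)
```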
Affiliation(s)
- Rowland W. Pettit: Department of Medicine, Baylor College of Medicine, Houston, TX
- Stuart J. Corr: Department of Innovation Systems Engineering, Houston Methodist, Houston, TX; Department of Cardiovascular Surgery, Houston Methodist Hospital, Houston, TX; Department of Bioengineering, Rice University, Houston, TX; Department of Biomedical Engineering, University of Houston, Houston, TX; Swansea University Medical School, Wales, United Kingdom
- Abbas Rana: Department of Surgery, Division of Abdominal Transplantation, Baylor College of Medicine, Houston, TX
12
Ye N, Yang Q, Chen Z, Teng C, Liu P, Liu X, Xiong Y, Lin X, Li S, Li X. Classification of Gliomas and Germinomas of the Basal Ganglia by Transfer Learning. Front Oncol 2022; 12:844197. PMID: 35311111; PMCID: PMC8928458; DOI: 10.3389/fonc.2022.844197.
Abstract
Background Germ cell tumors (GCTs) are neoplasms derived from reproductive cells, mostly occurring in children and adolescents at 10 to 19 years of age. Intracranial GCTs are classified histologically into germinomas and non-germinomatous germ cell tumors. Germinomas of the basal ganglia are difficult to distinguish from gliomas based on symptoms or routine MRI images, even for experienced neurosurgeons or radiologists. Meanwhile, intracranial germinoma has a lower incidence rate than glioma in children and adults. Therefore, we established a model based on a pre-trained ResNet18 with transfer learning to better identify germinomas of the basal ganglia. Methods This retrospective study enrolled 73 patients diagnosed with germinoma or glioma of the basal ganglia. Brain lesions were manually segmented based on both T1C and T2 FLAIR sequences. The T1C sequence was used to build the tumor classification model. A 2D convolutional architecture and transfer learning were implemented. ResNet18 pretrained on ImageNet was retrained on the MRI images of our cohort. Class activation mapping was applied for model visualization. Results The model was trained using five-fold cross-validation, achieving a mean AUC of 0.88. By analyzing the class activation maps, we found that the model's attention was focused on the peri-tumoral edema region for gliomas and on the tumor bulk for germinomas, indicating that differences in these regions may help discriminate these tumors. Conclusions This study showed that the T1C-based transfer learning model could accurately distinguish germinomas from gliomas of the basal ganglia preoperatively.
Affiliation(s)
- Ningrong Ye: Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China; Hunan International Scientific and Technological Cooperation Base of Brain Tumor Research, Xiangya Hospital, Central South University, Changsha, China
- Qi Yang: Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China; Hunan International Scientific and Technological Cooperation Base of Brain Tumor Research, Xiangya Hospital, Central South University, Changsha, China
- Ziyan Chen: Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China; Hunan International Scientific and Technological Cooperation Base of Brain Tumor Research, Xiangya Hospital, Central South University, Changsha, China
- Chubei Teng: Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China; Hunan International Scientific and Technological Cooperation Base of Brain Tumor Research, Xiangya Hospital, Central South University, Changsha, China
- Peikun Liu: Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China; Hunan International Scientific and Technological Cooperation Base of Brain Tumor Research, Xiangya Hospital, Central South University, Changsha, China
- Xi Liu: Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China; Hunan International Scientific and Technological Cooperation Base of Brain Tumor Research, Xiangya Hospital, Central South University, Changsha, China
- Yi Xiong: Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China; Hunan International Scientific and Technological Cooperation Base of Brain Tumor Research, Xiangya Hospital, Central South University, Changsha, China
- Xuelei Lin: Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China; Hunan International Scientific and Technological Cooperation Base of Brain Tumor Research, Xiangya Hospital, Central South University, Changsha, China
- Shouwei Li: Department of Neurosurgery, Sanbo Brain Hospital, Capital Medical University, Beijing, China
- Xuejun Li: Hunan International Scientific and Technological Cooperation Base of Brain Tumor Research, Xiangya Hospital, Central South University, Changsha, China
13
Simultaneous brain structure segmentation in magnetic resonance images using deep convolutional neural networks. Radiol Phys Technol 2021; 14:358-365. PMID: 34338999; DOI: 10.1007/s12194-021-00633-3.
Abstract
In brain magnetic resonance imaging (MRI) examinations, rapidly acquired two-dimensional (2D) T1-weighted sagittal slices are typically used to confirm brainstem atrophy and the presence of signals in the posterior pituitary gland. Image segmentation is essential for the automatic evaluation of chronological changes in the brainstem and pituitary gland. Thus, the purpose of our study was to use deep learning to automatically segment internal organs (brainstem, corpus callosum, pituitary, cerebrum, and cerebellum) in midsagittal slices of 2D T1-weighted images. Deep learning for the automatic segmentation of seven regions in the images was accomplished using two different methods: patch-based segmentation and semantic segmentation. The networks used for patch-based segmentation were AlexNet, GoogLeNet, and ResNet50, whereas semantic segmentation was accomplished using SegNet, VGG16-weighted SegNet, and U-Net. The precision and Jaccard index were calculated, and the extraction accuracy of the six deep convolutional neural network (DCNN) systems was evaluated. The highest precision (0.974) was obtained with the VGG16-weighted SegNet, and the lowest precision (0.506) was obtained with ResNet50. Based on the data, calculation times, and Jaccard indices obtained in this study, segmentation on a 2D image may be considered a viable and effective approach. We found that the optimal automatic segmentation of organs (brainstem, corpus callosum, pituitary, cerebrum, and cerebellum) on brain sagittal T1-weighted images could be achieved using VGG16-weighted SegNet.
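Both reported metrics can be computed per structure from predicted and reference label maps; a minimal NumPy sketch follows, where the integer label encoding is an assumption for illustration.

```python
import numpy as np

def precision_and_jaccard(pred, truth, label):
    """Per-class precision and Jaccard index from integer label maps."""
    p, t = pred == label, truth == label
    tp = np.logical_and(p, t).sum()
    precision = tp / p.sum() if p.sum() else 0.0
    jaccard = tp / np.logical_or(p, t).sum()
    return precision, jaccard

# Labels 1-5 standing in for brainstem, corpus callosum, pituitary,
# cerebrum, and cerebellum on a midsagittal slice.
rng = np.random.default_rng(3)
pred = rng.integers(0, 6, size=(256, 256))
truth = rng.integers(0, 6, size=(256, 256))
for label in range(1, 6):
    print(label, precision_and_jaccard(pred, truth, label))
```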
14
Accurate Transmission-Less Attenuation Correction Method for Amyloid-β Brain PET Using Deep Neural Network. Electronics 2021. DOI: 10.3390/electronics10151836.
Abstract
The lack of physically measured attenuation maps (μ-maps) for attenuation and scatter correction is an important technical challenge in brain-dedicated stand-alone positron emission tomography (PET) scanners. The accuracy of calculated attenuation correction is limited by the nonuniformity of tissue composition due to pathologic conditions and the complex structure of facial bones. The aim of this study is to develop an accurate transmission-less attenuation correction method for amyloid-β (Aβ) brain PET studies. We investigated the validity of a deep convolutional neural network trained to produce a CT-derived μ-map (μ-CT) from activity and attenuation maps simultaneously reconstructed using the MLAA (maximum likelihood reconstruction of activity and attenuation) algorithm for Aβ brain PET. The performance of three different U-net structures (2D, 2.5D, and 3D) was compared. The U-net models generated less noisy and more uniform μ-maps than the MLAA μ-maps. Among the three U-net models, the patch-based 3D U-net reduced noise and cross-talk artifacts most effectively. The Dice similarity coefficients between the μ-map generated using the 3D U-net and μ-CT in bone and air segments were 0.83 and 0.67, respectively. All three U-net models showed better voxel-wise correlation of the μ-maps than MLAA, with the patch-based 3D U-net performing best. While the MLAA uptake values yielded high percentage errors of 20% or more, the 3D U-net yielded the lowest percentage errors, within 5%. The proposed deep learning approach, which requires no transmission data, anatomic image, or atlas/template for PET attenuation correction, remarkably enhanced the quantitative accuracy of the simultaneously estimated MLAA μ-maps from Aβ brain PET.
15
Abstract
The significant statistical noise and limited spatial resolution of positron emission tomography (PET) data in sinogram space result in degradation of the quality and accuracy of reconstructed images. Although high-dose radiotracers and long acquisition times improve PET image quality, they increase patients' radiation exposure and make patient motion during the scan more likely. Recently, various data-driven techniques based on supervised deep neural network learning have made remarkable progress in reducing noise in images. However, these conventional techniques require clean target images, which are of limited availability for PET denoising. Therefore, in this study, we utilized the Noise2Noise framework, which requires only noisy image pairs for network training, to reduce the noise in PET images. A trainable wavelet transform was proposed to improve the performance of the network. The proposed network was fed wavelet-decomposed images consisting of low- and high-pass components, and the inverse wavelet transforms of the network output produced denoised images. The proposed Noise2Noise filter with wavelet transforms outperforms the original Noise2Noise method in the suppression of artefacts and the preservation of abnormal uptakes. The quantitative analysis of simulated PET uptake confirms the improved performance of the proposed method compared with the original Noise2Noise technique. In the clinical data, 10 s images filtered with Noise2Noise are virtually equivalent to 300 s images filtered with a 6 mm Gaussian filter. The incorporation of wavelet transforms in Noise2Noise network training results in improved image contrast. In conclusion, the performance of Noise2Noise filtering for PET images was improved by incorporating the trainable wavelet transform in the self-supervised deep learning framework.
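A compact sketch of the Noise2Noise idea with a fixed Haar wavelet front end standing in for the paper's trainable transform; the toy network, synthetic data, and single training step are placeholders (PyWavelets and PyTorch assumed), not the authors' implementation.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

net = nn.Sequential(  # toy denoiser operating on 4 wavelet sub-bands
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 4, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def wavelet_channels(img):
    """Stack the low- and high-pass components as network input channels."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    return torch.tensor(np.stack([cA, cH, cV, cD])[None], dtype=torch.float32)

# Noise2Noise: two independently noisy realizations of the same frame,
# no clean target required.
clean = np.random.rand(128, 128)
noisy_a = clean + 0.1 * np.random.randn(128, 128)
noisy_b = clean + 0.1 * np.random.randn(128, 128)

opt.zero_grad()
loss = nn.functional.mse_loss(net(wavelet_channels(noisy_a)),
                              wavelet_channels(noisy_b))
loss.backward()
opt.step()
# At inference, pywt.idwt2 on the network's sub-band output returns the image.
```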
16
Kang SK, Lee JS. Anatomy-guided PET reconstruction using l1 Bowsher prior. Phys Med Biol 2021; 66. PMID: 33780912; DOI: 10.1088/1361-6560/abf2f7.
Abstract
Advances in simultaneous positron emission tomography/magnetic resonance imaging (PET/MRI) technology have led to active investigation of anatomy-guided regularized PET image reconstruction algorithms based on MR images. Among the various priors proposed for anatomy-guided regularized PET image reconstruction, Bowsher's method based on second-order smoothing priors sometimes suffers from over-smoothing of detailed structures. Therefore, in this study, we propose a Bowsher prior based on the l1-norm and an iterative reweighting scheme to overcome the limitations of the original Bowsher method. In addition, we derived a closed-form solution for iterative image reconstruction based on this non-smooth prior. A comparison study between the original l2 and proposed l1 Bowsher priors was conducted using computer simulation and real human data. In the simulation and real data application, small lesions with abnormal PET uptake were better detected by the proposed l1 Bowsher prior methods than by the original Bowsher prior. The original l2 Bowsher prior leads to a decreased PET intensity in small lesions when there is no clear separation between the lesions and surrounding tissue in the anatomical prior. However, the proposed l1 Bowsher prior methods showed better contrast between the tumors and surrounding tissues owing to the intrinsic edge-preserving property of the prior, which is attributed to the sparseness induced by the l1-norm, especially in the iterative reweighting scheme. In addition, the proposed methods demonstrated lower bias and less hyper-parameter dependency in PET intensity estimation in regions with matched anatomical boundaries in PET and MRI. Therefore, these methods will be useful for improving PET image quality based on anatomical side information.
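To make the prior concrete: Bowsher's method penalizes, for each voxel, intensity differences to only the B neighbors that are most similar in the anatomical MR image, and the proposed variant replaces squared differences with absolute (l1) ones. A small 2D NumPy sketch of evaluating such a penalty; the neighborhood, B, and brute-force loop are illustrative simplifications.

```python
import numpy as np

def l1_bowsher_penalty(pet, mri, B=3):
    """Sum of |PET_j - PET_k| over the B MR-most-similar 8-neighbors
    of each interior pixel."""
    h, w = pet.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    penalty = 0.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neighbors = [(abs(mri[i + di, j + dj] - mri[i, j]),
                          abs(pet[i + di, j + dj] - pet[i, j]))
                         for di, dj in offsets]
            neighbors.sort(key=lambda nb: nb[0])  # rank by MR similarity
            penalty += sum(d_pet for _, d_pet in neighbors[:B])
    return penalty

rng = np.random.default_rng(4)
pet, mri = rng.random((32, 32)), rng.random((32, 32))
print(l1_bowsher_penalty(pet, mri))
```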
Affiliation(s)
- Seung Kwan Kang: Department of Nuclear Medicine, Seoul National University Hospital, Seoul 03080, Republic of Korea; Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Brightonix Imaging Inc., Seoul 04793, Republic of Korea
- Jae Sung Lee: Department of Nuclear Medicine, Seoul National University Hospital, Seoul 03080, Republic of Korea; Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Brightonix Imaging Inc., Seoul 04793, Republic of Korea