1
Kuang X, Li B, Lyu T, Xue Y, Huang H, Xie Q, Zhu W. PET image reconstruction using weighted nuclear norm maximization and deep learning prior. Phys Med Biol 2024; 69:215023. [PMID: 39374634] [DOI: 10.1088/1361-6560/ad841d]
Abstract
The ill-posed positron emission tomography (PET) reconstruction problem usually results in limited resolution and significant noise. Recently, deep neural networks have been incorporated into the PET iterative reconstruction framework to improve image quality. In this paper, we propose a new neural network-based iterative reconstruction method using weighted nuclear norm (WNN) maximization, which aims to recover image details during the reconstruction process. The novelty of our method is the application of WNN maximization rather than WNN minimization in PET image reconstruction. Meanwhile, a neural network is used to control the noise originating from WNN maximization. Our method is evaluated on simulated and clinical datasets. The simulation results show that the proposed approach outperforms state-of-the-art neural network-based iterative methods by achieving the best contrast/noise tradeoff, with a remarkable improvement in lesion contrast recovery. The study on clinical datasets also demonstrates that our method can recover lesions of different sizes while suppressing noise in various low-dose PET image reconstruction tasks. Our code is available at https://github.com/Kuangxd/PETReconstruction.
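For orientation, the sketch below evaluates a weighted nuclear norm over a group of similar patches via the SVD. It is a minimal illustration only, not the authors' released implementation; the patch grouping, the weights and the toy data are assumed placeholders.

```python
import numpy as np

def weighted_nuclear_norm(patch_matrix, weights=None):
    """Weighted nuclear norm: weighted sum of the singular values of a
    matrix whose columns are vectorized similar image patches."""
    s = np.linalg.svd(patch_matrix, compute_uv=False)
    if weights is None:
        weights = np.ones_like(s)  # all-ones weights reduce to the plain nuclear norm
    return float(np.sum(weights * s))

# Toy patch group: a rank-1 structure plus a little noise.
rng = np.random.default_rng(0)
group = rng.normal(size=(64, 1)) @ rng.normal(size=(1, 20))
group += 0.01 * rng.normal(size=group.shape)
print(weighted_nuclear_norm(group))
```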
Affiliation(s)
- Xiaodong Kuang
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Bingxuan Li
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, People's Republic of China
- Tianling Lyu
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Yitian Xue
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Hailiang Huang
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Qingguo Xie
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, People's Republic of China
- Wentao Zhu
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
2
Zhang Q, Hu Y, Zhao Y, Cheng J, Fan W, Hu D, Shi F, Cao S, Zhou Y, Yang Y, Liu X, Zheng H, Liang D, Hu Z. Deep Generalized Learning Model for PET Image Reconstruction. IEEE Trans Med Imaging 2024; 43:122-134. [PMID: 37428658] [DOI: 10.1109/tmi.2023.3293836]
Abstract
Low-count positron emission tomography (PET) imaging is challenging because of the ill-posedness of this inverse problem. Previous studies have demonstrated that deep learning (DL) holds promise for achieving improved low-count PET image quality. However, almost all data-driven DL methods suffer from fine-structure degradation and blurring effects after denoising. Incorporating DL into the traditional iterative optimization model can effectively improve image quality and recover fine structures, but little research has considered a full relaxation of the model, so the performance of this hybrid approach has not been sufficiently exploited. In this paper, we propose a learning framework that deeply integrates DL with an alternating direction method of multipliers (ADMM)-based iterative optimization model. The innovative feature of this method is that we break the inherent forms of the fidelity operators and use neural networks to process them. The regularization term is deeply generalized. The proposed method is evaluated on simulated data and real data. Both the qualitative and quantitative results show that our proposed neural network method can outperform partial operator expansion-based neural network methods, neural network denoising methods and traditional methods.
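The abstract does not specify the operator splitting or the network architectures, so the following is only a generic plug-and-play ADMM sketch under assumed placeholders: a linear forward operator with a least-squares data term, and a simple Gaussian denoiser standing in for the learned, generalized regularizer.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def admm_pnp(A, b, shape, rho=1.0, iters=50,
             denoise=lambda v: gaussian_filter(v, sigma=1.0)):
    """Generic plug-and-play ADMM: the x-update solves a regularized
    least-squares problem, the z-update applies a denoiser that stands in
    for the learned prior, and u is the scaled dual variable."""
    n = int(np.prod(shape))
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))   # data-fidelity step
        z = denoise((x + u).reshape(shape)).ravel()     # prior (denoising) step
        u = u + x - z                                   # dual update
    return x.reshape(shape)

# Toy 8x8 problem with a random forward operator.
rng = np.random.default_rng(1)
shape = (8, 8)
A = rng.normal(size=(40, 64))
x_true = np.zeros(shape); x_true[2:6, 2:6] = 1.0
b = A @ x_true.ravel() + 0.05 * rng.normal(size=40)
x_rec = admm_pnp(A, b, shape)
```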
3
Li J, Xi C, Dai H, Wang J, Lv Y, Zhang P, Zhao J. Enhanced PET imaging using progressive conditional deep image prior. Phys Med Biol 2023; 68:175047. [PMID: 37582392] [DOI: 10.1088/1361-6560/acf091]
Abstract
Objective. Unsupervised learning-based methods have been proven to be an effective way to improve the image quality of positron emission tomography (PET) images when a large dataset is not available. However, when the gap between the input image and the target PET image is large, direct unsupervised learning can be challenging and easily lead to reduced lesion detectability. We aim to develop a new unsupervised learning method to improve lesion detectability in patient studies. Approach. We applied the deep progressive learning strategy to bridge the gap between the input image and the target image. The one-step unsupervised learning is decomposed into two unsupervised learning steps. The input of the first network is an anatomical image, and the input of the second network is a PET image with a low noise level. The output of the first network is also used as the prior image to generate the target image of the second network via an iterative reconstruction method. Results. The performance of the proposed method was evaluated through phantom and patient studies and compared with non-deep-learning, supervised learning and unsupervised learning methods. The results showed that the proposed method was superior to the non-deep-learning and unsupervised methods and was comparable to the supervised method. Significance. A progressive unsupervised learning method was proposed, which can improve image noise performance and lesion detectability.
Affiliation(s)
- Jinming Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- United Imaging Healthcare, Shanghai, People's Republic of China
- Chen Xi
- United Imaging Healthcare, Shanghai, People's Republic of China
- Houjiao Dai
- United Imaging Healthcare, Shanghai, People's Republic of China
- Jing Wang
- Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Shaanxi, Xi'an, People's Republic of China
- Yang Lv
- United Imaging Healthcare, Shanghai, People's Republic of China
- Puming Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
4
Sun H, Jiang Y, Yuan J, Wang H, Liang D, Fan W, Hu Z, Zhang N. High-quality PET image synthesis from ultra-low-dose PET/MRI using bi-task deep learning. Quant Imaging Med Surg 2022; 12:5326-5342. [PMID: 36465830] [PMCID: PMC9703111] [DOI: 10.21037/qims-22-116]
Abstract
BACKGROUND Lowering the dose for positron emission tomography (PET) imaging reduces patients' radiation burden but decreases image quality by increasing noise and reducing imaging detail and quantification accuracy. This paper introduces a method for acquiring high-quality PET images from an ultra-low-dose state to achieve both high-quality images and a low radiation burden. METHODS We developed a two-task-based end-to-end generative adversarial network, named bi-c-GAN, that incorporated the advantages of the PET and magnetic resonance imaging (MRI) modalities to synthesize high-quality PET images from an ultra-low-dose input. Moreover, a combined loss, including the mean absolute error, structural loss, and bias loss, was created to improve the trained model's performance. Real integrated PET/MRI data from the axial head regions of 67 patients (161 slices each) were used for training and validation. Synthesized images were quantified by the peak signal-to-noise ratio (PSNR), normalized mean square error (NMSE), structural similarity (SSIM), and contrast-to-noise ratio (CNR). The improvement ratios of these four quantitative metrics were used to compare the images produced by bi-c-GAN with those of other methods. RESULTS In the four-fold cross-validation, the proposed bi-c-GAN outperformed the other three selected methods (U-net, c-GAN, and multiple-input c-GAN). With bi-c-GAN, in 5% low-dose PET, the image quality was higher than that of the other three methods by at least 6.7% in PSNR, 0.6% in SSIM, 1.3% in NMSE, and 8% in CNR. In the hold-out validation, bi-c-GAN improved the image quality compared to U-net and c-GAN in both 2.5% and 10% low-dose PET. For example, the PSNR improvement obtained with bi-c-GAN was at least 4.46% in 2.5% low-dose PET and up to 14.88% in 10% low-dose PET. Visual examples also showed the higher quality of images generated by the proposed method, demonstrating the denoising and enhancement ability of bi-c-GAN. CONCLUSIONS By taking advantage of integrated PET/MR images and multitask deep learning (MDL), the proposed bi-c-GAN can efficiently improve the image quality of ultra-low-dose PET and reduce radiation exposure.
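For reference, the sketch below gives straightforward definitions of two of the reported metrics, PSNR and NMSE. The paper's exact conventions (peak value, normalization, and the SSIM/CNR implementations) are not stated in the abstract, so these definitions are assumptions.

```python
import numpy as np

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB, taking the reference maximum as the peak."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

def nmse(ref, test):
    """Normalized mean square error relative to the reference energy."""
    return np.sum((ref - test) ** 2) / np.sum(ref ** 2)

# Toy comparison between a "full-dose" image and a noisy synthesis.
rng = np.random.default_rng(0)
full_dose = rng.random((128, 128))
synthesized = full_dose + 0.02 * rng.normal(size=full_dose.shape)
print(psnr(full_dose, synthesized), nmse(full_dose, synthesized))
```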
Affiliation(s)
- Hanyu Sun
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yongluo Jiang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Jianmin Yuan
- Central Research Institute, Shanghai United Imaging Healthcare, Shanghai, China
- Haining Wang
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.,United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.,United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
5
Eliminating CT radiation for clinical PET examination using deep learning. Eur J Radiol 2022; 154:110422. [DOI: 10.1016/j.ejrad.2022.110422]
6
Zhou H, Liu X, Wang H, Chen Q, Wang R, Pang ZF, Zhang Y, Hu Z. The synthesis of high-energy CT images from low-energy CT images using an improved cycle generative adversarial network. Quant Imaging Med Surg 2022; 12:28-42. [PMID: 34993058] [DOI: 10.21037/qims-21-182]
Abstract
Background The dose of radiation a patient receives when undergoing dual-energy computed tomography (CT) is of significant concern to the medical community, and balancing the tradeoffs between the level of radiation used and the quality of CT images is challenging. This paper proposes a method of synthesizing high-energy CT (HECT) images from low-energy CT (LECT) images using a neural network, which provides an alternative to HECT scanning by employing an LECT scan and thereby greatly reduces the radiation dose a patient receives. Methods In the training phase, the proposed structure cyclically generates HECT and LECT images to improve the accuracy of extracting edge and texture features. Specifically, we combine multiple connection methods with channel attention (CA) and pixel attention (PA) mechanisms to improve the network's ability to map image features. In the prediction phase, we use a model consisting of only the network component that synthesizes HECT images from LECT images. Results Our proposed method was evaluated on clinical hip CT image datasets from Guizhou Provincial People's Hospital. In a comparison with other available methods [a generative adversarial network (GAN), a residual encoder-to-decoder network with a visual geometry group (VGG) pretrained model (RED-VGG), a Wasserstein GAN (WGAN), and CycleGAN] in terms of the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), normalized mean square error (NMSE), and a visual evaluation, the proposed method performed better on each of these criteria. Compared with the results produced by CycleGAN, the proposed method improved the PSNR by 2.44%, the SSIM by 1.71%, and the NMSE by 15.2%. Furthermore, the differences in the statistical indicators are statistically significant, confirming the strength of the proposed method. Conclusions The proposed method synthesizes high-energy CT images from low-energy CT images, which significantly reduces both the cost of treatment and the radiation dose received by patients. Based on both image quality metrics and visual comparisons, the results of the proposed method are superior to those obtained by other methods.
Affiliation(s)
- Haojie Zhou
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.,College of Software, Henan University, Kaifeng, China
- Xinfeng Liu
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang, China
- Haiyan Wang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qihang Chen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Rongpin Wang
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang, China
- Zhi-Feng Pang
- College of Mathematics and Statistics, Henan University, Kaifeng, China
- Yong Zhang
- Department of Orthopaedic, Shenzhen University General Hospital, Shenzhen, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
7
Ouyang Z, Zhao S, Cheng Z, Duan Y, Chen Z, Zhang N, Liang D, Hu Z. Dynamic PET Imaging Using Dual Texture Features. Front Comput Neurosci 2022; 15:819840. [PMID: 35069162] [PMCID: PMC8782430] [DOI: 10.3389/fncom.2021.819840]
Abstract
Purpose: This study aims to explore the impact of adding texture features to dynamic positron emission tomography (PET) reconstruction on the imaging results. Methods: We improved a reconstruction method that combines dual radiomic texture features. In this method, multiple short time frames are summed to obtain composite frames, and the image reconstructed from the composite frames is used as the prior image. We extract texture features from the prior images using the gray level-gradient co-occurrence matrix (GGCM) and gray-level run-length matrix (GLRLM). The prior information contains the intensity of the prior image, the inverse difference moment of the GGCM and the long-run low gray-level emphasis of the GLRLM. Results: The computer simulation results show that, compared with traditional maximum-likelihood reconstruction, the proposed method obtains a higher signal-to-noise ratio (SNR) in dynamically reconstructed PET images. Compared with similar methods, the proposed algorithm achieves a better normalized mean squared error (NMSE) and contrast recovery coefficient (CRC) at the tumor in the reconstructed image. Simulation studies on clinical patient images show that this method also reconstructs high-uptake lesions more accurately. Conclusion: By adding texture features to dynamic PET reconstruction, the reconstructed images are more accurate at the tumor.
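As a simplified stand-in for the texture priors described above, the sketch below computes a plain gray-level co-occurrence matrix and its inverse difference moment. The paper's GGCM (gray level-gradient) and GLRLM features differ from this, and the quantization, offset and toy prior image are assumed placeholders.

```python
import numpy as np

def glcm(img, levels=8, dy=0, dx=1):
    """Normalized gray-level co-occurrence matrix for one non-negative offset:
    counts how often quantized level i occurs next to level j."""
    q = np.floor(img / (img.max() + 1e-12) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    h, w = q.shape
    a = q[:h - dy, :w - dx]
    b = q[dy:, dx:]
    mat = np.zeros((levels, levels))
    np.add.at(mat, (a.ravel(), b.ravel()), 1)
    return mat / mat.sum()

def inverse_difference_moment(p):
    """Homogeneity-type texture feature of a co-occurrence matrix."""
    i, j = np.indices(p.shape)
    return float(np.sum(p / (1.0 + (i - j) ** 2)))

rng = np.random.default_rng(0)
prior_image = rng.random((64, 64))
print(inverse_difference_moment(glcm(prior_image)))
```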
Affiliation(s)
- Zhanglei Ouyang
- School of Physics, Zhengzhou University, Zhengzhou, China
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shujun Zhao
- School of Physics, Zhengzhou University, Zhengzhou, China
- Zhaoping Cheng
- Department of PET/CT, The First Affiliated Hospital of Shandong First Medical University, Shandong Provincial Qianfoshan Hospital, Jinan, China
- Yanhua Duan
- Department of PET/CT, The First Affiliated Hospital of Shandong First Medical University, Shandong Provincial Qianfoshan Hospital, Jinan, China
- Zixiang Chen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
8
Efficient Strike Artifact Reduction Based on 3D-Morphological Structure Operators from Filtered Back-Projection PET Images. Sensors 2021; 21:7228. [PMID: 34770534] [PMCID: PMC8587286] [DOI: 10.3390/s21217228]
Abstract
Positron emission tomography (PET) can provide functional images and identify abnormal metabolic regions of the whole body to effectively detect tumor presence and distribution. The filtered back-projection (FBP) algorithm is one of the most common image reconstruction methods. However, it generates strike artifacts in the reconstructed image, which affect the clinical diagnosis of lesions. Past studies have shown that two-dimensional morphological structure operators (2D-MSO) can reduce strike artifacts and improve image quality. However, this morphological structure method only processes the noise distribution in 2D space and does not consider the noise distribution in 3D space. This study was designed to develop three-dimensional morphological structure operators (3D-MSO) for nuclear medicine imaging that effectively eliminate strike artifacts without reducing image quality. A parallel operation was also used to calculate the minimum background standard deviation of the images for the three-dimensional morphological structure operators with the optimal response curve (3D-MSO/ORC). In verification with a Jaszczak phantom and rat data, 3D-MSO/ORC showed better denoising performance and image quality than the 2D-MSO method. Thus, 3D-MSO/ORC with a 3 × 3 × 3 mask can reduce noise efficiently and provide stability in FBP images.
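The exact 3D-MSO/ORC operator and its optimal response curve are not given in the abstract. As a simplified stand-in, the sketch below applies 3-D grey-scale opening and closing with a 3 × 3 × 3 neighborhood, the basic operations on which such morphological artifact-suppression filters are built; the toy volume and filter order are assumptions.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def morphological_filter_3d(volume, size=(3, 3, 3)):
    """Grey-scale opening followed by closing with a 3x3x3 neighborhood,
    a basic 3-D morphological smoothing of streak-like noise."""
    opened = grey_opening(volume, size=size)
    return grey_closing(opened, size=size)

# Toy volume with a bright blob embedded in noise.
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.1, size=(32, 32, 32))
vol[12:20, 12:20, 12:20] += 1.0
filtered = morphological_filter_3d(vol)
print(np.std(vol), np.std(filtered))
```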
9
Gong K, Kim K, Cui J, Wu D, Li Q. The Evolution of Image Reconstruction in PET: From Filtered Back-Projection to Artificial Intelligence. PET Clin 2021; 16:533-542. [PMID: 34537129] [DOI: 10.1016/j.cpet.2021.06.004]
Abstract
PET can provide functional images revealing physiologic processes in vivo. Although PET has many applications, some limitations still compromise its precision: the absorption of photons in the body causes signal attenuation; the dead-time limit of system components leads to loss of count rate; the scattered and random events received by the detector introduce additional noise; the characteristics of the detector limit the spatial resolution; and a low signal-to-noise ratio arises from scan-time limits (e.g., dynamic scans) and dose concerns. The early PET reconstruction methods were analytical approaches based on an idealized mathematical model.
Affiliation(s)
- Kuang Gong
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Kyungsang Kim
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jianan Cui
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Dufan Wu
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Quanzheng Li
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
10
Gao D, Zhang X, Zhou C, Fan W, Zeng T, Yang Q, Yuan J, He Q, Liang D, Liu X, Yang Y, Zheng H, Hu Z. MRI-aided kernel PET image reconstruction method based on texture features. Phys Med Biol 2021; 66. [PMID: 34192685] [DOI: 10.1088/1361-6560/ac1024]
Abstract
We investigate the reconstruction of low-count positron emission tomography (PET) projection data, which is an important but challenging task. Using the texture feature extraction method of radiomics, i.e. the gray-level co-occurrence matrix (GLCM), texture features can be extracted from magnetic resonance (MR) images with high spatial resolution. In this work, we propose a kernel reconstruction method combining autocorrelation texture features derived from the GLCM. The new kernel function includes the correlations of both the intensity and texture features from the prior image. By regarding the GLCM as a discrete approximation of a probability density function, an asymptotically gray-level-invariant autocorrelation texture feature is generated, which can maintain the accuracy of texture features extracted from small image regions by reducing the number of quantized image gray levels. A computer simulation shows that the proposed method can effectively reduce the noise in the reconstructed image compared to the maximum likelihood expectation maximization (MLEM) method and improve the image quality and tumor region accuracy compared to the original kernel method for low-count PET reconstruction. A simulation study on clinical patient images also shows that the proposed method can improve whole-image quality and that a high-uptake lesion is reconstructed more accurately than with the original kernel method.
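In the kernel framework the image is represented as x = Kα, with K built from prior-image features, and the usual MLEM update is applied to the coefficients α. The sketch below shows that kernelized EM update on a toy system; the forward model, kernel construction and data here are assumed placeholders rather than the paper's GLCM-based kernel.

```python
import numpy as np

def kernelized_em(P, y, K, iters=30):
    """Kernelized EM: represent the image as x = K @ alpha and apply the
    standard MLEM multiplicative update to the coefficients alpha."""
    alpha = np.ones(K.shape[1])
    PK = P @ K
    sens = PK.T @ np.ones(len(y)) + 1e-12       # sensitivity image for P @ K
    for _ in range(iters):
        ratio = y / (PK @ alpha + 1e-12)        # measured / expected counts
        alpha *= (PK.T @ ratio) / sens
    return K @ alpha

# Toy problem: with K = identity this reduces to plain MLEM.
rng = np.random.default_rng(0)
n, m = 16, 24
P = rng.random((m, n))                          # nonnegative toy system matrix
x_true = np.ones(n); x_true[5:9] = 4.0
y = rng.poisson(P @ x_true).astype(float)       # Poisson projection data
x_rec = kernelized_em(P, y, np.eye(n))
```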
Affiliation(s)
- Dongfang Gao
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China.,Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen 518055, People's Republic of China
- Xu Zhang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou 510060, People's Republic of China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou 510060, People's Republic of China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou 510060, People's Republic of China
- Tianyi Zeng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China.,Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen 518055, People's Republic of China
- Qian Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China.,Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen 518055, People's Republic of China
- Jianmin Yuan
- Central Research Institute, Shanghai United Imaging Healthcare, Shanghai 201807, People's Republic of China
- Qiang He
- Central Research Institute, Shanghai United Imaging Healthcare, Shanghai 201807, People's Republic of China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China.,Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen 518055, People's Republic of China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China.,Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen 518055, People's Republic of China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China.,Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen 518055, People's Republic of China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China.,Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen 518055, People's Republic of China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China.,Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen 518055, People's Republic of China
11
Wang H, Huang Z, Zhang Q, Gao D, OuYang Z, Liang D, Liu X, Yang Y, Zheng H, Hu Z. Technical note: A preliminary study of dual-tracer PET image reconstruction guided by FDG and/or MR kernels. Med Phys 2021; 48:5259-5271. [PMID: 34252216] [DOI: 10.1002/mp.15089]
Abstract
PURPOSE Clinically, single-radiotracer positron emission tomography (PET) imaging is a commonly used examination method; however, since each radioactive tracer reflects the information of only one kind of cell, it easily causes false negatives or false positives in disease diagnosis. Therefore, reasonably combining two or more radiotracers is recommended, when conditions permit, to improve the accuracy of diagnosis and the sensitivity and specificity for the disease. METHODS This paper proposes incorporating 18F-fluorodeoxyglucose (FDG) as a higher-quality PET image to guide the reconstruction of lower-count 11C-methionine (MET) PET datasets, compensating for the lower image quality with a popular kernel algorithm. Specifically, the FDG prior is used to extract kernel features, and these features are used to build a kernel matrix through a k-nearest-neighbor (kNN) search for MET image reconstruction. We created a 2-D brain phantom to validate the proposed method by simulating sinogram data containing Poisson random noise, and we quantitatively compared the performance of the proposed FDG-guided kernelized expectation maximization (KEM) method with that of Gaussian- and non-local means (NLM)-smoothed maximum likelihood expectation maximization (MLEM), MR-guided KEM, and multi-guided-S KEM algorithms. Mismatch experiments between FDG/MR and MET data were also carried out to investigate possible clinical situations. RESULTS In the simulation study, the proposed method outperformed the other algorithms by at least 3.11% in the signal-to-noise ratio (SNR) and 0.68% in the contrast recovery coefficient (CRC), and it reduced the mean absolute error (MAE) by 8.07%. Regarding the tumor in the reconstructed image, the proposed method preserved more pathological information. Furthermore, the proposed method remained superior to the MR-guided KEM method in the mismatch experiments. CONCLUSIONS The proposed FDG-guided KEM algorithm can effectively utilize and compensate for the tissue metabolism information obtained from dual-tracer PET to maximize the advantages of PET imaging.
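Below is a minimal sketch of the kNN-based kernel matrix construction described above, using scikit-learn for the neighbor search. The feature extraction from the FDG prior, the neighborhood size k and the Gaussian width sigma are assumed placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_kernel_matrix(prior_features, k=10, sigma=1.0):
    """Kernel matrix from a prior image: each pixel is linked to its k nearest
    neighbors in feature space with Gaussian weights; rows are normalized."""
    nn = NearestNeighbors(n_neighbors=k).fit(prior_features)
    dist, idx = nn.kneighbors(prior_features)
    n = prior_features.shape[0]
    K = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    K[rows, idx.ravel()] = np.exp(-dist.ravel() ** 2 / (2.0 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

# Toy prior: 9-dimensional patch features for 400 pixels of an FDG-like image.
rng = np.random.default_rng(0)
prior_features = rng.random((400, 9))
K = knn_kernel_matrix(prior_features)
```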
Affiliation(s)
- Haiyan Wang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.,Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing, China
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dongfang Gao
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhanglei OuYang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.,Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.,Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.,Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.,Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.,Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
12
Jiang C, Zhang X, Zhang N, Zhang Q, Zhou C, Yuan J, He Q, Yang Y, Liu X, Zheng H, Fan W, Hu Z, Liang D. Synthesizing PET/MR (T1-weighted) images from non-attenuation-corrected PET images. Phys Med Biol 2021; 66. [PMID: 34098534] [DOI: 10.1088/1361-6560/ac08b2]
Abstract
Positron emission tomography (PET) imaging can be used for early detection, diagnosis and postoperative patient monitoring of many diseases. Traditional PET imaging requires not only additional computed tomography (CT) or magnetic resonance (MR) imaging to provide anatomical information but also calculation of an attenuation correction (AC) map based on the CT or MR images for accurate quantitative estimation. During a patient's treatment, PET/CT or PET/MR scans are inevitably repeated many times, leading to additional doses of ionizing radiation (CT scans) and additional economic and time costs (MR scans). To reduce these adverse effects while obtaining high-quality PET/MR images over the course of a patient's treatment, especially when evaluating the effect of postoperative treatment, in this work we propose a new deep learning-based method that can directly obtain synthetic attenuation-corrected PET (sAC PET) and synthetic T1-weighted MR (sMR) images based only on non-attenuation-corrected PET (NAC PET) images. Our model, based on the Wasserstein generative adversarial network, first removes noise and artifacts from the NAC PET images to generate sAC PET images and then generates sMR images from the obtained sAC PET images. To evaluate the performance of this generative model, we evaluated it on paired PET/MR images from a total of eighty clinical patients. Based on qualitative and quantitative analysis, the generated sAC PET and sMR images showed a high degree of similarity to the real AC PET and real MR images. These results indicate that our proposed method can reduce the frequency of additional anatomical imaging scans during PET imaging and has great potential for improving doctors' clinical diagnostic efficiency, reducing patients' costs and reducing the radiation risk brought by CT scanning.
Affiliation(s)
- Changhui Jiang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China.,National Innovation Center for Advanced Medical Devices, Shenzhen 518131, People's Republic of China
- Xu Zhang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou 510060, People's Republic of China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China
- Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou 510060, People's Republic of China
- Jianmin Yuan
- Central Research Institute, Shanghai United Imaging Healthcare, Shanghai 201807, People's Republic of China
- Qiang He
- Central Research Institute, Shanghai United Imaging Healthcare, Shanghai 201807, People's Republic of China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou 510060, People's Republic of China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China
13
Mao X, Zhao S, Gao D, Hu Z, Zhang N. Direct and indirect parameter imaging methods for dynamic PET. Biomed Phys Eng Express 2021; 7. [PMID: 34087810] [DOI: 10.1088/2057-1976/ac086c]
Abstract
The method of reconstructing parametric images from dynamic positron emission tomography (PET) data with the linear Patlak model has been widely used in scientific research and clinical practice. Both direct and indirect parametric image reconstruction methods and their effects have been investigated in depth. Among the existing methods, the traditional maximum likelihood expectation maximization (MLEM) reconstruction algorithm is fast but produces a substantial amount of noise. If the parametric images obtained by the MLEM algorithm are postfiltered, a large amount of image edge information is lost. Additionally, although the kernel method has a better noise reduction effect, its computational cost is very high due to the complexity of the algorithm. Therefore, to obtain parametric images with a high signal-to-noise ratio (SNR) and good retention of detailed information, we use guided kernel means (GKM) and dynamic PET image information to perform guided filtering in parametric image reconstruction. We apply this method to both direct and indirect reconstruction, and through computer simulations we show that our proposed method has higher identifiability and a greater SNR than conventional direct and indirect reconstruction methods. We also show that our method produces better images with direct than with indirect reconstruction.
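For context, the linear Patlak model writes the tissue activity as C_T(t) = Ki·∫Cp dτ + V·Cp(t), so indirect parametric imaging fits a straight line per voxel over the late frames. The sketch below shows that per-voxel fit on synthetic data; the input function, frame times and start frame are assumed placeholders.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def patlak_fit(tac, cp, t, start_frame=6):
    """Indirect Patlak fit for one voxel time-activity curve (TAC):
    y = C_T/Cp versus x = (integral of Cp)/Cp is linear over late frames,
    with slope Ki and intercept V."""
    cum_cp = cumulative_trapezoid(cp, t, initial=0.0)
    x, y = cum_cp / cp, tac / cp
    ki, v = np.polyfit(x[start_frame:], y[start_frame:], 1)
    return ki, v

# Synthetic input function and a voxel TAC that obeys the Patlak model.
t = np.linspace(0.5, 60.0, 24)                 # frame mid-times (minutes)
cp = 10.0 * np.exp(-0.1 * t) + 1.0             # toy plasma input function
cum_cp = cumulative_trapezoid(cp, t, initial=0.0)
tac = 0.05 * cum_cp + 0.3 * cp                 # Ki = 0.05, V = 0.3
print(patlak_fit(tac, cp, t))                  # recovers ~ (0.05, 0.3)
```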
Affiliation(s)
- Xin Mao
- School of Physics and Microelectronics, Zhengzhou University, Zhengzhou 450001, People's Republic of China.,Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Shujun Zhao
- School of Physics and Microelectronics, Zhengzhou University, Zhengzhou 450001, People's Republic of China
- Dongfang Gao
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
14
Fu J, Feng F, Quan H, Wan Q, Chen Z, Liu X, Zheng H, Liang D, Cheng G, Hu Z. PWLS-PR: low-dose computed tomography image reconstruction using a patch-based regularization method based on the penalized weighted least squares total variation approach. Quant Imaging Med Surg 2021; 11:2541-2559. [PMID: 34079722] [PMCID: PMC8107320] [DOI: 10.21037/qims-20-963]
Abstract
BACKGROUND Radiation exposure from computed tomography (CT) scans and the associated risk of cancer in patients have been major clinical concerns. Existing research can achieve low-dose CT imaging by reducing the X-ray current and the number of projections per rotation of the human body. However, this approach may produce excessive noise and streak artifacts in images reconstructed with the traditional filtered back projection (FBP) algorithm. METHODS To solve this problem, iterative image reconstruction is a promising option for obtaining high-quality images from low-dose scans. This paper proposes a patch-based regularization method based on penalized weighted least squares total variation (PWLS-PR) for iterative image reconstruction. This method uses neighborhood patches instead of single pixels to calculate the nonquadratic penalty. The proposed regularization method is more robust than the conventional regularization method at identifying random fluctuations caused by sharp edges and noise. Each iteration of the proposed algorithm can be described in the following three steps: image updating via total variation based on penalized weighted least squares (PWLS-TV), image smoothing, and pixel-by-pixel image fusion. RESULTS Simulation and real-world projection experiments show that the proposed PWLS-PR algorithm achieves higher image reconstruction performance than similar algorithms. The effectiveness of the method is also verified through the qualitative and quantitative evaluation of the simulation experiments. CONCLUSIONS Furthermore, this study shows that the PWLS-PR method reduces the amount of projection data required for repeated CT scans and has useful potential for reducing the radiation dose in clinical medical applications.
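The abstract names the three steps per iteration but not their internals, so the sketch below is only schematic: a gradient step on a weighted least-squares plus (approximate) total-variation objective, Gaussian smoothing, and a pixel-wise fusion of the two. The forward operator, statistical weights, step size and fusion weight are assumed placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tv_gradient(img, eps=1e-8):
    """Gradient of a smoothed (approximate) total-variation penalty."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    div_x = np.diff(gx / mag, axis=1, prepend=(gx / mag)[:, :1])
    div_y = np.diff(gy / mag, axis=0, prepend=(gy / mag)[:1, :])
    return -(div_x + div_y)

def pwls_pr_iteration(x, A, y, w, shape, beta=0.05, step=1e-3, lam=0.5):
    """One illustrative PWLS-PR-style iteration:
    (1) gradient step on the weighted least-squares + TV objective,
    (2) smoothing of the update, (3) pixel-by-pixel fusion of the two."""
    grad = (A.T @ (w * (A @ x.ravel() - y))).reshape(shape) + beta * tv_gradient(x)
    x_tv = x - step * grad                      # step 1: PWLS-TV update
    x_smooth = gaussian_filter(x_tv, sigma=1)   # step 2: image smoothing
    return lam * x_tv + (1 - lam) * x_smooth    # step 3: pixel-wise fusion

# Toy 16x16 problem with a random forward operator and uniform weights.
rng = np.random.default_rng(0)
shape = (16, 16)
A = rng.normal(size=(200, 256))
x_true = np.zeros(shape); x_true[4:12, 4:12] = 1.0
y = A @ x_true.ravel() + rng.normal(0, 0.5, 200)
w = np.ones(200)
x = np.zeros(shape)
for _ in range(100):
    x = pwls_pr_iteration(x, A, y, w, shape)
```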
Affiliation(s)
- Jing Fu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Fei Feng
- Department of Radiology, Peking University Shenzhen Hospital, Shenzhen, China
- Huimin Quan
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Qian Wan
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Zixiang Chen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guanxun Cheng
- Department of Radiology, Peking University Shenzhen Hospital, Shenzhen, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
15
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008]
16
Xue H, Zhang Q, Zou S, Zhang W, Zhou C, Tie C, Wan Q, Teng Y, Li Y, Liang D, Liu X, Yang Y, Zheng H, Zhu X, Hu Z. LCPR-Net: low-count PET image reconstruction using the domain transform and cycle-consistent generative adversarial networks. Quant Imaging Med Surg 2021; 11:749-762. [PMID: 33532274] [PMCID: PMC7779905] [DOI: 10.21037/qims-20-66]
Abstract
BACKGROUND Reducing the radiotracer dose and scanning time during positron emission tomography (PET) imaging can reduce the cost of the tracer, reduce motion artifacts, and increase the efficiency of the scanner. However, the reconstructed images tend to be noisy. It is therefore very important to reconstruct high-quality images from low-count (LC) data. We propose a deep learning method called LCPR-Net, which directly reconstructs full-count (FC) PET images from corresponding LC sinogram data. METHODS Based on the framework of a generative adversarial network (GAN), we enforce a cyclic consistency constraint on the least-squares loss to establish a nonlinear end-to-end mapping from LC sinograms to FC images. In this process, we merge a convolutional neural network (CNN) and a residual network for feature extraction and image reconstruction. In addition, a domain transform (DT) operation sends a priori information to the cycle-consistent GAN (CycleGAN) network, avoiding the need for a large amount of computational resources to learn this transformation. RESULTS The main advantages of this method are as follows. First, the network can use LC sinogram data as input to directly reconstruct an FC PET image. The reconstruction speed is faster than that provided by model-based iterative reconstruction. Second, reconstruction based on the CycleGAN framework improves the quality of the reconstructed image. CONCLUSIONS Compared with other state-of-the-art methods, the quantitative and qualitative evaluation results show that the proposed method is accurate and effective for FC PET image reconstruction.
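The training objective described above combines a least-squares adversarial loss with a cycle-consistency constraint. The sketch below writes out these loss terms on plain arrays as an illustration; the network outputs, the weighting λ and the handling of the DT prior are assumptions not given in the abstract.

```python
import numpy as np

def lsgan_discriminator_loss(d_real, d_fake):
    """Least-squares adversarial loss for the discriminator:
    push scores on real images toward 1 and on generated images toward 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_generator_loss(d_fake):
    """Least-squares adversarial loss for the generator:
    push discriminator scores on generated images toward 1."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

def cycle_consistency_loss(x, x_cycled, lam=10.0):
    """L1 cycle-consistency: mapping to the other domain and back
    should return the original input."""
    return lam * np.mean(np.abs(x - x_cycled))

# Toy tensors standing in for discriminator outputs and a cycled image batch.
rng = np.random.default_rng(0)
d_real, d_fake = rng.random(8), rng.random(8)
x = rng.random((8, 64, 64)); x_cycled = x + 0.01 * rng.normal(size=x.shape)
total_g = lsgan_generator_loss(d_fake) + cycle_consistency_loss(x, x_cycled)
```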
Affiliation(s)
- Hengzhi Xue
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Sijuan Zou
- Department of Nuclear Medicine and PET, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Weiguang Zhang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Changjun Tie
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qian Wan
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yueyang Teng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yongchang Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xiaohua Zhu
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
17
DPIR-Net: Direct PET Image Reconstruction Based on the Wasserstein Generative Adversarial Network. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.2995717]
18
Zeng T, Gao J, Gao D, Kuang Z, Sang Z, Wang X, Hu L, Chen Q, Chu X, Liang D, Liu X, Yang Y, Zheng H, Hu Z. A GPU-accelerated fully 3D OSEM image reconstruction for a high-resolution small animal PET scanner using dual-ended readout detectors. Phys Med Biol 2020; 65:245007. [DOI: 10.1088/1361-6560/aba6f9]
19
Hu Z, Li Y, Zou S, Xue H, Sang Z, Liu X, Yang Y, Zhu X, Liang D, Zheng H. Obtaining PET/CT images from non-attenuation corrected PET images in a single PET system using Wasserstein generative adversarial networks. Phys Med Biol 2020; 65:215010. [DOI: 10.1088/1361-6560/aba5e9]