1. Nakashima M, Fukui R, Sugimoto S, Iguchi T. Deep learning-based approach for acquisition time reduction in ventilation SPECT in patients after lung transplantation. Radiol Phys Technol 2025;18:47-57. PMID: 39441494. DOI: 10.1007/s12194-024-00853-3.
Abstract
We aimed to evaluate the image quality and diagnostic performance for chronic lung allograft dysfunction (CLAD) of lung ventilation single-photon emission computed tomography (SPECT) images acquired in a short time and enhanced with a convolutional neural network (CNN) in patients after lung transplantation, and to explore the feasibility of short acquisition times. We retrospectively identified 93 consecutive lung-transplant recipients who underwent ventilation SPECT/computed tomography (CT). We employed a CNN to predict full-time-equivalent images from those acquired in a short time. The image quality was evaluated using the structural similarity index (SSIM) loss and normalized mean square error (NMSE). The correlation between functional volume/morphological volume (F/M) ratios of full-time SPECT images and predicted SPECT images was evaluated. Differences in the F/M ratio were evaluated using Bland-Altman plots, and the diagnostic performance was compared using the area under the curve (AUC). The learning curve, obtained using MSE, converged within 100 epochs. The NMSE was significantly lower (P < 0.001) and the SSIM was significantly higher (P < 0.001) for the CNN-predicted SPECT images compared to the short-time SPECT images. The F/M ratio of full-time SPECT images and predicted SPECT images showed a significant correlation (r = 0.955, P < 0.0001). The Bland-Altman plot revealed a bias of -7.90% in the F/M ratio. The AUC values were 0.942 for full-time SPECT images, 0.934 for predicted SPECT images and 0.872 for short-time SPECT images. Our findings suggest that a deep-learning-based approach can significantly curtail the acquisition time of ventilation SPECT while preserving the image quality and diagnostic accuracy for CLAD.
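As a point of reference for the image-quality metrics used above, the sketch below shows one common way of computing SSIM and NMSE against a full-time reference with NumPy and scikit-image. The toy activity map, count levels, and the noise added to stand in for a CNN output are illustrative assumptions only, not the authors' data, model, or code.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def nmse(reference, test):
    """Normalized mean square error relative to the reference image."""
    reference = reference.astype(np.float64)
    test = test.astype(np.float64)
    return np.sum((reference - test) ** 2) / np.sum(reference ** 2)

# Toy data: a hypothetical tracer-activity volume and two acquisitions of it.
rng = np.random.default_rng(0)
activity = rng.gamma(shape=2.0, scale=10.0, size=(64, 64, 64))        # stand-in ventilation distribution
full_time = rng.poisson(activity).astype(np.float64)                  # 100% acquisition time
short_time = (rng.poisson(activity * 0.2) / 0.2).astype(np.float64)   # 20% time, rescaled to the same level
predicted = full_time + rng.normal(0.0, 1.0, size=full_time.shape)    # stand-in for a CNN-predicted volume

for name, img in [("short-time", short_time), ("CNN-predicted", predicted)]:
    s = ssim(full_time, img, data_range=full_time.max() - full_time.min())
    print(f"{name}: SSIM = {s:.3f}, NMSE = {nmse(full_time, img):.4f}")
```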
Affiliation(s)
- Masahiro Nakashima: Division of Radiological Technology, Okayama University Hospital, 2-5-1 Shikatacho, Kitaku, Okayama, 700-8558, Japan
- Ryohei Fukui: Department of Radiological Technology, Faculty of Health Sciences, Okayama University, 2-5-1 Shikatacho, Kitaku, Okayama, 700-8558, Japan
- Seiichiro Sugimoto: Department of General Thoracic Surgery and Breast and Endocrinological Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, 2-5-1 Shikatacho, Kitaku, Okayama, 700-8558, Japan
- Toshihiro Iguchi: Department of Radiological Technology, Faculty of Health Sciences, Okayama University, 2-5-1 Shikatacho, Kitaku, Okayama, 700-8558, Japan
2. Csikos C, Barna S, Kovács Á, Czina P, Budai Á, Szoliková M, Nagy IG, Husztik B, Kiszler G, Garai I. AI-Based Noise-Reduction Filter for Whole-Body Planar Bone Scintigraphy Reliably Improves Low-Count Images. Diagnostics (Basel) 2024;14:2686. PMID: 39682594. DOI: 10.3390/diagnostics14232686.
Abstract
Background/Objectives: Artificial intelligence (AI) is a promising tool for the enhancement of physician workflow and serves to further improve the efficiency of their diagnostic evaluations. This study aimed to assess the performance of an AI-based bone scan noise-reduction filter on noisy, low-count images in a routine clinical environment.
Methods: The performance of the AI bone-scan filter (BS-AI filter) in question was retrospectively evaluated on 47 different patients' 99mTc-MDP bone scintigraphy image pairs (anterior- and posterior-view images), which were obtained in such a manner as to represent the diverse characteristics of the general patient population. The BS-AI filter was tested on artificially degraded noisy images (75%, 50%, and 25% of total counts), which were generated by binomial sampling. The AI-filtered and unfiltered images were concurrently appraised for image quality and contrast by three nuclear medicine physicians. It was also determined whether there was any difference between the lesions seen on the unfiltered and filtered images. For quantitative analysis, an automatic lesion detector (BS-AI annotator) was utilized as a segmentation algorithm. The total number of lesions and their locations as detected by the BS-AI annotator in the BS-AI-filtered low-count images were compared to those in the total-count filtered images. The total number of pixels labeled as lesions in the filtered low-count images was also compared to the number of pixels in the total-count filtered images, to ensure that the filtering process did not change lesion sizes significantly. The comparison of pixel numbers was performed using the reduced-count filtered images that contained only those lesions that were detected in the total-count images.
Results: Based on visual assessment, observers agreed that image contrast and quality were better in the BS-AI-filtered images, increasing their diagnostic confidence. Similarities in lesion numbers and sites detected by the BS-AI annotator compared to the filtered total-count images were 89%, 83%, and 75% for images degraded to counts of 75%, 50%, and 25%, respectively. No significant difference was found in the number of annotated pixels between filtered images with different counts (p > 0.05).
Conclusions: Our findings indicate that the BS-AI noise-reduction filter enhances image quality and contrast without loss of vital information. The implementation of this filter in routine diagnostic procedures reliably improves diagnostic confidence in low-count images and permits a reduction in the administered dose or acquisition time by a minimum of 50% relative to the original dose or acquisition time.
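The count-degradation step described above (binomial sampling down to 75%, 50%, and 25% of total counts) can be illustrated with a short NumPy sketch. The image size and count level below are hypothetical stand-ins rather than the study's data, and the function name is invented for illustration.

```python
import numpy as np

def degrade_counts(count_image, fraction, rng=None):
    """Simulate a reduced-count acquisition by binomial thinning:
    each detected count is kept independently with probability `fraction`."""
    rng = rng or np.random.default_rng()
    counts = np.asarray(count_image, dtype=np.int64)
    return rng.binomial(counts, fraction)

# Hypothetical anterior-view planar bone scan stored as a raw count map.
rng = np.random.default_rng(42)
full_count_image = rng.poisson(20, size=(1024, 256))

low_count_images = {f: degrade_counts(full_count_image, f, rng) for f in (0.75, 0.50, 0.25)}
for fraction, image in low_count_images.items():
    kept = image.sum() / full_count_image.sum()
    print(f"{int(fraction * 100)}% target: {kept:.1%} of the original counts retained")
```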
Affiliation(s)
- Csaba Csikos: Division of Nuclear Medicine and Translational Imaging, Department of Medical Imaging, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary; Gyula Petrányi Doctoral School of Clinical Immunology and Allergology, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Sándor Barna: Division of Nuclear Medicine and Translational Imaging, Department of Medical Imaging, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary; Scanomed Ltd., H-4032 Debrecen, Hungary
- Péter Czina: Division of Nuclear Medicine and Translational Imaging, Department of Medical Imaging, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Iván Gábor Nagy: Division of Nuclear Medicine and Translational Imaging, Department of Medical Imaging, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary
- Ildikó Garai: Division of Nuclear Medicine and Translational Imaging, Department of Medical Imaging, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary; Gyula Petrányi Doctoral School of Clinical Immunology and Allergology, Faculty of Medicine, University of Debrecen, H-4032 Debrecen, Hungary; Scanomed Ltd., H-4032 Debrecen, Hungary
3. Qi N, Pan B, Meng Q, Yang Y, Ding J, Yuan Z, Gong NJ, Zhao J. Clinical performance of deep learning-enhanced ultrafast whole-body scintigraphy in patients with suspected malignancy. BMC Med Imaging 2024;24:236. PMID: 39251959. PMCID: PMC11385493. DOI: 10.1186/s12880-024-01422-1.
Abstract
BACKGROUND To evaluate the clinical performance of two deep learning methods, one utilizing real clinical pairs and the other utilizing simulated datasets, in enhancing image quality for two-dimensional (2D) fast whole-body scintigraphy (WBS).
METHODS A total of 83 patients with suspected bone metastasis were retrospectively enrolled. All patients underwent single-photon emission computed tomography (SPECT) WBS at speeds of 20 cm/min (1x), 40 cm/min (2x), and 60 cm/min (3x). Two deep learning models were developed to generate high-quality images from real and simulated fast scans, designated 2x-real and 3x-real (images from real fast data) and 2x-simu and 3x-simu (images from simulated fast data), respectively. A 5-point Likert scale was used to evaluate the image quality of each acquisition. Accuracy, sensitivity, specificity, and the area under the curve (AUC) were used to evaluate diagnostic efficacy. Learned perceptual image patch similarity (LPIPS) and the Fréchet inception distance (FID) were used to assess image quality. Additionally, the count-level consistency of WBS was compared between the two models.
RESULTS Subjective assessments revealed that the 1x images had the highest general image quality (Likert score: 4.40 ± 0.45). The 2x-real, 2x-simu and 3x-real, 3x-simu images demonstrated significantly better quality than the 2x and 3x images (Likert scores: 3.46 ± 0.47, 3.79 ± 0.55 vs. 2.92 ± 0.41, P < 0.0001; 2.69 ± 0.40, 2.61 ± 0.41 vs. 1.36 ± 0.51, P < 0.0001), respectively. Notably, the quality of the 2x-real images was inferior to that of the 2x-simu images (Likert scores: 3.46 ± 0.47 vs. 3.79 ± 0.55, P = 0.001). The diagnostic efficacy for the 2x-real and 2x-simu images was indistinguishable from that of the 1x images (accuracy: 81.2%, 80.7% vs. 84.3%; sensitivity: 77.27%, 77.27% vs. 87.18%; specificity: 87.18%, 84.63% vs. 87.18%. All P > 0.05), whereas the diagnostic efficacy for the 3x-real and 3x-simu was better than that for the 3x images (accuracy: 65.1%, 66.35% vs. 59.0%; sensitivity: 63.64%, 63.64% vs. 64.71%; specificity: 66.67%, 69.23% vs. 55.1%. All P < 0.05). Objectively, both the real and simulated models achieved significantly enhanced image quality from the accelerated scans in the 2x and 3x groups (FID: 0.15 ± 0.18, 0.18 ± 0.18 vs. 0.47 ± 0.34; 0.19 ± 0.23, 0.20 ± 0.22 vs. 0.98 ± 0.59. LPIPS 0.17 ± 0.05, 0.16 ± 0.04 vs. 0.19 ± 0.05; 0.18 ± 0.05, 0.19 ± 0.05 vs. 0.23 ± 0.04. All P < 0.05). The count-level consistency with the 1x images was excellent for all four sets of model-generated images (P < 0.0001).
CONCLUSIONS Ultrafast 2x speed (real and simulated) images achieved comparable diagnostic value to that of standardly acquired images, but the simulation algorithm does not necessarily reflect real data.
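For orientation, the snippet below sketches how LPIPS and FID are commonly computed with the open-source lpips and torchmetrics packages (the `normalize=True` option assumes a recent torchmetrics release). The tensors are toy stand-ins for standard-speed and model-enhanced scans; this is a generic illustration, not the study's evaluation pipeline.

```python
import torch
import lpips                                                   # pip install lpips
from torchmetrics.image.fid import FrechetInceptionDistance    # pip install "torchmetrics[image]"

# Toy batches: a "standard-speed" reference and a slightly perturbed "enhanced" batch.
standard = torch.rand(8, 1, 256, 256)
enhanced = (standard + 0.05 * torch.randn_like(standard)).clamp(0, 1)

to_rgb = lambda x: x.repeat(1, 3, 1, 1)   # both metrics expect 3-channel inputs

# LPIPS: perceptual distance, lower means closer to the reference; inputs scaled to [-1, 1].
lpips_fn = lpips.LPIPS(net="alex")
lpips_score = lpips_fn(to_rgb(standard) * 2 - 1, to_rgb(enhanced) * 2 - 1).mean()

# FID: distance between Inception-feature distributions of reference and generated images.
fid = FrechetInceptionDistance(feature=64, normalize=True)     # normalize=True -> float inputs in [0, 1]
fid.update(to_rgb(standard), real=True)
fid.update(to_rgb(enhanced), real=False)

print(f"LPIPS = {lpips_score.item():.3f}, FID = {fid.compute().item():.3f}")
```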
Affiliation(s)
- Na Qi: Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, 200120, China
- Boyang Pan: RadioDynamic Healthcare, Shanghai, China
- Qingyuan Meng: Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, 200120, China
- Yihong Yang: Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, 200120, China
- Jie Ding: Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, 200120, China
- Zengbei Yuan: Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, 200120, China
- Nan-Jie Gong: Tsinghua Cross-Strait Research Institute, Laboratory of Intelligent Medical Imaging, Beijing, China
- Jun Zhao: Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, 200120, China
4. Pan Z, Qi N, Meng Q, Pan B, Feng T, Zhao J, Gong NJ. Fast SPECT/CT planar bone imaging enabled by deep learning enhancement. Med Phys 2024;51:5414-5426. PMID: 38652084. DOI: 10.1002/mp.17094.
Abstract
BACKGROUND The application of deep learning methods in rapid bone scintigraphy is increasingly promising for minimizing the duration of SPECT examinations. Recent works showed several deep learning models based on simulated data for the synthesis of high-count bone scintigraphy images from low-count counterparts. Few studies have been conducted and validated on real clinical pairs due to the misalignment inherent in multiple scan procedures.
PURPOSE To generate high-quality whole-body bone images from 2× and 3× fast scans using a deep learning-based enhancement method.
MATERIALS AND METHODS Seventy-six patients who underwent whole-body bone scans were enrolled in this prospective study. All patients underwent a standard scan at a speed of 20 cm/min, followed by fast scans consisting of 2× and 3× accelerations at speeds of 40 and 60 cm/min. A content-attention image restoration approach based on the Residual-in-Residual Dense Block (RRDB) is introduced to effectively recover high-quality images from fast scans with fine details and less noise. Our approach is robust to the misalignment introduced by the patient's metabolism and shows valid count-level consistency. Learned Perceptual Image Patch Similarity (LPIPS) and Fréchet Inception Distance (FID) are employed in evaluating the similarity to the standard bone images. To further prove our method practical in clinical settings, the image quality of the anonymized images was evaluated by two experienced nuclear physicians on a 5-point Likert scale (5 = excellent).
RESULTS The proposed method reaches state-of-the-art performance on FID and LPIPS with 0.583 and 0.176 for 2× fast scans and 0.583 and 0.185 for 3× fast scans. Clinical evaluation further demonstrated that the restored images showed a significant improvement over the fast scans in image quality, technetium-99m methyl diphosphonate (Tc-99m MDP) distribution, artifacts, and diagnostic confidence.
CONCLUSIONS Our method was validated for accelerating whole-body bone scans by introducing real clinical data. Confirmed by nuclear medicine physicians, the proposed method can effectively enhance image diagnostic value, demonstrating potential for efficient high-quality fast bone imaging in practical settings.
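The Residual-in-Residual Dense Block (RRDB) named above originates from ESRGAN-style image restoration. The PyTorch sketch below shows a minimal, generic RRDB with arbitrary channel sizes; it is not the authors' content-attention network, and the surrounding model (feature extraction, reconstruction head, loss) is omitted.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Five densely connected 3x3 convolutions with residual scaling (ESRGAN-style)."""
    def __init__(self, channels=64, growth=32, scale=0.2):
        super().__init__()
        self.scale = scale
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth if i < 4 else channels, 3, padding=1)
            for i in range(5)
        )
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        feats = [x]
        out = x
        for i, conv in enumerate(self.convs):
            out = conv(torch.cat(feats, dim=1))
            if i < 4:                      # last conv is left unactivated
                out = self.act(out)
                feats.append(out)
        return x + self.scale * out        # local residual with scaling

class RRDB(nn.Module):
    """Residual-in-Residual Dense Block: three dense blocks inside an outer residual."""
    def __init__(self, channels=64, growth=32, scale=0.2):
        super().__init__()
        self.scale = scale
        self.blocks = nn.Sequential(*(ResidualDenseBlock(channels, growth, scale) for _ in range(3)))

    def forward(self, x):
        return x + self.scale * self.blocks(x)

# Toy forward pass on a feature map (e.g. produced by a shallow conv over a bone scan).
features = torch.randn(1, 64, 128, 128)
print(RRDB()(features).shape)   # torch.Size([1, 64, 128, 128])
```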
Affiliation(s)
- Na Qi: Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China
- Qingyuan Meng: Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China
- Boyang Pan: RadioDynamic Healthcare, Shanghai, China
- Tao Feng: Laboratory for Intelligent Medical Imaging, Tsinghua Cross-Strait Research Institute, Beijing, China
- Jun Zhao: Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China
- Nan-Jie Gong: Laboratory for Intelligent Medical Imaging, Tsinghua Cross-Strait Research Institute, Beijing, China
5. Murata T, Hashimoto T, Onoguchi M, Shibutani T, Iimori T, Sawada K, Umezawa T, Masuda Y, Uno T. Verification of image quality improvement of low-count bone scintigraphy using deep learning. Radiol Phys Technol 2024;17:269-279. PMID: 38336939. DOI: 10.1007/s12194-023-00776-5.
Abstract
To improve image quality for low-count bone scintigraphy using deep learning and evaluate its clinical applicability. Six hundred patients (training, 500; validation, 50; evaluation, 50) were included in this study. Low-count original images (75%, 50%, 25%, 10%, and 5% counts) were generated from reference images (100% counts) using Poisson resampling. Output (DL-filtered) images were obtained after training with U-Net using reference images as teacher data. Gaussian-filtered images were generated for comparison. Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) to the reference image were calculated to determine image quality. Artificial neural network (ANN) value, bone scan index (BSI), and number of hotspots (Hs) were computed using BONENAVI analysis to assess diagnostic performance. Accuracy of bone metastasis detection and area under the curve (AUC) were calculated. PSNR and SSIM for DL-filtered images were highest at all count percentages. BONENAVI analysis values for DL-filtered images did not differ significantly, regardless of the presence or absence of bone metastases. BONENAVI analysis values for original and Gaussian-filtered images differed significantly at ≤25% counts in patients without bone metastases. In patients with bone metastases, BSI and Hs for original and Gaussian-filtered images differed significantly at ≤10% counts, whereas ANN values did not. The accuracy of bone metastasis detection was highest for DL-filtered images at all count percentages; the AUC did not differ significantly. The deep learning method improved image quality and bone metastasis detection accuracy for low-count bone scintigraphy, suggesting its clinical applicability.
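The Poisson-resampling step described above, which derives the 75% down to 5% count images from the 100%-count reference, can be sketched as follows. The count map, fractions, and the PSNR comparison against the rescaled low-count image are illustrative assumptions, not the study's actual preprocessing or U-Net pipeline.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio as psnr

def poisson_resample(count_image, fraction, rng=None):
    """Simulate a low-count acquisition: scale the expected counts by `fraction`
    and redraw every pixel from a Poisson distribution."""
    rng = rng or np.random.default_rng()
    return rng.poisson(np.asarray(count_image, dtype=np.float64) * fraction)

# Hypothetical 100%-count reference bone scan (raw count map).
rng = np.random.default_rng(7)
reference = rng.poisson(30, size=(1024, 256)).astype(np.float64)

for fraction in (0.75, 0.50, 0.25, 0.10, 0.05):
    low = poisson_resample(reference, fraction, rng).astype(np.float64)
    rescaled = low / fraction   # bring the low-count image back to the reference count level
    value = psnr(reference, rescaled, data_range=reference.max())
    print(f"{int(fraction * 100):>3d}% counts: PSNR = {value:.1f} dB")
```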
Affiliation(s)
- Taisuke Murata: Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan; Department of Quantum Medical Technology, Graduate School of Medical Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa, 920-0942, Japan
- Takuma Hashimoto: Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Masahisa Onoguchi: Department of Quantum Medical Technology, Graduate School of Medical Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa, 920-0942, Japan
- Takayuki Shibutani: Department of Quantum Medical Technology, Graduate School of Medical Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa, 920-0942, Japan
- Takashi Iimori: Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Koichi Sawada: Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Tetsuro Umezawa: Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Yoshitada Masuda: Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Takashi Uno: Department of Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, Chiba, 260-8670, Japan