1. Rai S, Bhatt JS, Patra SK. An AI-Based Low-Risk Lung Health Image Visualization Framework Using LR-ULDCT. Journal of Imaging Informatics in Medicine 2024;37:2047-2062. [PMID: 38491236] [DOI: 10.1007/s10278-024-01062-5]
Abstract
In this article, we propose an AI-based low-risk visualization framework for lung health monitoring using low-resolution ultra-low-dose CT (LR-ULDCT). We present a novel deep cascade processing workflow that achieves diagnostic visualization on LR-ULDCT (<0.3 mSv) on par with high-resolution CT (HRCT) acquired at around 100 mSv. To this end, we build a low-risk and affordable deep cascade network comprising three sequential deep processes: restoration, super-resolution (SR), and segmentation. Given a degraded LR-ULDCT, the first novel network learns the restoration function in an unsupervised manner from augmented patch-based dictionaries and residuals. The restored version is then super-resolved to the target (sensor) resolution. Here, we combine perceptual and adversarial losses in a novel GAN to establish the closeness between the probability distributions of the generated SR-ULDCT and the restored LR-ULDCT. The SR-ULDCT is then presented to the segmentation network, which first separates the chest portion from the SR-ULDCT and then performs lobe-wise colorization. Finally, we extract the five lobes to account for the presence of ground-glass opacity (GGO) in the lung. Hence, our AI-based system provides low-risk visualization of the degraded input LR-ULDCT at various stages, i.e., restored LR-ULDCT, restored SR-ULDCT, and segmented SR-ULDCT, and achieves the diagnostic power of HRCT. We perform case studies on real datasets of COVID-19, pneumonia, and pulmonary edema/congestion, comparing our results with the state of the art. Ablation experiments are conducted to better visualize the different operating pipelines. Finally, we present a verification report by fourteen (14) experienced radiologists and pulmonologists.
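The three-stage cascade described above (restoration → super-resolution → segmentation) can be sketched as a composition of stage functions. The sketch below stands in for the paper's deep networks with classical placeholders (a mean filter, nearest-neighbour upsampling, and a threshold mask); all function names, thresholds, and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def restore(img):
    """Denoise: 3x3 mean filter via shifted sums (placeholder for the
    dictionary/residual restoration network)."""
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(img, dy, 0), dx, 1)
    return out / 9.0

def super_resolve(img, factor=2):
    """Upsample to the target grid (placeholder for the GAN-based SR stage)."""
    return np.repeat(np.repeat(img, factor, 0), factor, 1)

def segment(img, thr=0.5):
    """Binary foreground mask (placeholder for the segmentation stage)."""
    return (img > thr).astype(np.uint8)

def cascade(img):
    # restored LR -> super-resolved -> segmented, as in the paper's pipeline
    return segment(super_resolve(restore(img)))

rng = np.random.default_rng(0)
ldct = np.clip(rng.normal(0.3, 0.1, (32, 32)), 0, 1)
ldct[8:24, 8:24] += 0.5  # bright toy "lobe" region
mask = cascade(ldct)
```

Each stage consumes the previous stage's output, so intermediate results (restored, super-resolved) remain available for visualization, mirroring the staged views the framework exposes.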
Affiliation(s)
- Swati Rai
- Indian Institute of Information Technology Vadodara, Vadodara, India
- Jignesh S Bhatt
- Indian Institute of Information Technology Vadodara, Vadodara, India
2. Rofena A, Guarrasi V, Sarli M, Piccolo CL, Sammarra M, Zobel BB, Soda P. A deep learning approach for virtual contrast enhancement in Contrast Enhanced Spectral Mammography. Comput Med Imaging Graph 2024;116:102398. [PMID: 38810487] [DOI: 10.1016/j.compmedimag.2024.102398]
Abstract
Contrast Enhanced Spectral Mammography (CESM) is a dual-energy mammographic imaging technique that first requires intravenously administering an iodinated contrast medium. It then collects both a low-energy image, comparable to standard mammography, and a high-energy image, and the two scans are combined to produce a recombined image showing contrast enhancement. Despite the diagnostic advantages of CESM for breast cancer diagnosis, the contrast medium can cause side effects, and CESM also exposes patients to a higher radiation dose than standard mammography. To address these limitations, this work proposes using deep generative models for virtual contrast enhancement on CESM, aiming to make CESM contrast-free and to reduce the radiation dose. Our deep networks, consisting of an autoencoder and two generative adversarial networks, Pix2Pix and CycleGAN, generate synthetic recombined images solely from low-energy images. We perform an extensive quantitative and qualitative analysis of the models' performance, also exploiting radiologists' assessments, on a novel CESM dataset that includes 1138 images. As a further contribution of this work, we make the dataset publicly available. The results show that CycleGAN is the most promising deep network for generating synthetic recombined images, highlighting the potential of artificial intelligence techniques for virtual contrast enhancement in this field.
Affiliation(s)
- Aurora Rofena
- Unit of Computer Systems & Bioinformatics, Department of Engineering University Campus Bio-Medico, Rome, Italy
| | - Valerio Guarrasi
- Unit of Computer Systems & Bioinformatics, Department of Engineering University Campus Bio-Medico, Rome, Italy
| | - Marina Sarli
- Department of Radiology, Fondazione Policlinico Campus Bio-Medico, Rome, Italy
| | | | - Matteo Sammarra
- Department of Radiology, Fondazione Policlinico Campus Bio-Medico, Rome, Italy
| | - Bruno Beomonte Zobel
- Department of Radiology, Fondazione Policlinico Campus Bio-Medico, Rome, Italy; Department of Radiology, University Campus Bio-Medico, Rome, Italy
| | - Paolo Soda
- Unit of Computer Systems & Bioinformatics, Department of Engineering University Campus Bio-Medico, Rome, Italy; Department of Radiation Sciences, Radiation Physics, Biomedical Engineering, Umeå University, Sweden.
| |
3. Emoto T, Nagayama Y, Takada S, Sakabe D, Shigematsu S, Goto M, Nakato K, Yoshida R, Harai R, Kidoh M, Oda S, Nakaura T, Hirai T. Super-resolution deep-learning reconstruction for cardiac CT: impact of radiation dose and focal spot size on task-based image quality. Phys Eng Sci Med 2024;47:1001-1014. [PMID: 38884668] [DOI: 10.1007/s13246-024-01423-y]
Abstract
This study aimed to evaluate the impact of radiation dose and focal spot size on the image quality of super-resolution deep-learning reconstruction (SR-DLR) in comparison with iterative reconstruction (IR) and normal-resolution DLR (NR-DLR) algorithms for cardiac CT. A Catphan-700 phantom was scanned on a 320-row scanner at six radiation doses (small and large focal spots at 1.4-4.3 and 5.8-8.8 mGy, respectively). Images were reconstructed using hybrid-IR, model-based IR, NR-DLR, and SR-DLR algorithms. Noise properties were evaluated by plotting the noise power spectrum (NPS). Spatial resolution was quantified with the task-based transfer function (TTF); Polystyrene, Delrin, and Bone-50% inserts were used for low-, intermediate-, and high-contrast spatial resolution. The detectability index (d') was calculated. Image noise, noise texture, edge sharpness of low- and intermediate-contrast objects, delineation of fine high-contrast objects, and overall quality of the four reconstructions were visually ranked. Results indicated that, among the four reconstructions, SR-DLR yielded the lowest noise magnitude and NPS peak, as well as the highest average NPS frequency, TTF50%, d' values, and visual rank at each radiation dose. For all reconstructions, intermediate- to high-contrast spatial resolution was maximized at 4.3 mGy, while the lowest noise magnitude and highest d' were attained at 8.8 mGy. SR-DLR at 4.3 mGy exhibited superior noise performance, intermediate- to high-contrast spatial resolution, d' values, and visual rank compared with the other reconstructions at 8.8 mGy. Therefore, SR-DLR may yield superior diagnostic image quality and facilitate radiation dose reduction compared with the other reconstructions, particularly when combined with small focal spot scanning.
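The task-based metrics used above can be illustrated numerically. The sketch below estimates a 2D noise power spectrum from noise-only ROIs and plugs it into a non-prewhitening-observer detectability index; the Gaussian task function, TTF, and noise levels are toy assumptions, not values from the study.

```python
import numpy as np

def nps_2d(rois, pixel_mm=0.5):
    """Ensemble noise power spectrum from equal-size noise-only ROIs:
    average |DFT|^2 of mean-subtracted ROIs, scaled by pixel area / N^2."""
    n = rois.shape[-1]
    acc = np.zeros((n, n))
    for roi in rois:
        acc += np.abs(np.fft.fft2(roi - roi.mean())) ** 2
    return acc * pixel_mm**2 / (len(rois) * n**2)

def detectability_npw(task, ttf, nps):
    """Non-prewhitening detectability index:
    d'^2 = (sum W^2 TTF^2)^2 / sum(W^2 TTF^2 NPS)."""
    num = (task**2 * ttf**2).sum() ** 2
    den = (task**2 * ttf**2 * nps).sum()
    return np.sqrt(num / den)

rng = np.random.default_rng(0)
n = 32
f = np.sqrt(np.add.outer(np.fft.fftfreq(n)**2, np.fft.fftfreq(n)**2))
task = np.exp(-(f / 0.10) ** 2)   # toy task function (low-contrast object)
ttf = np.exp(-(f / 0.20) ** 2)    # toy task-based transfer function

nps_low = nps_2d(rng.normal(0, 10, (64, n, n)))   # lower-noise protocol
nps_high = nps_2d(rng.normal(0, 20, (64, n, n)))  # higher-noise protocol
d_low = detectability_npw(task, ttf, nps_low)
d_high = detectability_npw(task, ttf, nps_high)
```

Halving the noise standard deviation quarters the NPS and roughly doubles d', which is the mechanism behind the dose/noise trade-offs reported above.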
Affiliation(s)
- Takafumi Emoto
- Department of Central Radiology, Kumamoto University Hospital, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Yasunori Nagayama
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Sentaro Takada
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Daisuke Sakabe
- Department of Central Radiology, Kumamoto University Hospital, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Shinsuke Shigematsu
- Department of Central Radiology, Kumamoto University Hospital, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Makoto Goto
- Department of Central Radiology, Kumamoto University Hospital, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Kengo Nakato
- Department of Central Radiology, Kumamoto University Hospital, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Ryuya Yoshida
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Ryota Harai
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Masafumi Kidoh
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Seitaro Oda
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Toshinori Hirai
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
4. Su Y, Ang LM, Seng KP, Smith J. Deep Learning and Neural Architecture Search for Optimizing Binary Neural Network Image Super Resolution. Biomimetics (Basel) 2024;9:369. [PMID: 38921249] [PMCID: PMC11202081] [DOI: 10.3390/biomimetics9060369]
Abstract
The evolution of super-resolution (SR) technology has seen significant advancements through the adoption of deep learning methods. However, deploying such models on resource-constrained devices requires models that not only perform efficiently but also conserve computational resources. Binary neural networks (BNNs) offer a promising solution by reducing data precision to binary levels, thus lowering computational complexity and memory requirements. However, an effective architecture is essential for BNNs because of their inherent limitations in representing information. Designing such architectures traditionally requires extensive computational resources and time. With the advancement of neural architecture search (NAS), differentiable NAS has emerged as an attractive solution for efficiently crafting network structures. In this paper, we introduce a novel and efficient binary network search method tailored for image super-resolution tasks. We adapt the search space specifically for super-resolution to ensure it is optimally suited to the requirements of such tasks. Furthermore, we incorporate Libra Parameter Binarization (Libra-PB) to maximize information retention during forward propagation. Our experimental results demonstrate that the network structures generated by our method require only a third of the parameters of conventional methods while delivering comparable performance.
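Libra-PB, mentioned above, standardizes weights to zero mean and unit deviation before binarization, so the sign tensor carries maximal information, and applies an integer power-of-two scale that hardware can implement as a bit shift. A minimal numpy sketch of that idea (not the authors' code; the rounding scheme here is a simplification of the published formulation):

```python
import numpy as np

def libra_pb(w):
    """Libra Parameter Binarization (sketch): standardize the weights
    (balancing the sign distribution), then binarize with a single
    power-of-two scale so scaling reduces to a bit shift."""
    w_std = (w - w.mean()) / (w.std() + 1e-12)
    scale = 2.0 ** np.round(np.log2(np.abs(w_std).mean()))
    return np.sign(w_std) * scale

w = np.array([0.3, -0.1, 0.7, -0.5, 0.2, -0.6])
b = libra_pb(w)
```

After binarization every weight shares one magnitude, so a forward pass needs only sign flips, accumulation, and a final shift.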
Affiliation(s)
- Yuanxin Su
- XJTLU Entrepreneur College (Taicang), Xi’an Jiaotong Liverpool University, Taicang 215400, China
- Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 3GJ, UK
- Li-minn Ang
- School of Science, Technology and Engineering, University of the Sunshine Coast, Moreton Bay, QLD 4502, Australia
- Kah Phooi Seng
- XJTLU Entrepreneur College (Taicang), Xi’an Jiaotong Liverpool University, Taicang 215400, China
- School of Science, Technology and Engineering, University of the Sunshine Coast, Moreton Bay, QLD 4502, Australia
- Jeremy Smith
- Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 3GJ, UK
5. Shin M, Seo M, Lee K, Yoon K. Super-resolution techniques for biomedical applications and challenges. Biomed Eng Lett 2024;14:465-496. [PMID: 38645589] [PMCID: PMC11026337] [DOI: 10.1007/s13534-024-00365-4]
Abstract
Super-resolution (SR) techniques have revolutionized the field of biomedical applications by revealing structures at resolutions beyond the limits of imaging or measuring tools. These techniques have been applied in various biomedical applications, including microscopy, magnetic resonance imaging (MRI), computed tomography (CT), X-ray, electroencephalography (EEG), and ultrasound. SR methods fall into two main types: traditional non-learning-based methods and modern learning-based approaches. SR methodologies have been effectively applied both to biomedical images, enhancing the visualization of complex biological structures, and to biomedical data, improving computational precision and efficiency in biomedical simulations. The use of SR techniques has resulted in more detailed and accurate analyses in diagnostics and research, which are essential for early disease detection and treatment planning. However, challenges such as computational demands, data interpretation complexities, and the lack of unified high-quality data persist. The article emphasizes these issues, underscoring the need for ongoing development of SR technologies to further improve biomedical research and patient care outcomes.
Affiliation(s)
- Minwoo Shin
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722, Republic of Korea
- Minjee Seo
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722, Republic of Korea
- Kyunghyun Lee
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722, Republic of Korea
- Kyungho Yoon
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722, Republic of Korea
6. Fok WYR, Fieselmann A, Herbst M, Ritschl L, Kappler S, Saalfeld S. Deep learning in computed tomography super resolution using multi-modality data training. Med Phys 2024;51:2846-2860. [PMID: 37972365] [DOI: 10.1002/mp.16825]
Abstract
BACKGROUND One of the limitations in leveraging the potential of artificial intelligence in X-ray imaging is the limited availability of annotated training data. As X-ray and CT share similar imaging physics, cross-domain data sharing can be achieved by generating labeled synthetic X-ray images from annotated CT volumes as digitally reconstructed radiographs (DRRs). To account for the lower resolution of CT, and hence of CT-generated DRRs, compared with real X-ray images, we propose the use of super-resolution (SR) techniques to enhance the CT resolution before DRR generation. PURPOSE As spatial resolution in CT physics is governed by the modulation transfer function of the reconstruction kernel, we propose to train an SR network on paired low-resolution (LR) and high-resolution (HR) images generated by varying the kernel's shape and cutoff frequency. This differs from previous deep-learning-based SR techniques for RGB and medical images, which focused on refining the sampling grid. Instead of generating LR images by bicubic interpolation, we aim to create realistic multi-detector CT (MDCT)-like LR images from HR cone-beam CT (CBCT) scans. METHODS We propose and evaluate the use of an SR U-Net for the mapping between LR and HR CBCT image slices. We reconstructed paired LR and HR training volumes from the same CT scans with a small in-plane sampling grid size of 0.20 × 0.20 mm². We used the residual U-Net architecture to train two models: SRUN_Res^K, trained with kernel-based LR images, and SRUN_Res^I, trained with bicubic-downsampled data as a baseline. Both models were trained on one CBCT dataset (n = 13,391). The performance of both models was then evaluated on unseen kernel-based and interpolation-based LR CBCT images (n = 10,950), and also on MDCT images (n = 1392). RESULTS Five-fold cross-validation and an ablation study were performed to find the optimal hyperparameters. Both SRUN_Res^K and SRUN_Res^I show significant improvements (p < 0.05) in mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) on unseen CBCT images, and the improvement percentages in MAE, PSNR, and SSIM are larger for SRUN_Res^K than for SRUN_Res^I. For SRUN_Res^K, MAE is reduced by 14%, while PSNR and SSIM increase by 6% and 8%, respectively. SRUN_Res^K thus outperforms SRUN_Res^I, generating sharper images when tested with kernel-based LR CBCT images as well as cross-modality LR MDCT data. CONCLUSIONS Our proposed method showed better performance than the baseline interpolation approach on unseen LR CBCT. We showed that the frequency behavior of the training data is important for learning SR features. Additionally, we showed cross-modality resolution improvements on LR MDCT images. Our approach is, therefore, a first and essential step toward enabling realistic high-spatial-resolution CT-generated DRRs for deep learning training.
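The kernel-based degradation described in the METHODS differs from grid resampling: the LR image keeps the HR grid, but its high frequencies are damped as if reconstructed with a softer kernel. A toy frequency-domain sketch, with a Gaussian stand-in for the kernel's MTF (the cutoff values are illustrative assumptions, not the paper's kernels):

```python
import numpy as np

def mtf_lowpass(img, cutoff):
    """Simulate a lower-resolution CT on the same grid by damping high
    frequencies with a Gaussian MTF-like response (cutoff in
    cycles/pixel). This mimics varying the reconstruction kernel,
    rather than resampling the sampling grid."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    mtf = np.exp(-(fx**2 + fy**2) / (2 * cutoff**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mtf))

rng = np.random.default_rng(1)
hr = rng.normal(size=(64, 64))         # toy HR slice
lr_soft = mtf_lowpass(hr, 0.08)        # softer kernel -> blurrier LR
lr_sharp = mtf_lowpass(hr, 0.25)       # sharper kernel -> mild blur
```

Pairs like `(lr_soft, hr)` are the kind of training data a kernel-based SR model would see; a bicubic baseline would instead shrink and re-enlarge the grid.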
Affiliation(s)
- Wai Yan Ryana Fok
- X-ray Products, Siemens Healthcare GmbH, Forchheim, Germany
- Faculty of Computer Science, Otto-von-Guericke University of Magdeburg, Magdeburg, Germany
- Ludwig Ritschl
- X-ray Products, Siemens Healthcare GmbH, Forchheim, Germany
- Sylvia Saalfeld
- Computational Medicine Group, Ilmenau University of Technology, Ilmenau, Germany
- Research Campus STIMULATE, Otto-von-Guericke University of Magdeburg, Magdeburg, Germany
7. Nam JG, Kang SK, Choi H, Hong W, Park J, Goo JM, Lee JS, Park CM. Sixty-four-fold data reduction of chest radiographs using a super-resolution convolutional neural network. Br J Radiol 2024;97:632-639. [PMID: 38265235] [PMCID: PMC11027241] [DOI: 10.1093/bjr/tqae006]
Abstract
OBJECTIVES To develop and validate a super-resolution (SR) algorithm generating clinically feasible chest radiographs from 64-fold reduced data. METHODS An SR convolutional neural network was trained to produce original-resolution images (output) from 64-fold reduced images (input) using 128 × 128 patches (n = 127,030). For validation, 112 radiographs, including those with pneumothorax (n = 17), nodules (n = 20), consolidations (n = 18), and ground-glass opacity (GGO; n = 16), were collected. Three image sets were prepared: the original images and those reconstructed from 64-fold reduced data using SR and conventional linear interpolation (LI). The mean squared error (MSE) was calculated to measure similarity between the reconstructed and original images, and image noise was quantified. Three thoracic radiologists evaluated the quality of each image and decided whether any abnormalities were present. RESULTS The SR images were more similar to the original images than the LI-reconstructed images (MSE: 9269 ± 1015 vs. 9429 ± 1057; P = .02). The SR images showed lower measured noise and were rated as less noisy by the three radiologists than both the original and LI-reconstructed images (all P < .01). The radiologists' pooled sensitivity with the SR-reconstructed images was not significantly different from that with the original images for detecting pneumothorax (SR vs. original, 90.2% [46/51] vs. 96.1% [49/51]; P = .19), nodule (90.0% [54/60] vs. 85.0% [51/60]; P = .26), consolidation (100% [54/54] vs. 96.3% [52/54]; P = .50), and GGO (91.7% [44/48] vs. 95.8% [46/48]; P = .69). CONCLUSIONS SR-reconstructed chest radiographs from 64-fold reduced data showed a lower noise level than the original images, with equivalent sensitivity for detecting major abnormalities. ADVANCES IN KNOWLEDGE This is the first study to apply super-resolution to data reduction of chest radiographs.
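Sixty-four-fold reduction corresponds to keeping one value per 8 × 8 block. The sketch below implements the reduction by block averaging plus a naive reconstruction baseline; the paper's SR network and linear interpolation are replaced here by nearest-neighbour upsampling purely to keep the sketch dependency-free, so the numbers are illustrative only.

```python
import numpy as np

def reduce_64x(img):
    """8x8 block averaging: keeps 1/64 of the original data."""
    h, w = img.shape
    return img.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))

def upsample_nearest(small, factor=8):
    """Naive reconstruction baseline (nearest-neighbour; the paper
    compares an SR CNN against linear interpolation instead)."""
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

rng = np.random.default_rng(2)
orig = rng.normal(size=(64, 64))                 # toy radiograph patch
recon = upsample_nearest(reduce_64x(orig))       # reconstruct to full grid
mse = np.mean((recon - orig) ** 2)               # similarity metric, as above
```

The MSE between `recon` and `orig` is the same similarity measure the study reports for its SR- and LI-reconstructed sets.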
Affiliation(s)
- Ju Gang Nam
- Department of Radiology, Seoul National University Hospital and College of Medicine, Seoul 03080, Republic of Korea
- Artificial Intelligence Collaborative Network, Seoul National University Hospital, Seoul 03080, Republic of Korea
- Hyewon Choi
- Department of Radiology, Chung-Ang University Hospital and College of Medicine, Seoul 06973, Republic of Korea
- Wonju Hong
- Department of Radiology, Hallym University Sacred Heart Hospital, Anyang 14068, Republic of Korea
- Jongsoo Park
- Department of Radiology, Yeungnam University Medical Center, Daegu 42415, Republic of Korea
- Jin Mo Goo
- Department of Radiology, Seoul National University Hospital and College of Medicine, Seoul 03080, Republic of Korea
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03080, Republic of Korea
- Jae Sung Lee
- Brightonix Imaging Inc, Seoul 04782, Republic of Korea
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03080, Republic of Korea
- Department of Nuclear Medicine, Seoul National University Hospital and College of Medicine, Seoul 03080, Republic of Korea
- Chang Min Park
- Department of Radiology, Seoul National University Hospital and College of Medicine, Seoul 03080, Republic of Korea
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03080, Republic of Korea
- Institute of Medical and Biological Engineering, Seoul National University Medical Research Center, Seoul 03080, Republic of Korea
8. Higaki T. [[CT] 5. Various CT Image Reconstruction Methods Applying Deep Learning] (in Japanese). Nihon Hoshasen Gijutsu Gakkai Zasshi 2024;80:112-117. [PMID: 38246633] [DOI: 10.6009/jjrt.2024-2309]
Affiliation(s)
- Toru Higaki
- Graduate School of Advanced Science and Engineering, Hiroshima University
9. Liu X, Su S, Gu W, Yao T, Shen J, Mo Y. Super-Resolution Reconstruction of CT Images Based on Multi-scale Information Fused Generative Adversarial Networks. Ann Biomed Eng 2024;52:57-70. [PMID: 38064116] [DOI: 10.1007/s10439-023-03412-w]
Abstract
The popularization and widespread use of computed tomography (CT) in medicine have drawn public attention to the potential radiation exposure endured by patients. Reducing the radiation dose may lead to scattering noise and low resolution, which can adversely affect radiologists' judgment. Hence, this paper introduces a new network called PANet-UP-ESRGAN (PAUP-ESRGAN), specifically designed to obtain low-dose CT (LDCT) images with high peak signal-to-noise ratio (PSNR) and high resolution (HR). The model was trained on synthetic medical image data based on a generative adversarial network (GAN). A degradation modeling process was introduced to accurately represent realistic degradation complexities. To reconstruct image edge textures, a pyramidal attention module called PANet was added before the middle of the multiple residual dense blocks (MRDB) in the generator to focus on high-frequency image information. A U-Net discriminator with spectral normalization was also designed to improve its efficiency and stabilize the training dynamics. The proposed PAUP-ESRGAN model was evaluated on abdomen and lung image datasets and demonstrated a significant improvement in model robustness and LDCT image detail reconstruction compared to the latest Real-ESRGAN network. Results showed that the mean PSNR increased by 19.1%, 25.05%, and 21.25%, the mean SSIM increased by 0.4% at each scale, and the mean NRMSE decreased by 0.25%, 0.25%, and 0.35% at the 2×, 4×, and 8× super-resolution scales, respectively. Experimental results demonstrate that our method outperforms state-of-the-art super-resolution methods in restoring CT images with respect to the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and normalized root-mean-square error (NRMSE) indices.
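Two of the scalar metrics reported above, PSNR and NRMSE, are straightforward to compute directly. A small sketch (SSIM is omitted since it requires windowed statistics; `data_range=1.0` is an assumption for images normalized to [0, 1]):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(data_range**2 / mse)

def nrmse(ref, test):
    """Root-mean-square error normalized by the reference's range."""
    return np.sqrt(np.mean((ref - test) ** 2)) / (ref.max() - ref.min())

rng = np.random.default_rng(3)
ref = rng.random((32, 32))                                   # toy reference
noisy = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)  # mild noise
noisier = np.clip(ref + rng.normal(0, 0.2, ref.shape), 0, 1) # heavy noise
```

Higher PSNR and lower NRMSE both indicate a reconstruction closer to the reference, which is why the two metrics move in opposite directions in the results above.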
Affiliation(s)
- Xiaobao Liu
- Faculty of Mechanical and Electrical Engineering, Kunming University of Science and Technology, No. 727, Jingming South Road, Chenggong District, Kunming, 650500, China
- Shuailin Su
- Faculty of Mechanical and Electrical Engineering, Kunming University of Science and Technology, No. 727, Jingming South Road, Chenggong District, Kunming, 650500, China
- Wenjuan Gu
- Faculty of Mechanical and Electrical Engineering, Kunming University of Science and Technology, No. 727, Jingming South Road, Chenggong District, Kunming, 650500, China
- Tingqiang Yao
- Faculty of Mechanical and Electrical Engineering, Kunming University of Science and Technology, No. 727, Jingming South Road, Chenggong District, Kunming, 650500, China
- Jihong Shen
- The First Department of Urology, The First Affiliated Hospital of Kunming Medical University, 295 Xichang Road, Chenggong District, Kunming, 650032, China
- Yin Mo
- The First Department of Urology, The First Affiliated Hospital of Kunming Medical University, 295 Xichang Road, Chenggong District, Kunming, 650032, China
10. Tang W, Li Z, Zou Y, Liao J, Li B. A multimodal pipeline for image correction and registration of mass spectrometry imaging with microscopy. Anal Chim Acta 2023;1283:341969. [PMID: 37977791] [DOI: 10.1016/j.aca.2023.341969]
Abstract
The integration of matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI MSI) and histology plays a pivotal role in advancing our understanding of complex heterogeneous tissues, providing a comprehensive description of biological tissue with both wide molecular coverage and high lateral resolution. Herein, we propose a novel strategy for the correction and registration of MALDI MSI data with hematoxylin & eosin (H&E) staining images. To overcome the discrepancy in spatial resolution between the two imaging modalities, a deep-learning-based interpolation algorithm for MALDI MSI data was constructed, which enables spatial coherence and subsequent orientation matching between images. Coupled with an affine transformation (AT) and a subsequent moving least squares algorithm, the two types of images from one rat brain tissue section were aligned automatically with high accuracy. Moreover, we demonstrated the practicality of the developed pipeline by applying it to a rat cerebral ischemia-reperfusion injury model, which can help decipher the link between molecular metabolism and pathological interpretation at the microregion level. This new approach offers the chance for other types of bioimaging to boost the field of multimodal image fusion.
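The affine-transformation step of a registration pipeline like the one above can be estimated by least squares from matched point pairs (e.g., landmarks visible in both the MSI and H&E images). The sketch below recovers a known rotation-scale-translation; it is a generic illustration of affine fitting, not the authors' pipeline, and the moving-least-squares refinement is omitted.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points to dst points.
    src, dst: (n, 2) arrays. Returns a 2x3 matrix A such that
    dst ≈ [src | 1] @ A.T (at least 3 non-collinear pairs needed)."""
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T

# Synthesize correspondences under a known rotation + scale + shift
theta, s, t = 0.3, 1.2, np.array([5.0, -2.0])
R = s * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
src = np.random.default_rng(4).random((10, 2)) * 100  # toy landmarks
dst = src @ R.T + t
A = fit_affine(src, dst)
```

Because the synthetic correspondences are exactly affine, the fit recovers the linear part and translation essentially to machine precision; with real, noisy landmarks the residual motivates the local (moving least squares) refinement stage.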
Affiliation(s)
- Weiwei Tang
- State Key Laboratory of Natural Medicines and School of Traditional Chinese Pharmacy, China Pharmaceutical University, Nanjing, 210009, China
- Zhen Li
- School of Science, China Pharmaceutical University, Nanjing, 211198, China
- Yuchen Zou
- State Key Laboratory of Natural Medicines and School of Traditional Chinese Pharmacy, China Pharmaceutical University, Nanjing, 210009, China
- Jun Liao
- School of Science, China Pharmaceutical University, Nanjing, 211198, China
- Bin Li
- State Key Laboratory of Natural Medicines and School of Traditional Chinese Pharmacy, China Pharmaceutical University, Nanjing, 210009, China
11. Ohashi K, Nagatani Y, Yoshigoe M, Iwai K, Tsuchiya K, Hino A, Kida Y, Yamazaki A, Ishida T. Applicability Evaluation of Full-Reference Image Quality Assessment Methods for Computed Tomography Images. J Digit Imaging 2023;36:2623-2634. [PMID: 37550519] [PMCID: PMC10584745] [DOI: 10.1007/s10278-023-00875-0]
Abstract
Image quality assessment (IQA) is an important task for providing appropriate medical care. Full-reference IQA (FR-IQA) methods, such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), are often used to evaluate imaging conditions, reconstruction conditions, and image processing algorithms, including noise reduction and super-resolution technology. However, these IQA methods may be inapplicable to medical images because they were designed for natural images. Therefore, this study aimed to investigate the correlation between objective assessment by several FR-IQA methods and human subjective assessment for computed tomography (CT) images. For the evaluation, 210 distorted images were created from six original images using two types of degradation: noise and blur. We employed nine widely used FR-IQA methods for natural images: PSNR, SSIM, feature similarity (FSIM), information fidelity criterion (IFC), visual information fidelity (VIF), noise quality measure (NQM), visual signal-to-noise ratio (VSNR), multi-scale SSIM (MSSSIM), and information content-weighted SSIM (IWSSIM). Six observers performed subjective assessments using the double-stimulus continuous quality scale (DSCQS) method. The performance of the IQA methods was quantified using Pearson's linear correlation coefficient (PLCC), the Spearman rank-order correlation coefficient (SROCC), and root-mean-square error (RMSE). All nine FR-IQA methods developed for natural images were strongly correlated with the subjective assessment (PLCC and SROCC > 0.8), indicating that these methods can be applied to CT images. In particular, VIF had the best values for all three items: PLCC, SROCC, and RMSE. These results suggest that VIF provides the most accurate alternative to subjective assessments for CT images.
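PLCC and SROCC, the agreement measures used above, can be computed directly: SROCC is simply the Pearson correlation of the ranks. A minimal sketch with toy objective/subjective score lists (no tie correction; the scores are invented for illustration):

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum())

def srocc(x, y):
    """Spearman rank-order correlation: PLCC of the ranks."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(x), rank(y))

# Toy FR-IQA scores vs. subjective DSCQS-style ratings (monotone + noise)
obj = np.array([0.91, 0.85, 0.78, 0.70, 0.62, 0.55, 0.43])
subj = np.array([4.8, 4.5, 4.0, 3.6, 2.9, 2.5, 1.7])
```

SROCC hits 1.0 whenever the objective metric ranks the images exactly as the observers did, even if the relationship is nonlinear; PLCC additionally rewards a linear relationship, which is why studies like this one report both.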
Affiliation(s)
- Kohei Ohashi
- Division of Health Sciences, Osaka University Graduate School of Medicine, Suita, Japan.
- Department of Radiology, Shiga University of Medical Science Hospital, Otsu, Japan.
- Yukihiro Nagatani
- Department of Radiology, Shiga University of Medical Science Hospital, Otsu, Japan
- Makoto Yoshigoe
- Department of Radiology, Shiga University of Medical Science Hospital, Otsu, Japan
- Kyohei Iwai
- Department of Radiology, Shiga University of Medical Science Hospital, Otsu, Japan
- Keiko Tsuchiya
- Department of Radiology, Omihachiman Community Medical Center, Omihachiman, Japan
- Atsunobu Hino
- Department of Radiology, Nagahama Red Cross Hospital, Nagahama, Japan
- Yukako Kida
- Department of Radiology, Shiga University of Medical Science Hospital, Otsu, Japan
- Asumi Yamazaki
- Division of Health Sciences, Osaka University Graduate School of Medicine, Suita, Japan
- Takayuki Ishida
- Division of Health Sciences, Osaka University Graduate School of Medicine, Suita, Japan
12
Nagayama Y, Emoto T, Kato Y, Kidoh M, Oda S, Sakabe D, Funama Y, Nakaura T, Hayashi H, Takada S, Uchimura R, Hatemura M, Tsujita K, Hirai T. Improving image quality with super-resolution deep-learning-based reconstruction in coronary CT angiography. Eur Radiol 2023; 33:8488-8500. [PMID: 37432405 DOI: 10.1007/s00330-023-09888-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 03/22/2023] [Accepted: 04/23/2023] [Indexed: 07/12/2023]
Abstract
OBJECTIVES To evaluate the effect of super-resolution deep-learning-based reconstruction (SR-DLR) on the image quality of coronary CT angiography (CCTA). METHODS Forty-one patients who underwent CCTA using a 320-row scanner were retrospectively included. Images were reconstructed with hybrid (HIR), model-based iterative reconstruction (MBIR), normal-resolution deep-learning-based reconstruction (NR-DLR), and SR-DLR algorithms. For each image series, image noise, and contrast-to-noise ratio (CNR) at the left main trunk, right coronary artery, left anterior descending artery, and left circumflex artery were quantified. Blooming artifacts from calcified plaques were measured. Image sharpness, noise magnitude, noise texture, edge smoothness, overall quality, and delineation of the coronary wall, calcified and noncalcified plaques, cardiac muscle, and valves were subjectively ranked on a 4-point scale (1, worst; 4, best). The quantitative parameters and subjective scores were compared among the four reconstructions. Task-based image quality was assessed with a physical evaluation phantom. The detectability index for the objects simulating the coronary lumen, calcified plaques, and noncalcified plaques was calculated from the noise power spectrum (NPS) and task-based transfer function (TTF). RESULTS SR-DLR yielded significantly lower image noise and blooming artifacts with higher CNR than HIR, MBIR, and NR-DLR (all p < 0.001). The best subjective scores for all the evaluation criteria were attained with SR-DLR, with significant differences from all other reconstructions (p < 0.001). In the phantom study, SR-DLR provided the highest NPS average frequency, TTF50%, and detectability for all task objects. CONCLUSION SR-DLR considerably improved the subjective and objective image qualities and object detectability of CCTA relative to HIR, MBIR, and NR-DLR algorithms. 
CLINICAL RELEVANCE STATEMENT The novel SR-DLR algorithm has the potential to facilitate accurate assessment of coronary artery disease on CCTA by providing excellent image quality in terms of spatial resolution, noise characteristics, and object detectability.
KEY POINTS
• SR-DLR designed for CCTA improved image sharpness, noise property, and delineation of cardiac structures with reduced blooming artifacts from calcified plaques relative to HIR, MBIR, and NR-DLR.
• In the task-based image-quality assessments, SR-DLR yielded better spatial resolution, noise property, and detectability for objects simulating the coronary lumen, coronary calcifications, and noncalcified plaques than other reconstruction techniques.
• The image reconstruction times of SR-DLR were shorter than those of MBIR, potentially serving as a novel standard-of-care reconstruction technique for CCTA performed on a 320-row CT scanner.
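The detectability index mentioned in the phantom experiment is conventionally computed with a model observer; the sketch below uses a generic non-prewhitening formulation on a hypothetical 1-D frequency grid (the paper's exact observer model, task functions, and units are not given here, so every input is an assumption):

```python
import numpy as np

def npw_detectability(task_w: np.ndarray, ttf: np.ndarray, nps: np.ndarray,
                      df: float = 1.0) -> float:
    """Non-prewhitening model-observer detectability index d', from a task
    function W(f), the task-based transfer function TTF(f), and the noise
    power spectrum NPS(f), all sampled on the same spatial-frequency grid."""
    num = (np.sum(task_w ** 2 * ttf ** 2) * df) ** 2
    den = np.sum(task_w ** 2 * ttf ** 2 * nps) * df
    return float(np.sqrt(num / den))

# Hypothetical 1-D illustration: lowering the NPS (better denoising) raises d'.
f = np.linspace(0.01, 1.0, 100)        # spatial frequency grid (schematic)
w = 1.0 / f                            # schematic task function of a disk-like object
ttf = np.exp(-(f / 0.5) ** 2)          # resolution falls off with frequency
nps_high = np.full_like(f, 100.0)      # noisier reconstruction
nps_low = np.full_like(f, 25.0)        # quieter reconstruction
assert npw_detectability(w, ttf, nps_low) > npw_detectability(w, ttf, nps_high)
```

With a flat NPS, quartering the noise power doubles d', which is the sense in which SR-DLR's lower noise and higher TTF both feed directly into the detectability index.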
Affiliation(s)
- Yasunori Nagayama
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan.
- Takafumi Emoto
- Department of Central Radiology, Kumamoto University Hospital, Kumamoto, Japan
- Yuki Kato
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Masafumi Kidoh
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Seitaro Oda
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Daisuke Sakabe
- Department of Central Radiology, Kumamoto University Hospital, Kumamoto, Japan
- Yoshinori Funama
- Department of Medical Radiation Sciences, Faculty of Life Sciences, Kumamoto University, Kumamoto, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Hidetaka Hayashi
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Sentaro Takada
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Ryutaro Uchimura
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
- Masahiro Hatemura
- Department of Central Radiology, Kumamoto University Hospital, Kumamoto, Japan
- Kenichi Tsujita
- Department of Cardiovascular Medicine, Graduate School of Medical Sciences, Kumamoto University, Kumamoto, Japan
- Toshinori Hirai
- Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto University, 1-1-1, Honjo, Chuo-Ku, Kumamoto, 860-8556, Japan
13
Choi HS, Kim JS, Whangbo TK, Eun SJ. Improved Detection of Urolithiasis Using High-Resolution Computed Tomography Images by a Vision Transformer Model. Int Neurourol J 2023; 27:S99-103. [PMID: 38048824 DOI: 10.5213/inj.2346292.146] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2023] [Accepted: 11/11/2023] [Indexed: 12/06/2023] Open
Abstract
PURPOSE Urinary stones cause lateral abdominal pain and are a prevalent condition among younger age groups. The diagnosis typically involves assessing symptoms, conducting physical examinations, performing urine tests, and utilizing radiological imaging. Artificial intelligence models have demonstrated remarkable capabilities in detecting stones. However, due to insufficient datasets, the performance of these models has not reached a level suitable for practical application. Consequently, this study introduces a vision transformer (ViT)-based pipeline for detecting urinary stones, using computed tomography images with augmentation. METHODS The super-resolution convolutional neural network (SRCNN) model was employed to enhance the resolution of a given dataset, followed by data augmentation using CycleGAN. Subsequently, the ViT model facilitated the detection and classification of urinary tract stones. The model's performance was evaluated using accuracy, precision, and recall as metrics. RESULTS The deep learning model based on ViT showed superior performance compared to other existing models. Furthermore, the performance increased with the size of the backbone model. CONCLUSION The study proposes a way to utilize medical data to improve the diagnosis of urinary tract stones. SRCNN was used for data preprocessing to enhance resolution, while CycleGAN was utilized for data augmentation. The ViT model was utilized for stone detection, and its performance was validated through metrics such as accuracy, sensitivity, specificity, and the F1 score. It is anticipated that this research will aid in the early diagnosis and treatment of urinary tract stones, thereby improving the efficiency of medical personnel.
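The metrics used to validate the detector (accuracy, precision, recall/sensitivity, specificity, F1) are plain confusion-matrix arithmetic; a sketch with hypothetical counts:

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy, precision, recall (sensitivity), specificity, and F1 score
    from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}

# Hypothetical counts for illustration only (not results from the study).
m = classification_metrics(tp=90, fp=10, fn=10, tn=90)
print(m["accuracy"], m["f1"])  # 0.9 0.9
```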
Affiliation(s)
- Hyoung Sun Choi
- Department of Computer Science, Gachon University, Seongnam, Korea
- Jae Seoung Kim
- Health IT Research Center, Gachon University Gil Medical Center, Incheon, Korea
- Sung Jong Eun
- Digital Health Industry Team, National IT Industry Promotion Agency, Jincheon, Korea
14
Chan TJ, Rajapakse CS. A Super-Resolution Diffusion Model for Recovering Bone Microstructure from CT Images. Radiol Artif Intell 2023; 5:e220251. [PMID: 38074790 PMCID: PMC10698592 DOI: 10.1148/ryai.220251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Revised: 08/02/2023] [Accepted: 09/01/2023] [Indexed: 02/12/2024]
Abstract
Purpose To use a diffusion-based deep learning model to recover bone microstructure from low-resolution images of the proximal femur, a common site of traumatic osteoporotic fractures. Materials and Methods Training and testing data in this retrospective study consisted of high-resolution cadaveric micro-CT scans (n = 26), which served as ground truth. The images were downsampled prior to use for model training. The model was used to increase spatial resolution in these low-resolution images threefold, from 0.72 mm to 0.24 mm, sufficient to visualize bone microstructure. Model performance was validated using microstructural metrics and finite element simulation-derived stiffness of trabecular regions. Performance was also evaluated with several image quality assessment metrics. Correlations between model performance and ground truth were assessed using intraclass correlation coefficients (ICCs) and Pearson correlation coefficients. Results Compared with popular deep learning baselines, the proposed model exhibited greater accuracy (mean ICC of proposed model, 0.92 vs ICC of next best method, 0.83) and lower bias (mean difference in means, 3.80% vs 10.00%, respectively) across the physiologic metrics. Two gradient-based image quality metrics strongly correlated with accuracy across structural and mechanical criteria (r > 0.89). Conclusion The proposed method may enable accurate measurements of bone structure and strength with a radiation dose on par with current clinical imaging protocols, improving the viability of clinical CT for assessing bone health. Keywords: CT, Image Postprocessing, Skeletal-Appendicular, Long Bones, Radiation Effects, Quantification, Prognosis, Semisupervised Learning. Online supplemental material is available for this article. © RSNA, 2023.
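Training pairs for this kind of SR model are typically produced by degrading the ground-truth scans; a minimal block-average downsampling sketch for the threefold factor described above (the study's actual downsampling operator is not specified here, so block averaging is an assumption):

```python
import numpy as np

def downsample(image: np.ndarray, factor: int = 3) -> np.ndarray:
    """Block-average downsampling, e.g. 0.24 mm -> 0.72 mm pixels for factor 3.
    A simple stand-in for the degradation used to create low-resolution inputs."""
    h, w = image.shape
    h2, w2 = h - h % factor, w - w % factor            # crop to a multiple of factor
    v = image[:h2, :w2]
    return v.reshape(h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))

hr = np.arange(36, dtype=np.float64).reshape(6, 6)     # toy "micro-CT" patch
lr = downsample(hr, 3)
print(lr.shape)  # (2, 2)
```

The SR model then learns the inverse map from `lr`-like patches back to `hr`-like patches; only that inverse is learned, the degradation itself stays fixed.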
Affiliation(s)
- Trevor J Chan
- From the Departments of Bioengineering (T.J.C.), Radiology (T.J.C., C.S.R.), and Orthopedic Surgery (C.S.R.), University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104-6243
- Chamith S Rajapakse
- From the Departments of Bioengineering (T.J.C.), Radiology (T.J.C., C.S.R.), and Orthopedic Surgery (C.S.R.), University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104-6243
15
Alruily M, Said W, Mostafa AM, Ezz M, Elmezain M. Breast Ultrasound Images Augmentation and Segmentation Using GAN with Identity Block and Modified U-Net 3. SENSORS (BASEL, SWITZERLAND) 2023; 23:8599. [PMID: 37896692 PMCID: PMC10610596 DOI: 10.3390/s23208599] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/12/2023] [Revised: 10/10/2023] [Accepted: 10/16/2023] [Indexed: 10/29/2023]
Abstract
One of the most prevalent diseases affecting women in recent years is breast cancer. Early breast cancer detection can aid treatment, lower risk, and improve outcomes. This paper presents a hybrid approach for augmenting and segmenting breast ultrasound images. The framework contains two main stages: augmentation and segmentation. The augmentation of the ultrasound images uses a generative adversarial network (GAN) with a nonlinear identity block, label smoothing, and a new loss function. The segmentation applies a modified U-Net 3+. The hybrid approach achieves efficient results in both steps compared with the other available methods for the same task. The modified GAN with the nonlinear identity block outperforms other modified GANs in the ultrasound augmentation process, such as speckle GAN, UltraGAN, and deep convolutional GAN. The modified U-Net 3+ likewise outperforms other U-Net architectures in the segmentation process. The GAN with nonlinear identity blocks achieved an inception score of 14.32 and a Fréchet inception distance (FID) of 41.86 in the augmentation process; the lower FID and higher inception score indicate the model's efficiency compared with other GAN variants. The modified U-Net 3+ architecture achieved a Dice score of 95.49% and an accuracy of 95.67%.
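FID compares the Gaussian statistics of feature embeddings of real and generated images; the sketch below makes the simplifying assumption of diagonal covariances (the full metric uses complete Inception-feature covariance matrices and a matrix square root, which this toy version deliberately avoids):

```python
import numpy as np

def fid_diagonal(mu1, sigma1, mu2, sigma2) -> float:
    """Frechet distance between two Gaussians with DIAGONAL covariances.
    General form: FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2});
    with diagonal S the trace term collapses to sum (sqrt(s1) - sqrt(s2))^2."""
    mu1, sigma1, mu2, sigma2 = map(np.asarray, (mu1, sigma1, mu2, sigma2))
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.sum((np.sqrt(sigma1) - np.sqrt(sigma2)) ** 2))

# Identical feature statistics give FID = 0; the score grows as they diverge.
print(fid_diagonal([0, 0], [1, 1], [0, 0], [1, 1]))  # 0.0
print(fid_diagonal([0, 0], [1, 1], [1, 0], [1, 1]))  # 1.0
```

Lower is better, which is why the reported FID of 41.86 being smaller than competing GANs' supports the augmentation claim.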
Affiliation(s)
- Meshrif Alruily
- College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia; (M.A.); (M.E.)
- Wael Said
- Computer Science Department, Faculty of Computers and Informatics, Zagazig University, Zagazig 44511, Egypt
- Computer Science Department, College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- Ayman Mohamed Mostafa
- College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Mohamed Ezz
- College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Mahmoud Elmezain
- Computer Science Department, Faculty of Science, Tanta University, Tanta 31527, Egypt
- Computer Science Department, College of Computer Science and Engineering, Taibah University, Yanbu 966144, Saudi Arabia
16
Abstract
In 1971, the first patient CT examination by Ambrose and Hounsfield paved the way for volumetric imaging not only of the brain but of the entire body. From the initial 5-minute scan for a 180° rotation to today's 0.24-second scan for a 360° rotation, CT technology continues to reinvent itself. This article describes key historical milestones in CT technology from the earliest days of CT to the present, with a look toward the future of this essential imaging modality. After a review of the beginnings of CT and its early adoption, the technical steps taken to decrease scan times, both per image and per examination, are reviewed. Novel geometries such as electron-beam CT and dual-source CT have also been developed in the quest for ever-faster scans and better in-plane temporal resolution. The focus of the past 2 decades on radiation dose optimization and management led to changes in how exposure parameters such as tube current and tube potential are prescribed, such that today, examinations are more customized to the specific patient and diagnostic task than ever before. In the mid-2000s, CT expanded its reach from gray-scale to color with the clinical introduction of dual-energy CT. Today's most recent technical innovation, photon-counting CT, offers greater capabilities in multienergy CT as well as spatial resolution as good as 125 μm. Finally, artificial intelligence is poised to impact both the creation and processing of CT images, as well as automating many tasks to provide greater accuracy and reproducibility in quantitative applications.
Affiliation(s)
- Cynthia H. McCollough
- Department of Radiology, Mayo Clinic, 200 First St SW Rochester, MN, United States 55905
17
Missert AD, Hsieh SS, Ferrero A, McCollough CH. Supervised Learning for CT Denoising and Deconvolution Without High-Resolution Reference Images. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2023:2023.08.31.23294861. [PMID: 37693583 PMCID: PMC10491378 DOI: 10.1101/2023.08.31.23294861] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/12/2023]
Abstract
Purpose Convolutional neural networks (CNNs) have been proposed for super-resolution in CT, but training of CNNs requires high-resolution reference data. Higher spatial resolution can also be achieved using deconvolution, but conventional deconvolution approaches amplify noise. We develop a CNN that mitigates increasing noise and that does not require higher-resolution reference images. Methods Our model includes a noise reduction CNN and a deconvolution CNN that are separately trained. The noise reduction CNN is a U-Net, similar to other noise reduction CNNs found in the literature. The deconvolution CNN uses an autoencoder, where the decoder is fixed and provided as a hyperparameter that represents the system point spread function. The encoder is trained to provide a deconvolution that does not amplify noise. Ringing can occur from deconvolution but is controlled with a difference of gradients loss function term. Our technique was demonstrated on a variety of patient images and on ex vivo kidney stones. Results The noise reduction and deconvolution CNNs produced visually sharper images at low noise. In ex vivo mixed kidney stones, better visual delineation of the kidney stone components could be seen. Conclusions A noise reduction and deconvolution CNN improves spatial resolution and reduces noise without requiring higher-resolution reference images.
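The fixed-decoder idea, where the known point spread function plays the decoder and only the restoring map is solved for, has a simple non-learned analogue: least-squares deconvolution by gradient descent. A 1-D NumPy sketch (illustrative only; the paper trains a CNN encoder and adds a difference-of-gradients ringing penalty, neither of which is reproduced here):

```python
import numpy as np

def deconvolve_gd(y: np.ndarray, psf: np.ndarray,
                  n_iter: int = 500, lr: float = 0.5) -> np.ndarray:
    """Least-squares deconvolution: gradient descent on ||psf * x - y||^2.
    The forward blur (the "decoder") stays fixed; optimization plays the
    role the trained encoder plays in the paper."""
    x = y.copy()
    psf_flip = psf[::-1]                               # adjoint of convolution
    for _ in range(n_iter):
        residual = np.convolve(x, psf, mode="same") - y
        x -= lr * np.convolve(residual, psf_flip, mode="same")
    return x

# Toy 1-D example: blur a step edge, then recover it.
truth = np.concatenate([np.zeros(20), np.ones(20)])
psf = np.array([0.25, 0.5, 0.25])                      # normalized blur kernel
blurred = np.convolve(truth, psf, mode="same")
recovered = deconvolve_gd(blurred, psf)
mse_blur = np.mean((blurred - truth) ** 2)
mse_rec = np.mean((recovered - truth) ** 2)
print(mse_rec < mse_blur)  # True: the restored edge is closer to the truth
```

Plain least squares like this amplifies any noise in `y`, which is exactly the failure mode the paper's paired noise-reduction CNN is there to prevent.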
18
Qiu D, Cheng Y, Wang X. Medical image super-resolution reconstruction algorithms based on deep learning: A survey. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 238:107590. [PMID: 37201252 DOI: 10.1016/j.cmpb.2023.107590] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 03/21/2023] [Accepted: 05/05/2023] [Indexed: 05/20/2023]
Abstract
BACKGROUND AND OBJECTIVE Given the high-resolution (HR) requirements of medical images in clinical practice, super-resolution (SR) reconstruction algorithms that operate on low-resolution (LR) medical images have become a research hotspot. Such methods can significantly improve image resolution without upgrading hardware, so a review of them is of great significance. METHODS We survey SR reconstruction algorithms specific to medical imaging, organized by modality: magnetic resonance (MR) images, computed tomography (CT) images, and ultrasound images. First, we analyze the research progress of SR reconstruction algorithms, then summarize and compare the different types of algorithms. Second, we introduce the evaluation metrics used with SR reconstruction algorithms. Finally, we discuss the development trends of SR reconstruction technology in the medical field. RESULTS Deep-learning-based medical image SR reconstruction can provide richer lesion information, relieve experts' diagnostic workload, and improve diagnostic efficiency and accuracy. CONCLUSION Deep-learning-based medical image SR reconstruction helps improve the quality of care, supports expert diagnosis, and lays a solid foundation for subsequent computer analysis and recognition tasks, which is of great significance for improving diagnostic efficiency and realizing intelligent medical care.
Affiliation(s)
- Defu Qiu
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Yuhu Cheng
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Xuesong Wang
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
19
Improving the diagnostic performance of computed tomography angiography for intracranial large arterial stenosis by a novel super-resolution algorithm based on multi-scale residual denoising generative adversarial network. Clin Imaging 2023; 96:1-8. [PMID: 36731372 DOI: 10.1016/j.clinimag.2023.01.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Revised: 01/12/2023] [Accepted: 01/18/2023] [Indexed: 01/30/2023]
Abstract
BACKGROUND Computed tomography angiography (CTA) is widely used because it is rapid and accessible. However, CTA is inferior to digital subtraction angiography (DSA) in the diagnosis of intracranial artery stenosis or occlusion, and DSA is an invasive examination, so we optimized the quality of cephalic CTA images. METHODS We used 5000 CTA images to train a multi-scale residual denoising generative adversarial network (MRDGAN). Then, 71 CTA images with intracranial large arterial stenosis were processed with super-resolution GAN (SRGAN), enhanced super-resolution GAN (ESRGAN), and the trained MRDGAN, respectively. The peak signal-to-noise ratio (PSNR) and structural similarity index measurement (SSIM) of the SRGAN, ESRGAN, MRDGAN, and original CTA images were measured. The quality of the MRDGAN and original images was visually assessed using a 4-point scale. The diagnostic coherence of DSA with the MRDGAN and original images was analyzed. RESULTS The PSNR was significantly higher in the MRDGAN CTA images (35.96 ± 1.51) than in the original (31.51 ± 1.43), SRGAN (25.75 ± 1.18), and ESRGAN (30.36 ± 1.05) CTA images (all P < 0.001). The SSIM was significantly higher in the MRDGAN CTA images (0.95 ± 0.02) than in the SRGAN (0.88 ± 0.03) and ESRGAN (0.90 ± 0.02) CTA images (all P < 0.01). The visual assessment score was significantly higher for the MRDGAN CTA images (3.52 ± 0.58) than for the original CTA images (2.39 ± 0.69) (P < 0.05). The diagnostic coherence between MRDGAN and DSA (κ = 0.89) was superior to that between the original images and DSA (κ = 0.62). CONCLUSION Our MRDGAN can effectively optimize original CTA images and improve their clinical diagnostic value for intracranial large artery stenosis.
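The diagnostic-coherence values quoted above (κ = 0.89 vs κ = 0.62) are Cohen's kappa statistics, which measure agreement between two raters beyond what chance predicts; a minimal sketch of the computation with toy ratings:

```python
import numpy as np

def cohens_kappa(ratings_a, ratings_b) -> float:
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    a = np.asarray(ratings_a)
    b = np.asarray(ratings_b)
    labels = np.union1d(a, b)
    po = float(np.mean(a == b))                                   # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in labels)   # chance agreement
    return float((po - pe) / (1 - pe))

# Toy ratings (0 = no stenosis, 1 = stenosis). Perfect agreement gives kappa = 1.
print(cohens_kappa([0, 1, 1, 0], [0, 1, 1, 0]))  # 1.0
```

Kappa of 1 means perfect agreement, 0 means chance-level agreement, so the rise from 0.62 to 0.89 indicates substantially better coherence with DSA after MRDGAN processing.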
20
Kim H, Lee H, Lee D. Deep learning-based computed tomographic image super-resolution via wavelet embedding. Radiat Phys Chem Oxf Engl 1993 2023; 205:110718. [PMID: 37384306 PMCID: PMC10299762 DOI: 10.1016/j.radphyschem.2022.110718] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Efforts to realize high-resolution medical images have been made steadily. In particular, super-resolution technology based on deep learning has recently achieved excellent results in computer vision. In this study, we developed a model that can dramatically increase the spatial resolution of medical images using deep learning, and we demonstrate the superiority of the proposed model by analyzing it quantitatively. We simulated computed tomography images with various detector pixel sizes and tried to restore the low-resolution images to high resolution. We set the pixel size to 0.5, 0.8, and 1 mm² for the low-resolution images, and the high-resolution images used as ground truth were simulated with a 0.25 mm² pixel size. The deep learning model that we used was a fully convolutional neural network based on a residual structure. The resulting images demonstrated that the proposed super-resolution convolutional neural network improves image resolution significantly. We also confirmed that PSNR and MTF improved by up to 38% and 65%, respectively. The quality of the predicted image does not differ significantly with the quality of the input image. In addition, the proposed technique not only increases image resolution but also has some noise-reduction effect. In conclusion, we developed deep learning architectures for improving the image resolution of computed tomography images and quantitatively confirmed that the proposed technique effectively improves image resolution without distorting anatomical structures.
Affiliation(s)
- Hyeongsub Kim
- School of Interdisciplinary Bioscience and Bioengineering, Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang 37674, Republic of Korea
- Deepnoid Inc., Seoul 08376, South Korea
- Haenghwa Lee
- Department of Neurosurgery, Ilsan Paik Hospital, College of Medicine, Inje University, Juhwa-ro, Ilsanseo-gu, Goyang-si, Gyeonggi-do 10380, Republic of Korea
- Donghoon Lee
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
21
Mohammad-Rahimi H, Vinayahalingam S, Mahmoudinia E, Soltani P, Bergé SJ, Krois J, Schwendicke F. Super-Resolution of Dental Panoramic Radiographs Using Deep Learning: A Pilot Study. Diagnostics (Basel) 2023; 13:996. [PMID: 36900140 PMCID: PMC10000385 DOI: 10.3390/diagnostics13050996] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Revised: 02/27/2023] [Accepted: 03/02/2023] [Indexed: 03/08/2023] Open
Abstract
Using super-resolution (SR) algorithms, an image with a low resolution can be converted into a high-quality image. Our objective was to compare deep learning-based SR models with a conventional approach for improving the resolution of dental panoramic radiographs. A total of 888 dental panoramic radiographs were obtained. Our study involved five state-of-the-art deep learning-based SR approaches: SR convolutional neural networks (SRCNN), SR generative adversarial network (SRGAN), U-Net, Swin transformer for image restoration (SwinIR), and local texture estimator (LTE). Their results were compared with one another and with conventional bicubic interpolation. The performance of each model was evaluated using the metrics of mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean opinion score of four experts (MOS). Among all the models evaluated, the LTE model presented the highest performance, with MSE, PSNR, SSIM, and MOS results of 7.42 ± 0.44, 39.74 ± 0.17, 0.919 ± 0.003, and 3.59 ± 0.54, respectively. Additionally, compared with the low-resolution images, the output of all the approaches showed significant improvement in MOS evaluation. A significant enhancement in the quality of panoramic radiographs can be achieved by SR, and the LTE model outperformed the other models.
Affiliation(s)
- Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, 10117 Berlin, Germany
- Shankeeth Vinayahalingam
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, 6525 GA Nijmegen, The Netherlands
- Erfan Mahmoudinia
- Department of Computer Engineering, Sharif University of Technology, Tehran 11155, Iran
- Parisa Soltani
- Department of Oral and Maxillofacial Radiology, Dental Implants Research Center, Dental Research Institute, School of Dentistry, Isfahan University of Medical Sciences, Isfahan 81746, Iran
- Stefaan J. Bergé
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, 6525 GA Nijmegen, The Netherlands
- Joachim Krois
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, 10117 Berlin, Germany
- Falk Schwendicke
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, 10117 Berlin, Germany
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, 10117 Berlin, Germany
22
Azour L, Hu Y, Ko JP, Chen B, Knoll F, Alpert JB, Brusca-Augello G, Mason DM, Wickstrom ML, Kwon YJF, Babb J, Liang Z, Moore WH. Deep Learning Denoising of Low-Dose Computed Tomography Chest Images: A Quantitative and Qualitative Image Analysis. J Comput Assist Tomogr 2023; 47:212-219. [PMID: 36790870 DOI: 10.1097/rct.0000000000001405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/16/2023]
Abstract
PURPOSE To assess deep learning denoised (DLD) computed tomography (CT) chest images at various low doses by both quantitative and qualitative perceptual image analysis. METHODS Simulated noise was inserted into sinogram data from 32 chest CTs acquired at 100 mAs, generating anatomically registered images at 40, 20, 10, and 5 mAs. A DLD model was developed, with 23 scans selected for training, 5 for validation, and 4 for testing. Quantitative analysis of perceptual image quality was assessed with the Structural SIMilarity Index (SSIM) and Fréchet Inception Distance (FID). Four thoracic radiologists graded overall diagnostic image quality, image artifact, visibility of small structures, and lesion conspicuity. Noise-simulated and denoised image series were evaluated in comparison with one another, and in comparison with the standard 100-mAs acquisition, at the 4 mAs levels. Statistical tests were conducted at the 2-sided 5% significance level, with multiple-comparison correction. RESULTS At the same mAs levels, SSIM and FID between noise-simulated and reconstructed DLD images indicated that images were closer to a perfect match with increasing mAs (closer to 1 for SSIM, and closer to 0 for FID). In comparing noise-simulated and DLD images to standard-dose 100-mAs images, DLD improved SSIM and FID. Deep learning denoising improved SSIM of the 40-, 20-, 10-, and 5-mAs simulations in comparison with standard-dose 100-mAs images, with SSIM rising from 0.91 to 0.94, 0.87 to 0.93, 0.67 to 0.87, and 0.54 to 0.84, respectively. Deep learning denoising improved FID of the 40-, 20-, 10-, and 5-mAs simulations in comparison with standard-dose 100-mAs images, with FID falling from 20 to 13, 46 to 21, 104 to 41, and 148 to 69, respectively. Qualitative image analysis showed no significant difference in lesion conspicuity between DLD images at any mAs and 100-mAs images.
Deep learning denoising images at 10 and 5 mAs were rated lower for overall diagnostic image quality (P < 0.001), and at 5 mAs lower for overall image artifact and visibility of small structures (P = 0.002), in comparison with 100 mAs. CONCLUSIONS Deep learning denoising resulted in quantitative improvements in image quality. Qualitative assessment demonstrated DLD images at or below 10 mAs to be rated inferior to standard-dose images.
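For readers unfamiliar with the quantitative metrics used in this entry, the SSIM side of the comparison can be sketched as follows. This is a hedged illustration, not the study's code: it uses a single global window rather than the sliding-window SSIM used in practice, with the standard constants for the stated data range, and the images are synthetic stand-ins.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified single-window SSIM (no sliding window), standard k1/k2 constants."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, size=(64, 64))            # stand-in for a 100-mAs image
noisy = clean + rng.normal(0, 25, size=clean.shape)   # stand-in for a low-mAs simulation
denoised = clean + rng.normal(0, 5, size=clean.shape) # stand-in for the DLD output

print(global_ssim(clean, clean))                                  # ≈ 1.0 for identical images
print(global_ssim(clean, noisy) < global_ssim(clean, denoised))   # denoising raises SSIM
```

FID, by contrast, compares feature-space distributions from a pretrained network and is not reproducible in a few lines, which is why only SSIM is sketched here.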
Affiliation(s)
- Lea Azour, Yunan Hu, Jane P Ko, Baiyu Chen, Florian Knoll, Jeffrey B Alpert, Derek M Mason, Maj L Wickstrom, James Babb, William H Moore: Department of Radiology, NYU Grossman School of Medicine, NYU Langone Health
- Zhengrong Liang: Departments of Radiology, Biomedical Engineering, Computer Science, and Electrical Engineering, Stony Brook University, Stony Brook, NY
23
Mannam V, Howard S. Small training dataset convolutional neural networks for application-specific super-resolution microscopy. J Biomed Opt 2023; 28:036501. [PMID: 36925620] [PMCID: PMC10013193] [DOI: 10.1117/1.jbo.28.3.036501]
Abstract
Significance Machine learning (ML) models based on deep convolutional neural networks have been used to significantly increase microscopy resolution, speed [signal-to-noise ratio (SNR)], and data interpretation. The bottleneck in developing effective ML systems is often the need to acquire large datasets to train the neural network. We demonstrate how adding a "dense encoder-decoder" (DenseED) block can be used to effectively train a neural network that produces super-resolution (SR) images from conventional diffraction-limited (DL) microscopy images using a small training dataset [15 fields of view (FOVs)]. Aim ML can retrieve SR information from a DL image when trained with a massive training dataset. The aim of this work is to demonstrate a neural network that estimates SR images from DL images using modifications that enable training with a small dataset. Approach We employ "DenseED" blocks in existing SR ML network architectures. DenseED blocks use a dense layer that concatenates features from the previous convolutional layer to the next convolutional layer. DenseED blocks in fully convolutional networks (FCNs) estimate SR images when trained with a small training dataset (15 FOVs) of human cells from the Widefield2SIM dataset and in fluorescently labeled fixed bovine pulmonary artery endothelial cell samples. Results Conventional ML models without DenseED blocks trained on small datasets fail to accurately estimate SR images, while models including the DenseED blocks can. The average peak SNR (PSNR) and resolution improvements achieved by networks containing DenseED blocks are ≈3.2 dB and 2×, respectively. We evaluated various configurations of target-image generation methods (e.g., experimentally captured targets and computationally generated targets) used to train FCNs with and without DenseED blocks, and showed that simple FCNs with DenseED blocks outperform simple FCNs without them.
Conclusions DenseED blocks in neural networks enable accurate extraction of SR images even when the ML model is trained with a small training dataset of 15 FOVs. This approach shows that microscopy applications can use DenseED blocks to train on smaller, application-specific datasets, and there is promise for applying this to other imaging modalities, such as MRI and x-ray.
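The dense connectivity a DenseED block relies on, feeding each layer the concatenation of all earlier feature maps, can be illustrated with a toy numpy sketch. The 1×1 "convolutions", layer sizes, and ReLU choice below are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

def conv1x1(x, w):
    """1x1 'convolution' with ReLU: a per-pixel linear map over channels.
    x: (H, W, C_in), w: (C_in, C_out)."""
    return np.maximum(x @ w, 0.0)

def dense_block(x, weights):
    """Dense connectivity: each layer sees the channel-wise concatenation
    of the input and all earlier layer outputs, as in a DenseED block."""
    features = [x]
    for w in weights:
        inp = np.concatenate(features, axis=-1)
        features.append(conv1x1(inp, w))
    return np.concatenate(features, axis=-1)

rng = np.random.default_rng(1)
c_in, growth, n_layers = 4, 8, 3
x = rng.standard_normal((16, 16, c_in))
# each layer's weight matrix must accept the growing channel count
weights = [rng.standard_normal((c_in + i * growth, growth)) * 0.1 for i in range(n_layers)]
out = dense_block(x, weights)
print(out.shape)  # (16, 16, 28): channels grow to c_in + n_layers * growth
```

The concatenation is what lets gradients and low-level features reach every layer directly, which is the property the authors credit for training stability on tiny datasets.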
Affiliation(s)
- Varun Mannam, Scott Howard: Department of Electrical Engineering, University of Notre Dame, Notre Dame, Indiana, United States
24
Zhong X, Liang N, Cai A, Yu X, Li L, Yan B. Super-resolution image reconstruction from sparsity regularization and deep residual-learned priors. J Xray Sci Technol 2023; 31:319-336. [PMID: 36683486] [DOI: 10.3233/xst-221299]
Abstract
BACKGROUND Computed tomography (CT) plays an important role in the field of non-destructive testing. However, conventional CT images often have blurred edges and unclear texture, which is not conducive to follow-up medical diagnosis and industrial testing work. OBJECTIVE This study aims to generate high-resolution CT images using a new CT super-resolution reconstruction method that combines sparsity regularization with a deep-learning prior. METHODS The new method reconstructs CT images through a reconstruction model incorporating image-gradient L0-norm minimization and deep image priors within a plug-and-play super-resolution framework. The deep priors are learned from a deep residual network and then plugged into the proposed framework, and the alternating direction method of multipliers (ADMM) is used to optimize the iterative solution of the model. RESULTS The simulation data analysis shows that the new method improves the peak signal-to-noise ratio (PSNR) by 7%, and the modulation transfer function (MTF) curves show that MTF50 increases by 0.02 compared with the result of deep plug-and-play super-resolution. Additionally, the real CT image data analysis shows that the new method improves PSNR by 5.1% and MTF50 by 0.11. CONCLUSION Both simulation and real-data experiments prove that the proposed CT super-resolution method using deep learning priors can reconstruct CT images with lower noise and better detail recovery. The method is flexible and effective for low-resolution CT image super-resolution.
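The plug-and-play splitting this entry describes can be sketched in miniature. The following is an assumption-laden toy, not the paper's method: the forward model is plain denoising rather than super-resolution, and a 3×3 box blur stands in for the learned deep residual prior, but the ADMM structure (closed-form data-fidelity update, denoiser-as-prior update, dual update) is the same:

```python
import numpy as np

def box_blur(img):
    """3x3 mean filter: a crude stand-in for the learned deep denoising prior."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for di in range(3):
        for dj in range(3):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / 9.0

def pnp_admm_denoise(y, rho=1.0, n_iter=20):
    """Plug-and-play ADMM for min_x 0.5||x - y||^2 + prior(x),
    with the prior applied implicitly through a denoiser."""
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(n_iter):
        x = (y + rho * (z - u)) / (1.0 + rho)  # data-fidelity update (closed form)
        z = box_blur(x + u)                    # prior update: plug in the denoiser
        u = u + x - z                          # dual update
    return x

rng = np.random.default_rng(2)
truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1.0                         # toy piecewise-constant image
y = truth + rng.normal(0, 0.3, truth.shape)
x_hat = pnp_admm_denoise(y)
print(np.abs(x_hat - truth).mean() < np.abs(y - truth).mean())  # error reduced
```

In the paper, the x-update additionally enforces the downsampling forward model and the image-gradient L0 term; only the z-update would call the trained residual network.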
Affiliation(s)
- Xinyi Zhong, Ningning Liang, Ailong Cai, Xiaohuan Yu, Lei Li, Bin Yan: Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, Zhengzhou, Henan, China
25
Pack JD, Xu M, Wang G, Baskaran L, Min J, De Man B. Cardiac CT blooming artifacts: clinical significance, root causes and potential solutions. Vis Comput Ind Biomed Art 2022; 5:29. [PMID: 36484886] [PMCID: PMC9733770] [DOI: 10.1186/s42492-022-00125-0]
Abstract
This review paper aims to summarize cardiac CT blooming artifacts, how they present clinically and what their root causes and potential solutions are. A literature survey was performed covering any publications with a specific interest in calcium blooming and stent blooming in cardiac CT. The claims from literature are compared and interpreted, aiming at narrowing down the root causes and most promising solutions for blooming artifacts. More than 30 journal publications were identified with specific relevance to blooming artifacts. The main reported causes of blooming artifacts are the partial volume effect, motion artifacts and beam hardening. The proposed solutions are classified as high-resolution CT hardware, high-resolution CT reconstruction, subtraction techniques and post-processing techniques, with a special emphasis on deep learning (DL) techniques. The partial volume effect is the leading cause of blooming artifacts. The partial volume effect can be minimized by increasing the CT spatial resolution through higher-resolution CT hardware or advanced high-resolution CT reconstruction. In addition, DL techniques have shown great promise to correct for blooming artifacts. A combination of these techniques could avoid repeat scans for subtraction techniques.
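The partial volume effect identified here as the leading cause of blooming can be demonstrated with a toy simulation (all sizes and thresholds below are illustrative assumptions): a small dense object averaged into coarse voxels leaves many voxels only partially filled, and any display threshold that renders those voxels as calcified makes the object appear larger than it is.

```python
import numpy as np

def downsample_mean(img, f):
    """Average f x f blocks: a crude model of acquiring at f-times-lower resolution."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

# High-resolution grid with a small, dense calcification (value 1 on a 0 background).
n, f = 128, 8
yy, xx = np.mgrid[0:n, 0:n]
calcium = ((yy - n / 2) ** 2 + (xx - n / 2) ** 2 <= 6 ** 2).astype(float)

low_res = downsample_mean(calcium, f)
# Display thresholding: voxels only partially filled with calcium still render "calcified".
apparent = (low_res > 0.05).sum() * f * f   # apparent area in high-res pixel units
true_area = calcium.sum()
print(apparent > true_area)  # True: partial volume makes the calcification "bloom"
```

This is also why the review's two leading remedies, higher acquisition resolution and resolution-recovering reconstruction, attack the same root cause: they shrink the fraction of voxels that straddle the object boundary.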
Affiliation(s)
- Jed D. Pack, Bruno De Man: GE Research, Niskayuna, NY 12309, USA
- Mufeng Xu, Ge Wang: Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Lohendran Baskaran: Weill Cornell Medicine, New York, NY 10065, USA; National Heart Centre, Singapore 169609, Singapore
- James Min: Weill Cornell Medicine, New York, NY 10065, USA; Cleerly, New York, NY 10065, USA
26
Lin X, Zhou X, Tong T, Nie X, Wang L, Zheng H, Li J, Xue E, Chen S, Zheng M, Chen C, Jiang H, Du M, Gao Q. A Super-resolution Guided Network for Improving Automated Thyroid Nodule Segmentation. Comput Methods Programs Biomed 2022; 227:107186. [PMID: 36334526] [DOI: 10.1016/j.cmpb.2022.107186]
Abstract
BACKGROUND AND OBJECTIVE A thyroid nodule is an abnormal lump that grows in the thyroid gland and is an early symptom of thyroid cancer. In order to diagnose and treat thyroid cancer at the earliest stage, it is desirable to characterize the nodule accurately. Ultrasound thyroid nodule segmentation is a challenging task due to speckle noise, intensity heterogeneity, low contrast, and low resolution. In this paper, we propose a novel framework to improve the accuracy of thyroid nodule segmentation. METHODS Different from previous work, a super-resolution reconstruction network is first constructed to upscale the resolution of the input ultrasound image. After that, our proposed N-shape network is utilized to perform the segmentation task. The guidance of the super-resolution reconstruction network makes the high-frequency information of the input thyroid ultrasound image richer and more comprehensive than in the original image. Our N-shape network consists of several atrous spatial pyramid pooling blocks, a multi-scale input layer, a U-shape convolutional network with attention blocks, and a proposed parallel atrous convolution (PAC) module. These modules help capture context information at multiple scales so that semantic features can be fully utilized for lesion segmentation. In particular, our proposed PAC module further improves the segmentation by extracting high-level semantic features from different receptive fields. We use the UTNI-2021 dataset for model training, validation, and testing. RESULTS The experimental results show that our proposed method achieves a Dice value of 91.9%, an mIoU value of 87.0%, a Precision value of 88.0%, a Recall value of 83.7%, and an F1-score of 84.3%, which outperforms most state-of-the-art methods. CONCLUSIONS Our method achieves the best performance on the UTNI-2021 dataset and provides a new way of performing ultrasound image segmentation.
We believe that our method can provide doctors with reliable auxiliary diagnostic information in clinical practice.
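The Dice and mIoU figures quoted above are simple overlap ratios between predicted and ground-truth masks; a minimal numpy sketch (the masks are toys, not the UTNI-2021 data):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over union (Jaccard index): |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True                 # 16-pixel "nodule"
pred = np.zeros_like(gt)
pred[3:7, 3:7] = True               # shifted prediction, 9 pixels of overlap

print(dice(pred, gt))  # 2*9 / (16+16) = 0.5625
print(iou(pred, gt))   # 9 / 23 ≈ 0.3913
```

Dice is always at least as large as IoU for the same pair of masks, which is worth remembering when comparing papers that report only one of the two.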
Affiliation(s)
- Xingtao Lin, Xiaogen Zhou, Xingqing Nie, Luoyan Wang, Haonan Zheng, Jing Li, Min Du: College of Physics and Information Engineering, Fuzhou University; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University
- Tong Tong, Qinquan Gao: College of Physics and Information Engineering, Fuzhou University; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University; Imperial Vision Technology
- Shun Chen, Cong Chen: Fujian Medical University Union Hospital
- Haiyan Jiang: Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University
27
Kim J, Kim JJ. Topology Optimization-Based Localized Bone Microstructure Reconstruction for Image Resolution Enhancement: Accuracy and Efficiency. Bioengineering (Basel) 2022; 9:644. [PMID: 36354554] [PMCID: PMC9687309] [DOI: 10.3390/bioengineering9110644]
Abstract
Topology optimization is currently the only way to provide bone microstructure information by enhancing a 600 μm low-resolution image into a 50 μm high-resolution image. In particular, the recently proposed localized reconstruction method for the region of interest has received much attention because it has the potential to overcome inefficiencies of conventional reconstruction, such as iteratively solving large-scale problems. Despite this great potential, the localized method must be thoroughly validated before clinical application. This study aims to quantitatively validate the topology-optimization-based localized bone microstructure reconstruction method in terms of accuracy and efficiency by comparison with the conventional method. For this purpose, this study reconstructed bone microstructure for three regions of interest in the proximal femur by the localized and conventional methods, respectively. The comparison showed that total processing time was reduced by at least 88.2% (20.1 h) and computational resources by more than 95.9% (54.0 gigabytes). Moreover, very high reconstruction accuracy was found in trabecular alignment (up to 99.6%) and morphometric indices (within 2.71%). These results indicate that the localized method can reconstruct bone microstructure much more efficiently while preserving the fidelity of the conventional method.
Affiliation(s)
- Jung Jin Kim: Department of Mechanical Engineering, Keimyung University, Daegu 42601, Korea
28
Wang H, Sun T. [Medical image super-resolution reconstruction via multi-scale information distillation network under multi-scale geometric transform domain]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi 2022; 39:887-896. [PMID: 36310477] [PMCID: PMC9927725] [DOI: 10.7507/1001-5515.202109057]
Abstract
High resolution (HR) magnetic resonance images (MRI) or computed tomography (CT) images can provide clearer anatomical details of the human body, which facilitates early diagnosis of disease. However, due to the imaging system, imaging environment, and human factors, it is difficult to obtain clear high-resolution images. In this paper, we propose a novel medical image super resolution (SR) reconstruction method via a multi-scale information distillation (MSID) network in the non-subsampled shearlet transform (NSST) domain, namely the NSST-MSID network. We first propose an MSID network that mainly consists of a series of stacked MSID blocks to fully exploit features from images and effectively restore low resolution (LR) images to HR images. In addition, most previous methods predict the HR images in the spatial domain, producing over-smoothed outputs and losing texture details. We therefore cast the medical image SR task as the prediction of NSST coefficients, which enables the MSID network to keep richer structural details than prediction in the spatial domain. Finally, the experimental results on our constructed medical image datasets demonstrate that the proposed method obtains better peak signal to noise ratio (PSNR), structural similarity (SSIM), and root mean square error (RMSE) values and keeps global topological structure and local texture detail better than other leading methods, achieving a good medical image reconstruction effect.
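Of the three metrics this entry reports, PSNR and RMSE are directly computable from an image pair; a small sketch, assuming an 8-bit data range:

```python
import numpy as np

def rmse(x, y):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)))

def psnr(x, y, data_range=255.0):
    """Peak signal-to-noise ratio in dB for images on a [0, data_range] scale."""
    e = rmse(x, y)
    return float("inf") if e == 0 else 20.0 * np.log10(data_range / e)

ref = np.full((32, 32), 100.0)
degraded = ref + 16.0                 # constant error of 16 grey levels
print(rmse(ref, degraded))            # 16.0
print(round(psnr(ref, degraded), 2))  # 20*log10(255/16) ≈ 24.05
```

Note that PSNR depends on the assumed data range, so comparisons across papers are only meaningful when the range convention matches.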
Affiliation(s)
- Huadong Wang: School of Computer Science and Technology, Zhoukou Normal University, Zhoukou, Henan 466001, P. R. China
- Ting Sun: School of Computer Science and Technology, Zhoukou Normal University, Zhoukou, Henan 466001, P. R. China; Institute of Visualization Technology, Northwest University, Xi’an 710049, P. R. China
29
Li D, Ma L, Li J, Qi S, Yao Y, Teng Y. A comprehensive survey on deep learning techniques in CT image quality improvement. Med Biol Eng Comput 2022; 60:2757-2770. [DOI: 10.1007/s11517-022-02631-y]
30
Cao Q, Mao Y, Qin L, Quan G, Yan F, Yang W. Improving image quality and lung nodule detection for low-dose chest CT by using generative adversarial network reconstruction. Br J Radiol 2022; 95:20210125. [PMID: 35994298] [PMCID: PMC9815729] [DOI: 10.1259/bjr.20210125]
Abstract
OBJECTIVES To investigate the improvement from two denoising models with different learning targets (Dir and Res) of a generative adversarial network (GAN) on image quality and lung nodule detectability in chest low-dose CT (LDCT). METHODS In the training phase, using LDCT images simulated from standard-dose CT (SDCT) of 200 participants, the Dir model was trained targeting the SDCT images, while the Res model targeted the residual between the SDCT and LDCT images. In the testing phase, a phantom and 95 chest LDCT scans, exclusive of the training data, were included for evaluation of image quality and pulmonary nodule detectability. RESULTS For phantom images, the structural similarity and peak signal-to-noise ratio of both the Res and Dir models were higher than those of LDCT. The standard deviation of the Res model was the lowest. For patient images, the image noise and quality of both models were better than those of LDCT. The Res model produced fewer artifacts than LDCT. The diagnostic sensitivities for lung nodules by two readers were 72%/77% for LDCT, 79%/83% for the Res model, and 72%/79% for the Dir model. CONCLUSION Two GAN denoising models, Res and Dir, trained with different targets, could effectively reduce the image noise of chest LDCT. The image-quality scores and nodule detectability of the Res denoising model were better than those of the Dir denoising model and of hybrid IR images. ADVANCES IN KNOWLEDGE The GAN-trained model that learned the residual between SDCT and LDCT images reduced image noise and increased lung nodule detectability by radiologists on chest LDCT, demonstrating potential for clinical benefit.
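The difference between the Dir and Res learning targets is simply what the network regresses; a minimal numpy illustration, using a perfect-prediction stand-in for the trained network (an assumption made purely to show how the residual target is used at inference):

```python
import numpy as np

rng = np.random.default_rng(3)
sdct = rng.uniform(0, 1, (64, 64))             # stand-in standard-dose image
ldct = sdct + rng.normal(0, 0.1, sdct.shape)   # simulated low-dose counterpart

# Dir model: the network regresses the SDCT image directly.
target_dir = sdct
# Res model: the network regresses only the noise residual.
target_res = sdct - ldct

# At inference, the Res model's output is added back to its input.
predicted_residual = target_res                # perfect-prediction stand-in
restored = ldct + predicted_residual
print(np.allclose(restored, sdct))  # True: residual learning recovers the SDCT target
```

Regressing the residual is often easier because the residual is near zero-mean and the network does not have to reproduce the anatomy itself, which is consistent with the Res model's better scores here.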
Affiliation(s)
- Qiqi Cao, Le Qin, Fuhua Yan, Wenjie Yang: Department of Radiology, Ruijin Hospital affiliated to School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yifu Mao, Guotao Quan: Department of CT reconstruction physics algorithm, Shanghai United Imaging Healthcare Co., Ltd, Shanghai, China
31
Hao H, Xu C, Zhang D, Yan Q, Zhang J, Liu Y, Zhao Y. Sparse-based Domain Adaptation Network for OCTA Image Super-Resolution Reconstruction. IEEE J Biomed Health Inform 2022; 26:4402-4413. [PMID: 35895639] [DOI: 10.1109/jbhi.2022.3194025]
Abstract
Retinal Optical Coherence Tomography Angiography (OCTA) with high resolution is important for the quantification and analysis of the retinal vasculature. However, the resolution of OCTA images is inversely proportional to the field of view at the same sampling frequency, which is not conducive to clinicians analyzing larger vascular areas. In this paper, we propose a novel Sparse-based domain Adaptation Super-Resolution network (SASR) for the reconstruction of realistic [Formula: see text]/low-resolution (LR) OCTA images into high-resolution (HR) representations. To be more specific, we first perform a simple degradation of the [Formula: see text]/high-resolution (HR) image to obtain a synthetic LR image. An efficient registration method is then employed to register the synthetic LR image with its corresponding [Formula: see text] image region within the [Formula: see text] image to obtain the cropped realistic LR image. We then propose a multi-level super-resolution model for the fully supervised reconstruction of the synthetic data, guiding the reconstruction of the realistic LR images through a generative-adversarial strategy that allows the synthetic and realistic LR images to be unified in the feature domain. Finally, a novel sparse edge-aware loss is designed to dynamically optimize the vessel edge structure. Extensive experiments on two OCTA datasets have shown that our method performs better than state-of-the-art super-resolution reconstruction methods. In addition, we have investigated the performance of the reconstruction results on retinal structure segmentation, which further validates the effectiveness of our approach.
32
Classification and Reconstruction of Biomedical Signals Based on Convolutional Neural Network. Comput Intell Neurosci 2022; 2022:6548811. [PMID: 35909845] [PMCID: PMC9334110] [DOI: 10.1155/2022/6548811]
Abstract
Efficient biological signal processing methods can improve the efficiency with which researchers explore the mechanisms of life, better revealing the relationship between physiological structure and function and thus promoting major biological discoveries; high-precision medical signal analysis strategies can, to a certain extent, share the burden of doctors' clinical diagnosis and assist them in formulating more favorable plans for disease prevention and treatment, alleviating patients' physical and mental suffering and improving the overall health of society. This article takes two highly representative types of biomedical signals, mammary gland molybdenum target X-ray images (mammography) and EEG signals, as its research objects and, using convolutional neural networks (CNNs), the most representative models in deep learning, conducts a series of studies on classification and reconstruction methods for these two types of signals: (1) A new classification method for breast masses based on a multi-layer CNN is proposed. The method includes a CNN feature-representation network for breast masses and a feature decision mechanism that simulates the physician's diagnostic process. Compared with the objective classification accuracy of other methods for identifying benign and malignant breast masses, the method achieved the highest classification accuracy of 97.0% under different values of c and gamma, further verifying the effectiveness of the proposed method for identifying breast masses in molybdenum target X-ray images. (2) An EEG signal classification method based on a spatiotemporal-fusion CNN is proposed. This method includes a multi-channel-input classification network focusing on the spatial information of EEG signals, a single-channel-input classification network focusing on the temporal information of EEG signals, and a spatial-temporal fusion strategy.
Through comparative experiments on EEG signal classification tasks, the effectiveness of the proposed method was verified in terms of objective classification accuracy, number of model parameters, and subjective evaluation of the validity of the CNN feature representation. The methods proposed in this paper thus not only achieve high accuracy but can also be applied well to the classification and reconstruction of biomedical signals.
33
Honjo T, Ueda D, Katayama Y, Shimazaki A, Jogo A, Kageyama K, Murai K, Tatekawa H, Fukumoto S, Yamamoto A, Miki Y. Visual and quantitative evaluation of microcalcifications in mammograms with deep learning-based super-resolution. Eur J Radiol 2022; 154:110433. [PMID: 35834858] [DOI: 10.1016/j.ejrad.2022.110433]
Abstract
PURPOSE To evaluate visually and quantitatively the performance of a deep-learning-based super-resolution (SR) model for microcalcifications in digital mammography. METHOD Mammograms were consecutively collected from 5080 patients who underwent breast cancer screening from January 2015 to March 2017. Of these, 93 patients (136 breasts; mean age, 50 ± 7 years) had microcalcifications in their breasts on mammograms. We applied an artificial intelligence model known as a fast SR convolutional neural network to the mammograms. SR and original mammograms were visually evaluated by four breast radiologists using a 5-point scale (1: original mammograms strongly preferred; 5: SR mammograms strongly preferred) for the detection, diagnostic quality, contrast, sharpness, and noise of microcalcifications. Mammograms were quantitatively evaluated using a perception-based image-quality evaluator (PIQE). RESULTS All radiologists rated the SR mammograms better than the original ones in terms of detection, diagnostic quality, contrast, and sharpness of microcalcifications. These ratings were significantly different according to the Wilcoxon signed-rank test (p < .001), while the noise scores of three of the radiologists were significantly lower (p < .001). According to PIQE, SR mammograms were rated better than the original mammograms, showing a significant difference by paired t-test (p < .001). CONCLUSION An SR model based on deep learning can improve the visibility of microcalcifications in mammography and help detect and diagnose them in mammograms.
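The Wilcoxon signed-rank test used on these paired 5-point ratings reduces to ranking the non-zero score differences; a minimal numpy sketch of the statistic (the rating vectors are invented for illustration, and a real analysis would also compute a p-value, e.g. via the normal approximation or an exact distribution):

```python
import numpy as np

def wilcoxon_w(scores_a, scores_b):
    """Wilcoxon signed-rank statistics for paired ratings.
    Zero differences are dropped; tied |differences| get average ranks.
    Returns (W+, W-)."""
    d = np.asarray(scores_a, float) - np.asarray(scores_b, float)
    d = d[d != 0]
    order = np.argsort(np.abs(d))
    ranks = np.empty(len(d))
    ranks[order] = np.arange(1, len(d) + 1)
    for v in np.unique(np.abs(d)):          # average ranks over ties in |d|
        mask = np.abs(d) == v
        ranks[mask] = ranks[mask].mean()
    return ranks[d > 0].sum(), ranks[d < 0].sum()

sr = [4, 4, 5, 3, 4, 5, 4, 4]    # hypothetical preference scores favouring SR
orig = [3, 3, 3, 3, 2, 3, 3, 4]
wp, wm = wilcoxon_w(sr, orig)
print(wp, wm)                          # 21.0 0.0: every non-zero difference favours SR
print(wp + wm == 6 * (6 + 1) / 2)      # rank sums over the 6 non-zero pairs -> True
```

In practice one would call a vetted implementation (e.g. a statistics library's signed-rank routine) rather than hand-rolling the test; the sketch is only meant to show what the statistic measures.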
Collapse
Affiliation(s)
- Takashi Honjo: Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan; Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Daiju Ueda: Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan; Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
- Yutaka Katayama: Department of Radiology, Osaka Metropolitan University Hospital, Osaka, Japan
- Akitoshi Shimazaki: Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Atsushi Jogo: Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Ken Kageyama: Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Kazuki Murai: Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Hiroyuki Tatekawa: Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Shinya Fukumoto: Department of Premier Preventive Medicine, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Akira Yamamoto: Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Yukio Miki: Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
|
34
|
Med-SRNet: GAN-Based Medical Image Super-Resolution via High-Resolution Representation Learning. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:1744969. [PMID: 35747717 PMCID: PMC9210125 DOI: 10.1155/2022/1744969] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Revised: 04/16/2022] [Accepted: 05/10/2022] [Indexed: 11/18/2022]
Abstract
High-resolution (HR) medical imaging data provide more anatomical detail of the human body, which facilitates early-stage disease diagnosis. However, obtaining clear HR medical images is challenging because of limiting factors such as imaging systems, imaging environments, and human factors. This work presents a novel medical image super-resolution (SR) method via high-resolution representation learning based on a generative adversarial network (GAN), namely Med-SRNet. We use a GAN as the SR backbone because GANs can markedly improve the visual quality of reconstructed images and produce more realistic high-frequency detail in image SR tasks. Furthermore, we employ the HR network (HRNet) in the GAN generator to maintain HR representations and repeatedly use multi-scale fusions to strengthen them for SR. Moreover, we adopt deconvolution operations, instead of the simple bilinear interpolation used in HRNetV2, to recover high-quality HR representations from all the parallel lower-resolution (LR) streams and thereby yield richer aggregated features. When evaluated on an in-house medical image dataset and two public COVID-19 CT datasets, the proposed Med-SRNet outperforms other leading-edge methods, obtaining higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values, i.e., a maximum improvement of 1.75 and a minimum increase of 0.433 in PSNR on the “Brain” test sets under 8×, and a maximum improvement of 0.048 and a minimum increase of 0.016 in SSIM on the “Lung” test sets under 8×, compared with other methods.
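PSNR and SSIM, the two metrics this and several later entries report, are easy to compute directly. The sketch below uses a single global SSIM window rather than the sliding-window form of Wang et al., and the images are random stand-ins, not data from any cited paper.

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, test, data_range=255.0):
    """SSIM over one global window (no sliding window), with the
    standard stabilizing constants C1, C2."""
    x, y = ref.astype(float), test.astype(float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
hr = rng.integers(0, 256, (64, 64)).astype(float)
degraded = hr + rng.normal(0, 10, hr.shape)  # noisy stand-in for an SR output
print(psnr(hr, degraded), ssim_global(hr, degraded))
```

Production code would normally use a library implementation (e.g. scikit-image's windowed SSIM); this compact version just makes the reported numbers concrete.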
|
35
|
Chen S, Hao X, Pan B, Huang X. Super-Resolution Residual U-Net Model for the Reconstruction of Limited-Data Tunable Diode Laser Absorption Tomography. ACS OMEGA 2022; 7:18722-18731. [PMID: 35694508 PMCID: PMC9178763 DOI: 10.1021/acsomega.2c01435] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Accepted: 05/12/2022] [Indexed: 06/15/2023]
Abstract
Resolution is an important index for evaluating the reconstruction performance of temperature distributions in a combustion environment, and higher resolution is necessary for more precise combustion diagnoses. Tunable diode laser absorption tomography (TDLAT) has proven to be a powerful combustion diagnosis method for efficient detection. However, restricted by line-of-sight (LOS) measurement, the reconstruction resolution of TDLAT depends on the size of the detection data, and it is difficult to obtain sufficient data in extreme measurement environments. This severely limits the development of TDLAT in combustion diagnosis. To overcome this limitation, we propose a super-resolution reconstruction method based on the super-resolution residual U-Net (SRResUNet), which improves the reconstruction resolution purely in software by taking full advantage of residual networks and U-Net to extract deep features from the limited TDLAT data and reconstruct the temperature distribution efficiently. A simulation study was conducted to investigate how the parameters affect the performance of the super-resolution model and to optimize the reconstruction. The results show that our SRResUNet model can effectively improve the accuracy of reconstruction with super-resolution and has good anti-noise performance, with errors of the 2×, 4×, and 8× super-resolution reconstructions of approximately 5.3%, 7.4%, and 9.7%, respectively. The successful demonstration of SRResUNet in this work indicates the possible application of other deep learning methods, such as enhanced super-resolution generative adversarial networks (ESRGANs), to limited-data TDLAT.
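The percentage errors quoted (≈5.3%, 7.4%, 9.7%) correspond to a mean relative error between reconstructed and true temperature fields. A minimal sketch on synthetic data (the field, noise level, and values are made up, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
true_T = 1200 + 300 * rng.random((16, 16))  # toy flame temperature map, K

# Stand-in "reconstruction": the true field with ~5% multiplicative noise.
recon_T = true_T * (1 + rng.normal(0, 0.05, true_T.shape))

# Mean relative error in percent, the style of metric the abstract reports.
rel_err = 100 * np.mean(np.abs(recon_T - true_T) / true_T)
print(round(rel_err, 1))
```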
Affiliation(s)
- Shaogang Chen: Science and Technology on Electronic Test and Measurement Laboratory, North University of China, Taiyuan 030051, China; School of Instrument and Electronics, North University of China, Taiyuan 030051, China
- Xiaojian Hao: Science and Technology on Electronic Test and Measurement Laboratory, North University of China, Taiyuan 030051, China; School of Instrument and Electronics, North University of China, Taiyuan 030051, China
- Baowu Pan: School of Materials Science and Engineering, North University of China, Taiyuan 030051, China
- Xiaodong Huang: Science and Technology on Electronic Test and Measurement Laboratory, North University of China, Taiyuan 030051, China; School of Instrument and Electronics, North University of China, Taiyuan 030051, China
|
36
|
Sun K, Gao Y, Xie T, Wang X, Yang Q, Chen L, Wang K, Yu G. A low-cost pathological image digitalization method based on 5 times magnification scanning. Quant Imaging Med Surg 2022; 12:2813-2829. [PMID: 35502389 PMCID: PMC9014144 DOI: 10.21037/qims-21-749] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2021] [Accepted: 01/06/2022] [Indexed: 10/10/2023]
Abstract
BACKGROUND Digital pathology has aroused widespread interest in modern pathology. The key to digitalization is to scan the whole slide image (WSI) at high magnification. The file size of each WSI at 40 times magnification (40×) may range from 1 gigabyte (GB) to 5 GB depending on the size of the specimen, which entails huge storage capacity and very slow scanning and network exchange, seriously increasing the time and storage costs of digital pathology. METHODS We design a strategy to scan slides at low resolution (LR) (5×), and a super-resolution (SR) method is proposed to restore the image details during diagnosis. The method is based on a multiscale generative adversarial network, which sequentially generates three high-resolution (HR) images: 10×, 20×, and 40×. A dataset consisting of 100,000 pathological images from 10 types of human body systems is used for training and testing. The differences between the generated images and the real images have been extensively evaluated using quantitative evaluation, visual inspection, medical scoring, and diagnosis. RESULTS The file size of each 5× WSI is approximately 15 megabytes. The peak signal-to-noise ratios (PSNRs) of the 10× to 40× generated images are 24.167 ± 3.734 dB, 22.272 ± 4.272 dB, and 20.436 ± 3.845 dB, and the structural similarity (SSIM) index values are 0.845 ± 0.089, 0.680 ± 0.150, and 0.559 ± 0.179, which are better than those of other SR networks and conventional digital zoom methods. Visual inspection shows that the generated images have details similar to the real images. Average visual scores with 0.95 confidence intervals from three pathologists are 3.630 ± 1.024, 3.700 ± 1.126, and 3.740 ± 1.095, respectively, and the P value of the analysis of variance is 0.367, indicating that the pathologists confirm the generated images include sufficient information for diagnosis. The average value of the Kappa test on the diagnoses of paired generated and real images is 0.990, meaning the diagnosis from generated images is highly consistent with that from the real images. CONCLUSIONS The proposed method can generate high-quality 10×, 20×, and 40× images from 5× images, which can effectively reduce the time and storage costs of digitalization to as little as 1/64 of the previous costs. It shows potential for clinical applications and is expected to become an alternative digitalization method after large-scale evaluation.
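The Kappa value of 0.990 quoted above is Cohen's kappa, a chance-corrected agreement measure for paired categorical ratings. A self-contained sketch on hypothetical diagnoses (the labels and data are illustrative, not the study's):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two paired label sequences
    (e.g. diagnoses made on generated vs. real images)."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[l] * cb[l] for l in set(a) | set(b)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical paired diagnoses on 10 slides.
real = ["benign", "malignant", "benign", "benign", "malignant",
        "benign", "malignant", "benign", "benign", "malignant"]
gen  = ["benign", "malignant", "benign", "benign", "malignant",
        "benign", "malignant", "benign", "malignant", "malignant"]
print(round(cohens_kappa(real, gen), 3))
```

One disagreement out of ten here gives kappa = 0.8; a value of 0.990 implies near-perfect agreement.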
Affiliation(s)
- Kai Sun: Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, China
- Yanhua Gao: Department of Ultrasound, Shaanxi Provincial People’s Hospital, Xi’an, China
- Ting Xie: Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, China
- Xun Wang: Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, China
- Qingqing Yang: Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, China
- Le Chen: Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, China
- Kuansong Wang: Department of Pathology, School of Basic Medical Science, Central South University, Changsha, China
- Gang Yu: Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, China
|
37
|
Astley JR, Wild JM, Tahir BA. Deep learning in structural and functional lung image analysis. Br J Radiol 2022; 95:20201107. [PMID: 33877878 PMCID: PMC9153705 DOI: 10.1259/bjr.20201107] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022] Open
Abstract
The recent resurgence of deep learning (DL) has dramatically influenced the medical imaging field. Medical image analysis applications have been at the forefront of DL research efforts applied to multiple diseases and organs, including those of the lungs. The aims of this review are twofold: (i) to briefly overview DL theory as it relates to lung image analysis; (ii) to systematically review the DL research literature relating to the lung image analysis applications of segmentation, reconstruction, registration and synthesis. The review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. 479 studies were initially identified from the literature search with 82 studies meeting the eligibility criteria. Segmentation was the most common lung image analysis DL application (65.9% of papers reviewed). DL has shown impressive results when applied to segmentation of the whole lung and other pulmonary structures. DL has also shown great potential for applications in image registration, reconstruction and synthesis. However, the majority of published studies have been limited to structural lung imaging with only 12.9% of reviewed studies employing functional lung imaging modalities, thus highlighting significant opportunities for further research in this field. Although the field of DL in lung image analysis is rapidly expanding, concerns over inconsistent validation and evaluation strategies, intersite generalisability, transparency of methodological detail and interpretability need to be addressed before widespread adoption in clinical lung imaging workflow.
Affiliation(s)
- Jim M Wild: Department of Oncology and Metabolism, The University of Sheffield, Sheffield, United Kingdom
|
38
|
Cheng Z, Xie L, Feng C, Wen J. Super-resolution acquisition and reconstruction for cone-beam SPECT with low-resolution detector. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 217:106683. [PMID: 35150999 DOI: 10.1016/j.cmpb.2022.106683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Revised: 12/18/2021] [Accepted: 02/04/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Single-photon emission computed tomography (SPECT) imaging, which provides information reflecting the human body's metabolic processes, has unique application value in disease diagnosis and efficacy evaluation. The imaging resolution of SPECT can be improved by exploiting high-performance detector hardware, but this generates high research and development costs. In addition, the inherent hardware structure of SPECT requires the use of a collimator, which limits the resolution of SPECT. The objective of this study is to propose a novel super-resolution (SR) reconstruction algorithm with two acquisition methods for cone-beam SPECT with a low-resolution (LR) detector. METHODS An SR algorithm with two acquisition methods is proposed for cone-beam SPECT imaging in the projection domain. At each sampling angle, multiple LR projections can be obtained by regularly moving the LR detector. For the two proposed acquisition methods, we develop a new SR reconstruction algorithm: an SR projection at the corresponding sampling angle is obtained from the multiple LR projections via multiple iterations, and the SR SPECT image is then reconstructed. The peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), signal-to-noise ratio (SNR), and contrast recovery coefficient (CRC) are used to evaluate the final reconstruction quality. RESULTS The simulation results obtained under clean and noisy conditions verify the effectiveness of our SR algorithm. Three different phantoms are verified separately. Sixteen LR projections are obtained at each sampling angle, each with 32 × 32 bins; the high-resolution (HR) projection has 128 × 128 bins. The reconstruction result of the SR algorithm obtains evaluation values almost the same as those of the HR reconstruction result. Our results indicate that the resolution of the resulting SPECT image is almost four times higher. CONCLUSIONS We develop an SR reconstruction algorithm with two acquisition methods for the cone-beam SPECT system. The simulation results obtained in clean and noisy environments prove that the SR algorithm has potential value, but it needs to be further tested on real equipment.
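The core idea of recovering one HR projection from several detector-shifted LR projections can be illustrated with an idealized shift-and-add toy model. This is not the paper's iterative algorithm: it assumes exact sub-pixel shifts, an integer downsampling model, and no noise, under which the LR grids interleave back into the HR grid exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
hr = rng.random((8, 8))  # "true" HR projection (synthetic)

# Four LR projections from detector shifts (dy, dx) in {0, 1} HR pixels:
# each LR pixel samples every second HR pixel at the shifted offset.
lr = {(dy, dx): hr[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}

# Shift-and-add reconstruction: place each LR grid back at its offset.
sr = np.zeros_like(hr)
for (dy, dx), img in lr.items():
    sr[dy::2, dx::2] = img

print(np.allclose(sr, hr))  # exact in this idealized noise-free model
```

Real acquisitions blur and add noise, which is why the paper needs an iterative reconstruction rather than this direct interleaving.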
Affiliation(s)
- Zhibiao Cheng: Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing 100081, China
- Lulu Xie: Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing 100081, China
- Cuixia Feng: Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing 100081, China
- Junhai Wen: Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing 100081, China
|
39
|
SOUP-GAN: Super-Resolution MRI Using Generative Adversarial Networks. Tomography 2022; 8:905-919. [PMID: 35448707 PMCID: PMC9027099 DOI: 10.3390/tomography8020073] [Citation(s) in RCA: 27] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 03/19/2022] [Accepted: 03/21/2022] [Indexed: 11/16/2022] Open
Abstract
There is a growing demand for high-resolution (HR) medical images in both clinical and research applications. Image quality is inevitably traded off with acquisition time, which in turn impacts patient comfort, examination costs, dose, and motion-induced artifacts. For many image-based tasks, it is common to increase the apparent spatial resolution in the through-plane direction to produce multi-planar reformats or 3D images. Single-image super-resolution (SR) is a promising deep-learning-based technique for increasing the resolution of a 2D image, but there are few reports on 3D SR. Further, perceptual loss has been proposed in the literature to capture textural details and edges better than pixel-wise loss functions, by comparing semantic distances in the high-dimensional feature space of a pre-trained 2D network (e.g., VGG); however, it is not clear how it should be generalized to 3D medical images. In this paper, we propose a framework called SOUP-GAN: Super-resolution Optimized Using Perceptual-tuned Generative Adversarial Network (GAN), to produce thinner slices (e.g., higher resolution in the ‘Z’ plane) with anti-aliasing and deblurring. The proposed method outperforms conventional resolution-enhancement methods and previous SR work on medical images in both qualitative and quantitative comparisons. Moreover, we examine the model's generalization to arbitrary user-selected SR ratios and imaging modalities. Our model shows promise as a novel 3D SR interpolation technique for both clinical and research use.
|
40
|
Zuo C, Qian J, Feng S, Yin W, Li Y, Fan P, Han J, Qian K, Chen Q. Deep learning in optical metrology: a review. LIGHT, SCIENCE & APPLICATIONS 2022; 11:39. [PMID: 35197457 PMCID: PMC8866517 DOI: 10.1038/s41377-022-00714-x] [Citation(s) in RCA: 71] [Impact Index Per Article: 35.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/11/2021] [Revised: 01/03/2022] [Accepted: 01/11/2022] [Indexed: 05/20/2023]
Abstract
With advances in scientific foundations and technological implementations, optical metrology has become a versatile problem-solving backbone in manufacturing, fundamental research, and engineering applications, such as quality control, nondestructive testing, experimental mechanics, and biomedicine. In recent years, deep learning, a subfield of machine learning, has emerged as a powerful tool for addressing problems by learning from data, largely driven by the availability of massive datasets, enhanced computational power, fast data storage, and novel training algorithms for deep neural networks. It is currently attracting extensive attention for its use in the field of optical metrology. Unlike the traditional "physics-based" approach, deep-learning-enabled optical metrology is a "data-driven" approach, which has already provided numerous alternative solutions to many challenging problems in this field with better performance. In this review, we present an overview of the current status and latest progress of deep-learning technologies in the field of optical metrology. We first briefly introduce both traditional image-processing algorithms in optical metrology and the basic concepts of deep learning, followed by a comprehensive review of its applications in various optical metrology tasks, such as fringe denoising, phase retrieval, phase unwrapping, subset correlation, and error compensation. The open challenges faced by the current deep-learning approach in optical metrology are then discussed. Finally, directions for future research are outlined.
Grants
- 61722506, 61705105, 62075096 National Natural Science Foundation of China (National Science Foundation of China)
- National Key R&D Program of China (2017YFF0106403) Leading Technology of Jiangsu Basic Research Plan (BK20192003) National Defense Science and Technology Foundation of China (2019-JCJQ-JJ-381) "333 Engineering" Research Project of Jiangsu Province (BRA2016407) Fundamental Research Funds for the Central Universities (30920032101, 30919011222) Open Research Fund of Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense (3091801410411)
Affiliation(s)
- Chao Zuo: Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China; Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiaming Qian: Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China; Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Shijie Feng: Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China; Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Wei Yin: Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China; Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Yixuan Li: Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China; Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Pengfei Fan: Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China; Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China; School of Engineering and Materials Science, Queen Mary University of London, London, E1 4NS, UK
- Jing Han: Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Kemao Qian: School of Computer Science and Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Qian Chen: Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
|
41
|
Ueki W, Nishii T, Umehara K, Ota J, Higuchi S, Ohta Y, Nagai Y, Murakawa K, Ishida T, Fukuda T. Generative adversarial network-based post-processed image super-resolution technology for accelerating brain MRI: comparison with compressed sensing. Acta Radiol 2022; 64:336-345. [PMID: 35118883 DOI: 10.1177/02841851221076330] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
BACKGROUND It is unclear whether deep-learning-based super-resolution technology (SR) or compressed sensing technology (CS) can better accelerate magnetic resonance imaging (MRI). PURPOSE To compare SR-accelerated images with CS images regarding image similarity to reference 2D and 3D gradient-echo sequence (GRE) brain MRI. MATERIAL AND METHODS We prospectively acquired 1.3× and 2.0× faster 2D and 3D GRE images of 20 volunteers, relative to the reference acquisition time, by reducing the matrix size or increasing the CS factor. For SR, we trained a generative adversarial network (GAN) to upscale the low-resolution images to the reference images, with twofold cross-validation. We compared the structural similarity (SSIM) index of accelerated images to the reference image. The rate of incorrect answers of a radiologist discriminating accelerated from reference images was used as a subjective image similarity (ISM) index. RESULTS The SR demonstrated significantly higher SSIM than the CS (SSIM = 0.9993-0.999 vs. 0.9947-0.9986; P < 0.001). In 2D GRE, it was challenging to discriminate the SR image from the reference image, compared to the CS (ISM index 40% vs. 17.5% in 1.3×, P = 0.039; and 17.5% vs. 2.5% in 2.0×, P = 0.034). In 3D GRE, the CS revealed a significantly higher ISM index than the SR (22.5% vs. 2.5%; P = 0.011) in 2.0× faster images. However, the ISM index was identical for the 2.0× CS and 1.3× SR (22.5% vs. 27.5%; P = 0.62) with comparable time costs. CONCLUSION The GAN-based SR outperformed CS in image similarity with 2D GRE for MRI acceleration. In addition, CS was more advantageous than SR in 3D GRE.
Affiliation(s)
- Wataru Ueki: Department of Radiology, National Cerebral and Cardiovascular Center, Suita, Osaka, Japan
- Tatsuya Nishii: Department of Radiology, National Cerebral and Cardiovascular Center, Suita, Osaka, Japan
- Kensuke Umehara: Medical Informatics Section, QST Hospital, National Institutes for Quantum Science and Technology, Chiba, Japan; Applied MRI Research, Department of Molecular Imaging and Theranostics, Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, Chiba, Japan; Department of Medical Physics and Engineering, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Junko Ota: Medical Informatics Section, QST Hospital, National Institutes for Quantum Science and Technology, Chiba, Japan; Applied MRI Research, Department of Molecular Imaging and Theranostics, Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, Chiba, Japan; Department of Medical Physics and Engineering, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Satoshi Higuchi: Department of Radiology, National Cerebral and Cardiovascular Center, Suita, Osaka, Japan
- Yasutoshi Ohta: Department of Radiology, National Cerebral and Cardiovascular Center, Suita, Osaka, Japan
- Yasuhiro Nagai: Department of Radiology, National Cerebral and Cardiovascular Center, Suita, Osaka, Japan
- Keizo Murakawa: Department of Radiology, National Cerebral and Cardiovascular Center, Suita, Osaka, Japan
- Takayuki Ishida: Department of Medical Physics and Engineering, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Tetsuya Fukuda: Department of Radiology, National Cerebral and Cardiovascular Center, Suita, Osaka, Japan
|
42
|
Angular Super-Resolution in X-Ray Projection Radiography Using Deep Neural Network: Implementation on Rotational Angiography. Biomed J 2022; 46:154-162. [PMID: 35026475 PMCID: PMC10105049 DOI: 10.1016/j.bj.2022.01.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 11/30/2021] [Accepted: 01/05/2022] [Indexed: 11/21/2022] Open
Abstract
BACKGROUND Rotational angiography acquires radiographs at multiple projection angles to demonstrate superimposed vasculature. However, this comes at the cost of increased ionizing radiation. In this paper, building upon a successful deep learning model, we developed a novel technique to super-resolve the radiographs at intermediate projection angles, reducing the number of actual projections needed for a diagnosable radiographic procedure. METHODS Ten models were trained for different levels of angular super-resolution (ASR), denoted ASRN, where for every N+2 frames, the first and last frames were submitted as inputs to super-resolve the intermediate N frames. RESULTS Large arterial structures were well preserved at all ASR levels. Small arteries were adequately visualized at lower ASR levels but progressively blurred at higher ASR levels. Noninferiority of image quality was demonstrated for ASR1-4 (99.75% confidence intervals: -0.16-0.03, -0.19-0.04, -0.17-0.01, -0.15-0.05, respectively). CONCLUSIONS The ASR technique is capable of super-resolving rotational angiographic frames at intermediate projection angles.
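The ASRN input/output contract (two endpoint frames in, N intermediate frames out) can be made concrete with the most naive possible baseline, linear blending of the endpoints. This is emphatically not the paper's learned model, just a sketch of the interface:

```python
import numpy as np

def linear_asr(first, last, n):
    """Naive angular-super-resolution baseline: linearly blend the two
    endpoint projections to synthesize n intermediate frames."""
    weights = np.linspace(0.0, 1.0, n + 2)[1:-1]  # interior weights only
    return [(1 - w) * first + w * last for w in weights]

first = np.zeros((4, 4))  # toy endpoint projections
last = np.ones((4, 4))
mid = linear_asr(first, last, 3)  # ASR3: 3 synthesized frames per gap
print([m.mean() for m in mid])
```

A deep model replaces the fixed blend with a learned mapping that can move vessel structures between angles instead of cross-fading them.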
|
43
|
Zhang Z, Yu S, Qin W, Liang X, Xie Y, Cao G. Self-supervised CT super-resolution with hybrid model. Comput Biol Med 2021; 138:104775. [PMID: 34666243 DOI: 10.1016/j.compbiomed.2021.104775] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Revised: 08/14/2021] [Accepted: 08/17/2021] [Indexed: 12/19/2022]
Abstract
Software-based methods can improve CT spatial resolution without changing the hardware of the scanner or increasing the radiation dose to the object. In this work, we aim to develop a deep learning (DL) based CT super-resolution (SR) method that can reconstruct low-resolution (LR) sinograms into high-resolution (HR) CT images. We mathematically analyzed imaging processes in the CT SR imaging problem and synergistically integrated the SR model in the sinogram domain and the deblur model in the image domain into a hybrid model (SADIR). SADIR incorporates the CT domain knowledge and is unrolled into a DL network (SADIR-Net). The SADIR-Net is a self-supervised network, which can be trained and tested with a single sinogram. SADIR-Net was evaluated through SR CT imaging of a Catphan700 physical phantom and a real porcine phantom, and its performance was compared to the other state-of-the-art (SotA) DL-based CT SR methods. On both phantoms, SADIR-Net obtains the highest information fidelity criterion (IFC), structure similarity index (SSIM), and lowest root-mean-square-error (RMSE). As to the modulation transfer function (MTF), SADIR-Net also obtains the best result and improves the MTF50% by 69.2% and MTF10% by 69.5% compared with FBP. Alternatively, the spatial resolutions at MTF50% and MTF10% from SADIR-Net can reach 91.3% and 89.3% of the counterparts reconstructed from the HR sinogram with FBP. The results show that SADIR-Net can provide performance comparable to the other SotA methods for CT SR reconstruction, especially in the case of extremely limited training data or even no data at all. Thus, the SADIR method could find use in improving CT resolution without changing the hardware of the scanner or increasing the radiation dose to the object.
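The MTF50% and MTF10% figures above are the spatial frequencies at which the modulation transfer function drops to 50% and 10% of its zero-frequency value. A hedged sketch of how such numbers are read off, using a synthetic Gaussian line-spread function rather than any measured phantom data:

```python
import numpy as np

# Synthetic line-spread function (LSF); sigma is an arbitrary toy value.
x = np.arange(-32, 32)
sigma = 2.0
lsf = np.exp(-x ** 2 / (2 * sigma ** 2))

# MTF = normalized magnitude of the LSF's Fourier transform.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                       # normalize to DC
freqs = np.fft.rfftfreq(lsf.size)   # cycles per pixel

def freq_at(level):
    """First sampled frequency where the MTF drops below `level`."""
    return freqs[np.argmax(mtf < level)]

print(freq_at(0.5), freq_at(0.1))   # MTF50% and MTF10% frequencies
```

Sharper reconstructions shift both crossings to higher frequencies, which is what the reported 69% MTF improvements describe.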
Affiliation(s)
- Zhicheng Zhang: Department of Radiation Oncology, Stanford University, Stanford, 94305-5847, CA, USA; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Shaode Yu: College of Information and Communication Engineering, Communication University of China, Beijing 100024, China
- Wenjian Qin: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Xiaokun Liang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Yaoqin Xie: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Guohua Cao: Virginia Polytechnic Institute & State University, Blacksburg, VA 24061, USA
|
44
|
Watanabe S, Sakaguchi K, Murata D, Ishii K. Deep learning-based Hounsfield unit value measurement method for bolus tracking images in cerebral computed tomography angiography. Comput Biol Med 2021; 137:104824. [PMID: 34488029 DOI: 10.1016/j.compbiomed.2021.104824] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2021] [Revised: 08/28/2021] [Accepted: 08/28/2021] [Indexed: 10/20/2022]
Abstract
BACKGROUND Patient movement during bolus tracking (BT) impairs the accuracy of Hounsfield unit (HU) measurements. This study assesses the accuracy of measuring HU values in the internal carotid artery (ICA) using an original deep learning (DL)-based method compared with the conventional region-of-interest (ROI) setting method. METHOD A total of 722 BT images of 127 patients who underwent cerebral computed tomography angiography were selected retrospectively and divided into training, validation, and test data. To segment the ICA using our proposed method, DL was performed using a convolutional neural network. The HU values in the ICA were obtained using our DL-based method and the ROI setting method. The ROI setting was performed with and without correction for patient body movement (corrected ROI and settled ROI). We compared the proposed DL-based method with the settled ROI, evaluating HU value differences from the corrected ROI according to whether patients experienced involuntary movement during BT image acquisition. RESULTS Differences in HU values from the corrected ROI for the settled ROI and the proposed method were 23.8 ± 12.7 HU and 9.0 ± 6.4 HU in patients with body movement, and 1.1 ± 1.6 HU and 3.9 ± 4.7 HU in patients without body movement, respectively. Both comparisons showed significant differences (P < 0.01). CONCLUSION The DL-based method can improve the accuracy of HU value measurements for the ICA in BT images with patient involuntary movement.
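Once a network (or a manually placed ROI) yields a binary vessel mask, the tracked HU value is simply the mean CT number inside the mask. A minimal sketch on synthetic values (the HU numbers and geometry are illustrative only):

```python
import numpy as np

ct = np.full((6, 6), -50.0)        # background tissue, HU
ct[2:4, 2:4] = 180.0               # contrast-filled vessel lumen

mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 2:4] = True              # segmentation output (here: ground truth)

hu = ct[mask].mean()               # tracked HU value inside the vessel
print(hu)  # 180.0
```

If the patient moves and a static ROI slides off the lumen, background voxels enter the average and drag the measured HU down, which is the error mode the DL segmentation is meant to avoid.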
Affiliation(s)
- Shota Watanabe
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University Hospital, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan; Radiology Center, Kindai University Hospital, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan.
- Kenta Sakaguchi
- Radiology Center, Kindai University Hospital, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan.
- Daisuke Murata
- Radiology Center, Kindai University Hospital, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan.
- Kazunari Ishii
- Department of Radiology, Kindai University Faculty of Medicine, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan.
45
Lv Y, Ma H, Li J, Liu S. Fusing dense and ReZero residual networks for super-resolution of retinal images. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2021.05.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
46
Yang P, Xu L, Wan Y, Yang J, Xue Y, Jiang Y, Luo C, Wang J, Niu T. Deep neural network-based approach to improving radiomics analysis reproducibility in liver cancer: effect on image resampling. Phys Med Biol 2021; 66. [PMID: 34293730 DOI: 10.1088/1361-6560/ac16e8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Accepted: 07/22/2021] [Indexed: 12/14/2022]
Abstract
Objectives. To test the effect of traditional up-sampling slice thickness (ST) methods on the reproducibility of CT radiomics features of liver tumors and investigate the improvement using a deep neural network (DNN) scheme. Methods. CT images with ≤ 1 mm ST in the public dataset were converted to low-resolution (3 mm, 5 mm) CT images. A DNN model was trained for the conversion from 3 mm and 5 mm ST to 1 mm ST and compared with conventional interpolation-based methods (cubic, linear, nearest) using the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR). Radiomics features were extracted from the tumor and tumor ring regions. The reproducibility of features from images converted using the DNN and interpolation schemes was assessed using the concordance correlation coefficient (CCC) with a cutoff of 0.85. The paired t-test and Mann-Whitney U test were used to compare the evaluation metrics, where appropriate. Results. CT images of 108 patients were used for training (n = 63), validation (n = 11), and testing (n = 34). The DNN method showed significantly higher PSNR and SSIM values (p < 0.05) than the interpolation-based methods. The DNN method also showed a significantly higher CCC value than the interpolation-based methods. For features in the tumor region, compared with the cubic interpolation approach, the number of reproducible features increased from 393 (82%) to 422 (88%) for the 3 mm to 1 mm conversion, and from 305 (64%) to 353 (74%) for the 5 mm to 1 mm conversion. For features in the tumor ring region, the improvement was from 395 (82%) to 431 (90%) and from 290 (60%) to 335 (70%), respectively. Conclusions. The DNN-based ST up-sampling approach can improve the reproducibility of CT radiomics features in liver tumors, promoting the standardization of CT radiomics studies in liver cancer.
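The reproducibility criterion used above — the concordance correlation coefficient with a 0.85 cutoff — can be sketched as follows. The `ccc` helper is our own minimal implementation of Lin's CCC, and the feature values are synthetic stand-ins, not the study's data:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two readings."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()             # population variances
    cov = ((x - mx) * (y - my)).mean()    # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(0)
# Hypothetical radiomic feature measured on the reference 1 mm images,
# then on DNN-converted and on coarsely interpolated images.
feature_ref = rng.normal(10, 2, size=50)
feature_dnn = feature_ref + rng.normal(0, 0.2, size=50)  # small error
feature_lin = feature_ref + rng.normal(0, 3.0, size=50)  # large error

# A feature counts as reproducible when CCC >= 0.85.
reproducible_dnn = ccc(feature_ref, feature_dnn) >= 0.85
reproducible_lin = ccc(feature_ref, feature_lin) >= 0.85
```

The CCC penalizes both scatter and systematic shift, which is why it is preferred over plain correlation for agreement studies like this one.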
Affiliation(s)
- Pengfei Yang
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, Zhejiang, People's Republic of China
- Lei Xu
- Department of Radiation Oncology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine; Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, People's Republic of China
- Yidong Wan
- Department of Radiation Oncology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine; Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, People's Republic of China
- Jing Yang
- Department of Radiation Oncology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine; Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, People's Republic of China
- Yi Xue
- Department of Radiation Oncology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine; Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, People's Republic of China
- Yangkang Jiang
- Department of Radiation Oncology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine; Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, People's Republic of China
- Chen Luo
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, Guangdong, People's Republic of China
- Jing Wang
- Department of Radiation Oncology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine; Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, People's Republic of China
- Tianye Niu
- Nuclear & Radiological Engineering and Medical Physics Programs, Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, United States of America
47
Kitahara H, Nagatani Y, Otani H, Nakayama R, Kida Y, Sonoda A, Watanabe Y. A novel strategy to develop deep learning for image super-resolution using original ultra-high-resolution computed tomography images of lung as training dataset. Jpn J Radiol 2021; 40:38-47. [PMID: 34318444 PMCID: PMC8315896 DOI: 10.1007/s11604-021-01184-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Accepted: 07/21/2021] [Indexed: 11/26/2022]
Abstract
PURPOSE To improve the image quality of inflated fixed cadaveric human lungs by utilizing ultra-high-resolution computed tomography (U-HRCT) as a training dataset for super-resolution processing using deep learning (SR-DL). MATERIALS AND METHODS Image data of nine cadaveric human lungs were acquired using U-HRCT. Three different matrix images of U-HRCT were obtained with two acquisition modes: normal mode (512-matrix images) and super-high-resolution (SHR) mode (1024- and 2048-matrix images). SR-DL used the 512- and 1024-matrix images as training data for deep learning. Virtual 2048-matrix images were generated by applying the trained SR-DL to the 1024-matrix images. Three independent observers scored normal anatomical structures and abnormal computed tomography (CT) findings on both types of 2048-matrix images on a 3-point scale relative to the 1024-matrix images. Image noise values were calculated quantitatively. Moreover, the edge rise distance (ERD) and edge rise slope (ERS) were calculated from the CT attenuation profile to evaluate margin sharpness. RESULTS The virtual 2048-matrix images significantly improved visualization of normal anatomical structures and abnormal CT findings, except for consolidation and nodules, compared with the conventional 2048-matrix images (p < 0.01). Quantitative noise values were significantly lower in the virtual 2048-matrix images than in the conventional 2048-matrix images (p < 0.001). ERD was significantly shorter, and ERS significantly higher, in the virtual 2048-matrix images than in the conventional 2048-matrix images (p < 0.01). CONCLUSION SR-DL using original U-HRCT images as a training dataset might be a promising tool for image enhancement in terms of margin sharpness and image noise reduction. By applying the trained SR-DL to U-HRCT SHR mode images, virtual ultra-high-resolution images were obtained that surpassed the image quality of the unmodified SHR mode images.
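The margin-sharpness metrics used above, ERD and ERS, can be computed from a 1-D attenuation profile. The sketch below is our own illustration on synthetic sigmoid edges (the 10%–90% rise convention is assumed; the paper does not state its thresholds), not the authors' code:

```python
import numpy as np

def edge_rise(profile, spacing_mm=1.0):
    """Return (ERD, ERS) for a monotonically rising edge profile.

    ERD: distance over which the profile rises from 10% to 90% of the
    edge height. ERS: that height difference divided by the ERD.
    """
    p = np.asarray(profile, float)
    lo, hi = p.min(), p.max()
    t10 = lo + 0.1 * (hi - lo)
    t90 = lo + 0.9 * (hi - lo)
    x = np.arange(p.size) * spacing_mm
    # Invert the profile: np.interp needs the xp argument increasing,
    # which holds here because the profile rises monotonically.
    x10 = np.interp(t10, p, x)
    x90 = np.interp(t90, p, x)
    erd = x90 - x10
    ers = (t90 - t10) / erd
    return erd, ers

# A sharper edge (steeper sigmoid) should give a shorter ERD and a
# higher ERS -- exactly the direction of improvement reported above.
x = np.linspace(-5, 5, 101)
blurry = 1000 / (1 + np.exp(-x))      # wide transition
sharp = 1000 / (1 + np.exp(-4 * x))   # narrow transition
erd_b, ers_b = edge_rise(blurry, spacing_mm=0.1)
erd_s, ers_s = edge_rise(sharp, spacing_mm=0.1)
```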
Affiliation(s)
- Hitoshi Kitahara
- Department of Radiology, Shiga University of Medical Science, Seta Tsukinowa-Cho, Otsu, Shiga, 520-2192, Japan.
- Yukihiro Nagatani
- Department of Radiology, Shiga University of Medical Science, Seta Tsukinowa-Cho, Otsu, Shiga, 520-2192, Japan
- Hideji Otani
- Department of Radiology, Shiga University of Medical Science, Seta Tsukinowa-Cho, Otsu, Shiga, 520-2192, Japan
- Ryohei Nakayama
- Department of Electronic and Computer Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu, Shiga, 525-8577, Japan
- Yukako Kida
- Department of Radiology, Shiga University of Medical Science, Seta Tsukinowa-Cho, Otsu, Shiga, 520-2192, Japan
- Akinaga Sonoda
- Department of Radiology, Shiga University of Medical Science, Seta Tsukinowa-Cho, Otsu, Shiga, 520-2192, Japan
- Yoshiyuki Watanabe
- Department of Radiology, Shiga University of Medical Science, Seta Tsukinowa-Cho, Otsu, Shiga, 520-2192, Japan
48
Xie H, Lei Y, Wang T, Tian Z, Roper J, Bradley JD, Curran WJ, Tang X, Liu T, Yang X. High through-plane resolution CT imaging with self-supervised deep learning. Phys Med Biol 2021; 66:145013. [PMID: 34049297 DOI: 10.1088/1361-6560/ac0684] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 05/28/2021] [Indexed: 12/11/2022]
Abstract
CT images for radiotherapy planning are usually acquired in thick slices to reduce the imaging dose, especially for pediatric patients, and to lessen the need for contouring and treatment planning on more slices. However, low through-plane resolution may degrade the accuracy of dose calculations. In this paper, a self-supervised deep learning workflow is proposed to synthesize high through-plane resolution CT images by learning from their high in-plane resolution features. The proposed workflow was designed to facilitate neural networks to learn the mapping from low-resolution (LR) to high-resolution (HR) images in the axial plane. During the inference step, HR sagittal and coronal images were generated by feeding the respective LR sagittal and coronal images to two parallel-trained neural networks. The CT simulation images of a cohort of 75 patients with head and neck cancer (1 mm slice thickness) and 200 CT images of a cohort of 20 lung cancer patients (3 mm slice thickness) were retrospectively investigated in a cross-validation manner. The HR images generated with the proposed method were inspected qualitatively (visual quality, image intensity profiles, and a preliminary observer study) and quantitatively (mean absolute error, edge keeping index, structural similarity index measurement, information fidelity criterion, and visual information fidelity in the pixel domain), taking the original CT images of the head and neck and lung cancer patients as the reference. The qualitative results showed the capability of the proposed method for generating high through-plane resolution CT images with data from both groups of cancer patients. All improvements in the measured metrics were confirmed to be statistically significant by paired two-sample t-test analysis. 
The innovative point of the work is that the proposed deep learning workflow for CT image generation with high through-plane resolution in radiotherapy is self-supervised, meaning that it does not rely on ground truth CT images to train the network. In addition, the assumption that the in-plane HR information can supervise the through-plane HR generation is confirmed. We hope that this will inspire more research on this topic to further improve the through-plane resolution of medical images.
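The inference trick described above — applying a 2-D up-scaler, trained on in-plane slices, to the sagittal and coronal planes so the through-plane (z) direction gains resolution — can be sketched on a volume. Here `upscale_2d` is only a nearest-neighbour placeholder standing in for the trained network, which we do not have:

```python
import numpy as np

def upscale_2d(img, factor):
    """Placeholder for the trained LR->HR network: repeat rows.

    In the real workflow this would be a neural network trained on
    axial slices; any 2-D up-scaler with the same interface fits here.
    """
    return np.repeat(img, factor, axis=0)

def synthesize_hr_volume(volume, factor):
    """Apply the 2-D up-scaler to every sagittal slice of a volume.

    Volume axes are (z, y, x); a sagittal slice volume[:, :, i] has
    shape (z, y), so up-scaling its first axis refines z.
    """
    slices = [upscale_2d(volume[:, :, i], factor)
              for i in range(volume.shape[2])]
    return np.stack(slices, axis=2)   # shape (z * factor, y, x)

lr = np.random.rand(10, 64, 64)       # 10 thick slices: coarse in z
hr = synthesize_hr_volume(lr, factor=3)
```

Running the same procedure over coronal slices (`volume[:, i, :]`) with a second network, as the paper does, and fusing the two outputs is a straightforward extension of this sketch.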
Affiliation(s)
- Huiqiao Xie
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Zhen Tian
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Justin Roper
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Jeffrey D Bradley
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Walter J Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Xiangyang Tang
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, United States of America
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
49
Mottola M, Ursprung S, Rundo L, Sanchez LE, Klatte T, Mendichovszky I, Stewart GD, Sala E, Bevilacqua A. Reproducibility of CT-based radiomic features against image resampling and perturbations for tumour and healthy kidney in renal cancer patients. Sci Rep 2021; 11:11542. [PMID: 34078993 PMCID: PMC8172898 DOI: 10.1038/s41598-021-90985-y] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2020] [Accepted: 05/10/2021] [Indexed: 12/19/2022] Open
Abstract
Computed tomography (CT) is widely used in oncology for morphological evaluation and diagnosis, commonly through visual assessment, often exploiting semi-automatic tools as well. Well-established automatic methods for quantitative imaging offer the opportunity to enrich the radiologist's interpretation with a large number of radiomic features, which need to be highly reproducible to be used reliably in clinical practice. This study investigates feature reproducibility against noise, varying resolutions, and segmentations (achieved by perturbing the regions of interest) in a CT dataset with heterogeneous voxel size comprising 98 renal cell carcinomas (RCCs) and 93 contralateral normal kidneys (CK). In particular, first-order (FO) features and second-order texture features based on both 2D and 3D grey-level co-occurrence matrices (GLCMs) were considered. Moreover, this study carries out a comparative analysis of three of the most commonly used interpolation methods, one of which must be selected before any resampling procedure. Results showed that Lanczos interpolation is the most effective at preserving original information during resampling, where the median slice resolution coupled with the native slice spacing yields the best reproducibility, with 94.6% and 87.7% of features reproducible in RCC and CK, respectively. GLCMs show their maximum reproducibility when used at short distances.
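The second-order texture features examined above are derived from grey-level co-occurrence matrices. The minimal hand-rolled version below, for a single horizontal offset at distance 1, just shows what is being counted; a real pipeline would use a library implementation such as scikit-image's `graycomatrix`:

```python
import numpy as np

def glcm(image, levels):
    """Count co-occurring grey-level pairs (i, j) at offset (0, 1).

    Entry m[i, j] is the number of times a voxel of grey level i has a
    right-hand neighbour of grey level j. Texture features (contrast,
    homogeneity, ...) are then computed from the normalized matrix.
    """
    m = np.zeros((levels, levels), dtype=int)
    left, right = image[:, :-1], image[:, 1:]
    for i, j in zip(left.ravel(), right.ravel()):
        m[i, j] += 1
    return m

# Toy 3x3 image quantized to 3 grey levels.
img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 2]])
m = glcm(img, levels=3)
# Pairs counted row by row: (0,0), (0,1), (1,2), (2,2), (2,2), (2,2).
```

Because the counts depend on the grey-level quantization and the voxel grid, any resampling step changes the matrix — which is exactly why the study checks GLCM feature reproducibility against interpolation choices.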
Affiliation(s)
- Margherita Mottola
- Department of Electrical, Electronic, and Information Engineering (DEI), University of Bologna, 40136, Bologna, Italy
- Advanced Research Center on Electronic Systems (ARCES), University of Bologna, 40125, Bologna, Italy
- Stephan Ursprung
- Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, CB2 0RE, UK
- Leonardo Rundo
- Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, CB2 0RE, UK
- Lorena Escudero Sanchez
- Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, CB2 0RE, UK
- Tobias Klatte
- Department of Surgery, University of Cambridge, Cambridge, CB2 0QQ, UK
- Department of Urology, Royal Bournemouth Hospital, Bournemouth, BH7 7DW, UK
- Grant D Stewart
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, CB2 0RE, UK
- Department of Surgery, University of Cambridge, Cambridge, CB2 0QQ, UK
- Evis Sala
- Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, CB2 0RE, UK
- Alessandro Bevilacqua
- Advanced Research Center on Electronic Systems (ARCES), University of Bologna, 40125, Bologna, Italy.
- Department of Computer Science and Engineering (DISI), University of Bologna, 40136, Bologna, Italy.
50
Sakaguchi K, Kaida H, Yoshida S, Ishii K. Attenuation correction using deep learning for brain perfusion SPECT images. Ann Nucl Med 2021; 35:589-599. [PMID: 33751364 DOI: 10.1007/s12149-021-01600-z] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Accepted: 02/15/2021] [Indexed: 12/24/2022]
Abstract
OBJECTIVE Non-uniform attenuation correction using computed tomography (CT) improves the image quality and quantification of single-photon emission computed tomography (SPECT). However, it is not widely used because it requires a SPECT/CT scanner. This study constructs a convolutional neural network (CNN) to generate attenuation-corrected SPECT images directly from non-attenuation-corrected SPECT images. METHODS We constructed an auto-encoder (AE) using a CNN to correct attenuation in brain perfusion SPECT images. SPECT image datasets of 270 (44,528 slices including augmentation), 60 (5002 slices), and 30 (2558 slices) cases were used for training, validation, and testing, respectively. The acquired projection data were reconstructed in three patterns: uniform attenuation correction using Chang's method (Chang-AC), non-uniform attenuation correction using CT (CT-AC), and no attenuation correction (No-AC). The AE learned an end-to-end mapping between the No-AC and CT-AC images. The No-AC images in the test dataset were loaded into the trained AE, which generated images simulating the CT-AC images as output. The generated SPECT images were employed as attenuation-corrected images using the AE (AE-AC). The accuracy of the AE-AC images was evaluated in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity metric (SSIM). The intensities of the AE-AC and CT-AC images were compared by voxel-by-voxel and region-by-region analysis. RESULTS The PSNRs of the AE-AC and Chang-AC images, computed against the CT-AC images, were 62.2 and 57.9, and their SSIM values were 0.9995 and 0.9985, respectively. The AE-AC and CT-AC images were visually and statistically in good agreement. CONCLUSIONS The proposed AE-AC method yields highly accurate attenuation-corrected brain perfusion SPECT images.
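The two accuracy metrics used above, PSNR and SSIM, can be sketched in single-window form. Note the published SSIM averages over local windows; the whole-image version below is a simplification for illustration, and the images are random stand-ins, not SPECT data:

```python
import numpy as np

def psnr(ref, img, data_range):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range):
    """Global (single-window) SSIM with the standard constants."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
ct_ac = rng.random((32, 32))                       # stand-in "ground truth"
ae_ac = ct_ac + rng.normal(0, 0.01, (32, 32))      # close reconstruction
no_ac = ct_ac + rng.normal(0, 0.20, (32, 32))      # uncorrected image
```

Higher PSNR and an SSIM approaching 1 against the CT-AC reference are the criteria by which the AE-AC images outperform Chang-AC in the study.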
Affiliation(s)
- Kenta Sakaguchi
- Radiology Center, Kindai University Hospital, 377-2 Ohnohigashi, Osakasayama, Osaka, 589-8511, Japan.
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, Osaka, 589-8511, Japan.
- Hayato Kaida
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, Osaka, 589-8511, Japan
- Department of Radiology, Faculty of Medicine, Kindai University, Osaka, 589-8511, Japan
- Shuhei Yoshida
- Radiology Center, Kindai University Hospital, 377-2 Ohnohigashi, Osakasayama, Osaka, 589-8511, Japan
- Kazunari Ishii
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, Osaka, 589-8511, Japan
- Department of Radiology, Faculty of Medicine, Kindai University, Osaka, 589-8511, Japan