1
Kim H, Ryu SM, Keum JS, Oh SI, Kim KN, Shin YH, Jeon IH, Koh KH. Clinical validation of enhanced CT imaging for distal radius fractures through conditional Generative Adversarial Networks (cGAN). PLoS One 2024;19:e0308346. [PMID: 39150966] [PMCID: PMC11329132] [DOI: 10.1371/journal.pone.0308346]
Abstract
BACKGROUND/PURPOSE Distal radius fractures (DRFs) account for approximately 18% of fractures in patients 65 years and older. While plain radiographs are the standard, the value of high-resolution computed tomography (CT) for the detailed imaging crucial to diagnosis, prognosis, and intervention planning is increasingly recognized. High-definition 3D reconstructions from CT scans are vital for applications such as 3D printing in orthopedics and for the utility of mobile C-arm CT in orthopedic diagnostics. However, concerns over radiation exposure and suboptimal image resolution from some devices necessitate the exploration of advanced computational techniques for refining CT imaging without compromising safety. This study therefore aimed to use conditional Generative Adversarial Networks (cGAN) to improve the resolution of 3 mm CT images (CT enhancement). METHODS Following institutional review board approval, paired 3 mm-1 mm CT data from 11 patients with DRFs were collected. A cGAN was used to improve the resolution of the 3 mm CT images to match that of the 1 mm images (CT enhancement). Two distinct methods were employed for training and image generation. In Method 1, a raw 3 mm CT image was used as input with the aim of generating a raw 1 mm CT image. Method 2 was designed to emphasize the difference between the 3 mm and 1 mm images: using a raw 3 mm CT image as input, it produced the difference in image values between the 3 mm and 1 mm scans. Image quality was evaluated both with quantitative metrics, namely peak signal-to-noise ratio (PSNR), mean squared error (MSE), and structural similarity index (SSIM), and with qualitative assessments by two orthopedic surgeons, who graded each image from 1 to 4 (a lower number indicating higher resolution quality). RESULTS Quantitative evaluations showed that the proposed techniques, particularly the difference-based Method 2, consistently outperformed traditional approaches in achieving higher image resolution.
In the qualitative evaluation by the two clinicians, Method 2 produced better-quality images (mean grade: Method 1, 2.7; Method 2, 2.2), and Method 2 images were more often chosen as most similar to the 1 mm slice images (15 vs 7, p = .201). CONCLUSION In this study using cGAN to enhance CT imaging resolution, the approach focusing on the difference between the 3 mm and 1 mm images (Method 2) consistently outperformed the direct-generation approach (Method 1).
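The quantitative metrics named in this abstract (MSE, PSNR, SSIM) are standard image-similarity measures and are straightforward to compute. A minimal NumPy sketch follows; note that the SSIM here is a simplified single-window (global) version for illustration, whereas practical implementations average SSIM over local sliding windows:

```python
import numpy as np

def mse(a, b):
    # Mean squared error between two same-shape images.
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.mean((a - b) ** 2))

def psnr(a, b, data_range=255.0):
    # Peak signal-to-noise ratio in dB; higher means closer to the reference.
    err = mse(a, b)
    if err == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range**2 / err)

def ssim_global(a, b, data_range=255.0):
    # Single-window SSIM (Wang et al., 2004). This global variant only
    # illustrates the formula; real implementations use local windows.
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2)
    )
```

For a perfect reconstruction, MSE is 0, PSNR is infinite, and SSIM is 1; any degradation pushes MSE up and PSNR/SSIM down.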
Affiliation(s)
- Hyojune Kim
- Department of Orthopedic Surgery, Hospital of Chung-Ang University of Medicine, Dongjak-gu, Seoul, Republic of Korea
- Seung Min Ryu
- Department of Orthopedic Surgery, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Young Ho Shin
- Department of Orthopedic Surgery, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- In-Ho Jeon
- Department of Orthopedic Surgery, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Kyoung Hwan Koh
- Department of Orthopedic Surgery, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
2
Frazer LL, Louis N, Zbijewski W, Vaishnav J, Clark K, Nicolella DP. Super-resolution of clinical CT: Revealing microarchitecture in whole bone clinical CT image data. Bone 2024;185:117115. [PMID: 38740120] [PMCID: PMC11176006] [DOI: 10.1016/j.bone.2024.117115]
Abstract
Osteoporotic fractures, prevalent in the elderly, pose a significant health and economic burden. Current methods for predicting fracture risk, primarily relying on bone mineral density, provide only modest accuracy. If better spatial resolution of trabecular bone in a clinical scan were available, a more complete assessment of fracture risk could be obtained using microarchitectural measures of bone (i.e., trabecular thickness, trabecular spacing, bone volume fraction, etc.). However, increased resolution comes at the cost of increased radiation, or can only be applied to small volumes at distal skeletal locations. This study explores super-resolution (SR) technology to enhance clinical CT scans of proximal femurs and better reveal the trabecular microarchitecture of bone. Using a deep-learning-based SR approach (deep learning being a subset of artificial intelligence), low-resolution clinical CT images were upscaled to higher resolution and compared to corresponding MicroCT-derived images. SR-derived 2-dimensional microarchitectural measurements, such as degree of anisotropy, bone volume fraction, trabecular spacing, and trabecular thickness, were within 16% error compared to MicroCT data, whereas connectivity density exhibited larger error (as high as 1094%). SR-derived 3-dimensional microarchitectural metrics exhibited errors <18%. This work showcases the potential of SR technology to enhance clinical bone imaging and holds promise for improving fracture risk assessments and osteoporosis detection. Further research, including larger datasets and refined techniques, can advance SR's clinical utility, enabling comprehensive microstructural assessment across whole bones and thereby improving fracture risk predictions and patient-specific treatment strategies.
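Of the microarchitectural measures listed above, bone volume fraction (BV/TV) is the simplest: the fraction of bone voxels within a segmented volume of interest. A minimal sketch, assuming a pre-thresholded binary volume (the example geometry is made up, not from the study):

```python
import numpy as np

def bone_volume_fraction(binary_volume):
    # BV/TV: bone voxels divided by all voxels in the volume of interest (VOI).
    v = np.asarray(binary_volume, dtype=bool)
    return float(v.sum()) / v.size

# Hypothetical VOI: a 10x10x10 block of "bone" inside a 20x20x20 volume.
voi = np.zeros((20, 20, 20), dtype=bool)
voi[5:15, 5:15, 5:15] = True
bvtv = bone_volume_fraction(voi)  # 1000 bone voxels / 8000 total = 0.125
```

Measures such as trabecular thickness or connectivity density require considerably more machinery (distance transforms, Euler characteristics), which is consistent with the abstract's finding that connectivity density is the most error-prone metric to recover from upscaled images.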
Affiliation(s)
- Nathan Louis
- Southwest Research Institute, USA; University of Michigan, USA
- Kal Clark
- University of Texas Health Science Center at San Antonio, USA
3
Li G, Ji D, Chang Y, Tang Z, Cheng D. Successful management of concurrent COVID-19 and Pneumocystis jirovecii pneumonia in kidney transplant recipients: a case series. BMC Pulm Med 2023;23:458. [PMID: 37990199] [PMCID: PMC10664536] [DOI: 10.1186/s12890-023-02764-2]
Abstract
BACKGROUND Pneumocystis pneumonia (PCP) is a life-threatening pulmonary fungal infection that predominantly affects immunocompromised individuals, including kidney transplant recipients. Recent years have witnessed a rising incidence of PCP in this vulnerable population, leading to graft loss and increased mortality. Immunosuppression, which is essential in transplant recipients, heightens susceptibility to viral and opportunistic infections, magnifying the clinical challenge. Concurrently, the global impact of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has been profound. Kidney transplant recipients have faced severe outcomes when infected with SARS-CoV-2, often requiring intensive care. Co-infection with COVID-19 and PCP in this context represents a complex clinical scenario that requires precise management strategies, involving a delicate balance between immunosuppression and immune activation. Although there have been case reports on management of COVID-19 and PCP in kidney transplant recipients, guidance on how to tackle these infections when they occur concurrently remains limited. CASE PRESENTATIONS We have encountered four kidney transplant recipients with concurrent COVID-19 and PCP infection. These patients received comprehensive treatment that included adjustment of their maintenance immunosuppressive regimen, anti-pneumocystis therapy, treatment for COVID-19 and other infections, and symptomatic and supportive care. After this multifaceted treatment strategy, all of these patients improved significantly and had favorable outcomes. CONCLUSIONS We have successfully managed four kidney transplant recipients co-infected with COVID-19 and PCP. While PCP is a known complication of immunosuppressive therapy, its incidence in patients with COVID-19 highlights the complexity of dual infections. 
Our findings suggest that tailored immunosuppressive regimens, coupled with antiviral and antimicrobial therapies, can lead to clinical improvement in such cases. Further research is needed to refine risk assessment and therapeutic strategies, which will ultimately enhance the care of this vulnerable population.
Affiliation(s)
- Guoping Li
- Department of Nephrology, Nanjing Yimin Hospital, Nanjing, 211100, China
- Daxi Ji
- National Clinical Research Center of Kidney Diseases, Jinling Hospital, Nanjing University School of Medicine, Nanjing, 200016, China
- Youcheng Chang
- Department of Nephrology, Nanjing Yimin Hospital, Nanjing, 211100, China
- Zheng Tang
- National Clinical Research Center of Kidney Diseases, Jinling Hospital, Nanjing University School of Medicine, Nanjing, 200016, China
- Dongrui Cheng
- National Clinical Research Center of Kidney Diseases, Jinling Hospital, Nanjing University School of Medicine, Nanjing, 200016, China
4
Lee J, Seo H, Choi YJ, Lee C, Kim S, Lee YS, Lee S, Kim E. An endodontic forecasting model based on the analysis of preoperative dental radiographs: A pilot study on an endodontic predictive deep neural network. J Endod 2023:S0099-2399(23)00178-4. [PMID: 37019378] [DOI: 10.1016/j.joen.2023.03.015]
Abstract
INTRODUCTION This study aimed to evaluate the use of deep convolutional neural network (DCNN) algorithms to detect clinical features and predict the three-year outcome of endodontic treatment on preoperative periapical radiographs. METHODS A database of single-root premolars that received endodontic treatment or retreatment by endodontists, with a known three-year outcome, was prepared (n = 598). We constructed a 17-layer DCNN with a self-attention layer (PRESSAN-17), and the model was trained, validated, and tested to 1) detect seven clinical features, i.e., full coverage restoration (FCR), presence of proximal teeth (PRX), coronal defect (COD), root rest (RRS), canal visibility (CAV), previous root filling (PRF), and periapical radiolucency (PAR), and 2) predict the three-year endodontic prognosis by analyzing preoperative periapical radiographs as input. During the prognostication test, a conventional DCNN without a self-attention layer (RESNET-18) was tested for comparison. Accuracy and area under the receiver operating characteristic (ROC) curve (AUC) were the main performance measures. Gradient-weighted class activation mapping (Grad-CAM) was used to visualize weighted heatmaps. RESULTS PRESSAN-17 detected FCR (AUC = 0.975), PRX (0.866), COD (0.672), RRS (0.989), PRF (0.879), and PAR (0.690) significantly better than the no-information rate (p < 0.05). Comparing the mean accuracy over 5-fold validation, PRESSAN-17 (67.0%) differed significantly from RESNET-18 (63.4%, p < 0.05). The area under the average ROC curve of PRESSAN-17 was 0.638, which was also significantly different from the no-information rate. Grad-CAM demonstrated that PRESSAN-17 correctly identified clinical features. CONCLUSIONS Deep convolutional neural networks may aid in the prognostication of endodontic treatment outcomes.
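AUC, the headline metric in this abstract, can be computed directly from scores and labels as the normalized Mann-Whitney U statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A small illustration with made-up prognosis scores (the data are hypothetical, not from the study):

```python
def auc(labels, scores):
    # AUC as the normalized Mann-Whitney U statistic: the fraction of
    # positive/negative pairs in which the positive case scores higher,
    # counting ties as half a win.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores: label 1 = favorable three-year outcome.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
model_auc = auc(labels, scores)  # 8 of 9 pairs ranked correctly = 8/9
```

An AUC of 0.5 corresponds to random ranking, which is why the paper compares its 0.638 against the no-information baseline rather than against zero.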
Affiliation(s)
- Junghoon Lee
- Microscope Center, Department of Conservative Dentistry, Yonsei University College of Dentistry, Seoul, Korea
- Hyunseok Seo
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST)
- Yoon Jeong Choi
- Department of Orthodontics, The Institute of Craniofacial Deformity, Yonsei University College of Dentistry, Seoul, Korea
- Chena Lee
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, 50-1 Yonsei-ro, Seodaemun-gu Seoul, Korea
- Sunil Kim
- Microscope Center, Department of Conservative Dentistry and Oral Science Research Center, Yonsei University College of Dentistry, Seoul, Korea
- Ye Sel Lee
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST)
- Sukjoon Lee
- Oral Science Research Center, Yonsei University College of Dentistry, Seoul, Korea
- Euiseong Kim
- Microscope Center, Department of Conservative Dentistry and Oral Science Research Center, Yonsei University College of Dentistry, Seoul, Korea
5
Chi J, Sun Z, Han X, Yu X, Wang H, Wu C. PILN: A posterior information learning network for blind reconstruction of lung CT images. Comput Methods Programs Biomed 2023;232:107449. [PMID: 36871547] [DOI: 10.1016/j.cmpb.2023.107449]
Abstract
BACKGROUND AND OBJECTIVE Computed tomography (CT) imaging has played a significant role in the diagnosis and treatment of various lung diseases, but degradations in CT images often cause the loss of detailed structural information and hinder clinicians' judgement. Reconstructing noise-free, high-resolution CT images with sharp details from degraded ones is therefore of great importance for computer-assisted diagnosis (CAD) systems. However, current image reconstruction methods struggle with the unknown parameters of the multiple degradations present in actual clinical images. METHODS To solve these problems, we propose a unified framework, called the Posterior Information Learning Network (PILN), for blind reconstruction of lung CT images. The framework consists of two stages. First, a noise level learning (NLL) network is proposed to quantify the Gaussian and artifact noise degradations into different levels; inception-residual modules extract multi-scale deep features from the noisy image, and residual self-attention structures refine these features into essential representations of the noise. Second, taking the estimated noise levels as prior information, a cyclic collaborative super-resolution (CyCoSR) network iteratively reconstructs the high-resolution CT image and estimates the blur kernel. Two convolutional modules based on a cross-attention transformer structure, named Reconstructor and Parser, are designed: the Reconstructor restores the high-resolution image from the degraded one under the guidance of the predicted blur kernel, while the Parser estimates the blur kernel from the reconstructed and degraded images. The NLL and CyCoSR networks form an end-to-end framework that handles multiple degradations simultaneously.
RESULTS The proposed PILN was applied to The Cancer Imaging Archive (TCIA) dataset and the Lung Nodule Analysis 2016 Challenge (LUNA16) dataset to evaluate its ability to reconstruct lung CT images. Compared with state-of-the-art image reconstruction algorithms, it provides higher-resolution images with less noise and sharper details with respect to quantitative benchmarks. CONCLUSIONS Extensive experimental results demonstrate that the proposed PILN achieves better performance in blind reconstruction of lung CT images, providing noise-free, detail-sharp, high-resolution images without knowing the parameters of the multiple degradation sources.
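For context, blind reconstruction problems of this kind are usually posed against the degradation model below; the notation is the common blind super-resolution convention, not notation taken from the paper itself:

```latex
% y : observed degraded image,  x : latent clean high-resolution image,
% k : unknown blur kernel,  \downarrow_s : downsampling by scale factor s,
% n : additive noise (e.g. Gaussian and artifact components).
y = (x \otimes k)\,\downarrow_s + \, n
```

"Blind" means that the kernel $k$, scale-related blur, and noise statistics are all unknown at inference time, which is exactly what PILN's Parser (kernel estimation) and NLL network (noise-level estimation) are designed to recover.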
Affiliation(s)
- Jianning Chi
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110167, China; Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang 110167, China.
- Zhiyi Sun
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110167, China
- Xiaoying Han
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110167, China
- Xiaosheng Yu
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110167, China
- Huan Wang
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110167, China
- Chengdong Wu
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110167, China
6
Zhang J, Wang X, Liu J, Zhang D, Lu Y, Zhou Y, Sun L, Hou S, Fan X, Shen S, Zhao J. Multispectral Drone Imagery and SRGAN for Rapid Phenotypic Mapping of Individual Chinese Cabbage Plants. Plant Phenomics 2022;2022:0007. [PMID: 37266137] [PMCID: PMC10230957] [DOI: 10.34133/plantphenomics.0007]
Abstract
The phenotypic parameters of crop plants can be evaluated accurately and quickly using an unmanned aerial vehicle (UAV) equipped with imaging equipment. In this study, hundreds of images of Chinese cabbage (Brassica rapa L. ssp. pekinensis) germplasm resources were collected with a low-cost UAV system and used to estimate cabbage width, length, and relative chlorophyll content (soil plant analysis development [SPAD] value). A super-resolution generative adversarial network (SRGAN) was used to improve the resolution of the original images, and the semantic segmentation network U-Net was used to segment each individual Chinese cabbage. Finally, the actual length and width were calculated from the pixel extent of each individual cabbage and the ground sampling distance. The SPAD value of Chinese cabbage was also analyzed from the RGB image of a single cabbage after background removal. After comparison of various models, the model in which visible images were enhanced with SRGAN showed the best performance. On the validation set, the U-Net model achieved a segmentation accuracy of 94.43%. For Chinese cabbage dimensions, the model was better at estimating length than width. The R2 of the visible-band model with images enhanced using SRGAN was greater than 0.84. For SPAD prediction, the R2 of the model with images enhanced with SRGAN was greater than 0.78. The root mean square errors of the 3 semantic segmentation network models were all less than 2.18. The results showed that the width, length, and SPAD values of Chinese cabbage predicted using UAV imaging were comparable to those obtained from manual measurements in the field. Overall, this research demonstrates not only that UAVs are useful for acquiring quantitative phenotypic data on Chinese cabbage but also that a regression model can provide reliable SPAD predictions.
This approach offers a reliable and convenient phenotyping tool for the investigation of Chinese cabbage breeding traits.
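The conversion from pixel measurements to real-world plant dimensions rests on the ground sampling distance (GSD). A sketch using the standard photogrammetric formula for a nadir-pointing camera; the camera and flight parameters below are made-up illustrative values, not those used in the study:

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             flight_height_m, image_width_px):
    # GSD in metres per pixel: the ground footprint of one pixel at nadir.
    return (sensor_width_mm * flight_height_m) / (focal_length_mm * image_width_px)

def object_length_m(length_px, gsd_m_per_px):
    # Physical length of an object spanning length_px pixels in the image.
    return length_px * gsd_m_per_px

# Hypothetical setup: 13.2 mm sensor width, 8.8 mm focal length,
# 30 m flight altitude, 5472 px image width.
gsd = ground_sampling_distance(13.2, 8.8, 30.0, 5472)  # ~0.008 m/px
cabbage_length = object_length_m(40, gsd)              # a 40 px cabbage span
```

Note that if SRGAN upscales the imagery by a factor r before segmentation, the effective GSD of the enhanced image is the original GSD divided by r; the pixel counts and the metres-per-pixel factor must refer to the same image resolution.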
Affiliation(s)
- Jun Zhang
- State Key Laboratory of North China Crop Improvement and Regulation, Hebei Agricultural University, 071000 Baoding, China
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, 071000 Baoding, China
- Xinxin Wang
- State Key Laboratory of North China Crop Improvement and Regulation, Hebei Agricultural University, 071000 Baoding, China
- Mountain Area Research Institute, Hebei Agricultural University, 071001 Baoding, China
- Jingyan Liu
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, 071000 Baoding, China
- Dongfang Zhang
- State Key Laboratory of North China Crop Improvement and Regulation, Hebei Agricultural University, 071000 Baoding, China
- College of Horticulture, Hebei Agricultural University, 071000 Baoding, China
- Yin Lu
- College of Horticulture, Hebei Agricultural University, 071000 Baoding, China
- Yuhong Zhou
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, 071000 Baoding, China
- Lei Sun
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, 071000 Baoding, China
- Shenglin Hou
- Hebei Academy of Agriculture and Forestry Sciences, 050000 Shijiazhuang, China
- Xiaofei Fan
- State Key Laboratory of North China Crop Improvement and Regulation, Hebei Agricultural University, 071000 Baoding, China
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, 071000 Baoding, China
- Shuxing Shen
- State Key Laboratory of North China Crop Improvement and Regulation, Hebei Agricultural University, 071000 Baoding, China
- College of Horticulture, Hebei Agricultural University, 071000 Baoding, China
- Jianjun Zhao
- State Key Laboratory of North China Crop Improvement and Regulation, Hebei Agricultural University, 071000 Baoding, China
- College of Horticulture, Hebei Agricultural University, 071000 Baoding, China
7
Prospects of Structural Similarity Index for Medical Image Analysis. Appl Sci (Basel) 2022. [DOI: 10.3390/app12083754]
Abstract
An image quality metric provides a principled way to objectively assess an image based on the differences between the original and distorted versions. Over the past two decades, a universal image quality assessment adapted to human visual perception has been developed for measuring the deviation of a degraded image from a reference image: the structural similarity index. Structural similarity has since been widely used in various sectors, including medical image evaluation. Although numerous studies have reported the use of structural similarity as an evaluation strategy for computer-based medical images, reviews of its prospects for medical imaging applications have been rare. This paper presents previous studies implementing structural similarity in analyzing medical images from various imaging modalities. In addition, this review describes the structural similarity family's historical background, the progress from the original to recent variants of structural similarity, and their strengths and drawbacks. Potential research directions for applying such similarity measures to medical image analysis are also described. This review will be beneficial in guiding researchers toward medical image examination methods that can be improved through the structural similarity index.
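For reference, the original structural similarity index (Wang et al., 2004) combines luminance, contrast, and structure comparisons into a single closed form, computed over local windows of the reference image $x$ and distorted image $y$:

```latex
\mathrm{SSIM}(x, y) =
  \frac{(2\mu_x \mu_y + C_1)\,(2\sigma_{xy} + C_2)}
       {(\mu_x^2 + \mu_y^2 + C_1)\,(\sigma_x^2 + \sigma_y^2 + C_2)},
\qquad C_1 = (K_1 L)^2,\quad C_2 = (K_2 L)^2
```

Here $\mu_x, \mu_y$ are local means, $\sigma_x^2, \sigma_y^2$ local variances, $\sigma_{xy}$ the local covariance, $L$ the dynamic range of the pixel values, and $K_1 = 0.01$, $K_2 = 0.03$ by default; the per-window values are averaged into a single score in $[-1, 1]$, with 1 meaning identical images.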