1
Song K, Zhu W, Zhang Z, Liu B, Zhang M, Tang T, Liang J, Wu W. Synthetic lumbar MRI can aid in diagnosis and treatment strategies based on self-pix networks. Sci Rep 2024; 14:20382. [PMID: 39223186] [PMCID: PMC11368963] [DOI: 10.1038/s41598-024-71288-4]
Abstract
CT and MRI are commonly used to diagnose lumbar fractures (LF), but both have notable limitations in practice. The aims of this study were to develop a spinal disease-specific neural network and to evaluate whether synthetic MRI of LF affected clinical diagnosis and treatment strategies. A total of 675 LF patients who met the inclusion and exclusion criteria were included. For each participant, paired mid-sagittal CT and T2-weighted MR images were selected, yielding 1350 pairs of LF images. A new network, Self-pix, built on Pix2pix with self-attention, was constructed. The 1350 CT-MR image pairs were randomly divided into a training group (1147 pairs) and a test group (203 pairs) and fed into Pix2pix and Self-pix. The quantitative evaluation used PSNR and SSIM (PSNR1 and SSIM1: real MR images versus Pix2pix-generated MR images; PSNR2 and SSIM2: real MR images versus Self-pix-generated MR images). The qualitative evaluation, covering accurate diagnosis of acute fractures and accurate selection of treatment strategies based on Self-pix-generated MRI, was performed by three spine surgeons. In the LF group, PSNR1 and PSNR2 were 10.884 and 11.021 (p < 0.001), and SSIM1 and SSIM2 were 0.766 and 0.771 (p < 0.001), respectively. In the ROI group, PSNR1 and PSNR2 were 12.350 and 12.670 (p = 0.004), and SSIM1 and SSIM2 were 0.816 and 0.832 (p = 0.005), respectively. In the qualitative evaluation, Self-pix-generated MRI showed no significant difference from real MRI in identifying acute fractures (p = 0.689), with a sensitivity of 84.36% and a specificity of 96.65%. No difference in treatment strategy was found between the Self-pix-generated MRI group and the real MRI group (p = 0.135). In this study, a disease-specific GAN named Self-pix was developed that demonstrated better image-generation performance than a traditional GAN. Spine surgeons could accurately diagnose LF and select treatment strategies based on Self-pix-generated T2 MR images.
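As a rough illustration of the quantitative metrics reported above, PSNR and SSIM between a real and a synthetic image can be computed as follows. This is a minimal sketch, not the paper's evaluation code; the SSIM here is a single-window simplification using global image statistics rather than the usual sliding window, and the function names are illustrative assumptions.

```python
import numpy as np

def psnr(real, fake, data_range=1.0):
    """Peak signal-to-noise ratio in dB (higher means closer images)."""
    mse = np.mean((real.astype(np.float64) - fake.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(real, fake, data_range=1.0):
    """SSIM computed from global image statistics (single-window simplification)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = real.mean(), fake.mean()
    cov = ((real - mu_x) * (fake - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (real.var() + fake.var() + c2)
    return num / den

# Toy example: a uniform offset of 0.5 on a [0, 1] scale gives PSNR ≈ 6.02 dB.
real = np.zeros((8, 8))
fake = np.full((8, 8), 0.5)
print(psnr(real, fake))
```

In practice a windowed SSIM (e.g. `skimage.metrics.structural_similarity`) is preferred; the global form above only illustrates the luminance, contrast, and structure terms.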
Affiliation(s)
- Ke Song
- The First College of Clinical Medical Science, China Three Gorges University, Yichang, 443000, China
- Yichang Central People's Hospital, Yichang, 443000, China
- Wendong Zhu
- College of Computer and Information Technology, China Three Gorges University, Yichang, 430002, China
- Zhenxi Zhang
- School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, 518107, China
- Bin Liu
- Wendeng Orthopaedic and Traumatologic Hospital of Shandong Province, Weihai, 264400, China
- Meiling Zhang
- The First College of Clinical Medical Science, China Three Gorges University, Yichang, 443000, China
- Yichang Central People's Hospital, Yichang, 443000, China
- Tinglong Tang
- College of Computer and Information Technology, China Three Gorges University, Yichang, 430002, China
- Jie Liang
- The First College of Clinical Medical Science, China Three Gorges University, Yichang, 443000, China
- Yichang Central People's Hospital, Yichang, 443000, China
- Weifei Wu
- The First College of Clinical Medical Science, China Three Gorges University, Yichang, 443000, China
- Yichang Central People's Hospital, Yichang, 443000, China
2
Yoon JT, Lee KM, Oh JH, Kim HG, Jeong JW. Insights and Considerations in Development and Performance Evaluation of Generative Adversarial Networks (GANs): What Radiologists Need to Know. Diagnostics (Basel) 2024; 14:1756. [PMID: 39202244] [PMCID: PMC11353572] [DOI: 10.3390/diagnostics14161756]
Abstract
The rapid development of deep learning in medical imaging has significantly enhanced the capabilities of artificial intelligence while simultaneously introducing challenges, including the need for vast amounts of training data and the labor-intensive tasks of labeling and segmentation. Generative adversarial networks (GANs) have emerged as a solution, offering synthetic image generation for data augmentation and streamlining medical image processing tasks through models such as cGAN, CycleGAN, and StyleGAN. These innovations not only improve the efficiency of image augmentation, reconstruction, and segmentation, but also pave the way for unsupervised anomaly detection, markedly reducing the reliance on labeled datasets. Our investigation into GANs in medical imaging addresses their varied architectures, the considerations for selecting appropriate GAN models, and the nuances of model training and performance evaluation. This paper aims to provide radiologists who are new to GAN technology with a thorough understanding, guiding them through the practical application and evaluation of GANs in brain imaging with two illustrative examples using CycleGAN and pixel2style2pixel (pSp)-combined StyleGAN. It offers a comprehensive exploration of the transformative potential of GANs in medical imaging research. Ultimately, this paper strives to equip radiologists with the knowledge to effectively utilize GANs, encouraging further research and application within the field.
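To make the CycleGAN idea mentioned above concrete: the cycle-consistency loss penalizes a pair of mappings G: X→Y and F: Y→X whenever translating an image forth and back fails to recover the input. The sketch below is a schematic numpy illustration with toy linear "generators" as stand-ins; it is not the networks used in the cited examples.

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    """L1 cycle loss: lam * (||F(G(x)) - x||_1 + ||G(F(y)) - y||_1)."""
    forward = np.abs(F(G(x)) - x).mean()   # X -> Y -> X reconstruction error
    backward = np.abs(G(F(y)) - y).mean()  # Y -> X -> Y reconstruction error
    return lam * (forward + backward)

# Toy generators: exact inverses of each other, so the cycle loss is zero.
G = lambda x: 2.0 * x + 1.0     # stands in for the "X -> Y" generator
F = lambda y: (y - 1.0) / 2.0   # stands in for the "Y -> X" generator
x = np.linspace(0.0, 1.0, 5)
y = np.linspace(1.0, 3.0, 5)
print(cycle_consistency_loss(G, F, x, y))  # 0.0
```

In a real CycleGAN this term is added to the two adversarial losses, which is what lets the model train on unpaired images: the adversaries enforce realism in each domain while the cycle term ties the two translations together.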
Affiliation(s)
- Jeong Taek Yoon
- Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea
- Kyung Mi Lee
- Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea
- Jang-Hoon Oh
- Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea
- Hyug-Gi Kim
- Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea
- Ji Won Jeong
- Department of Medicine, Graduate School, Kyung Hee University, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea
3
Chen C, Chen Y, Li X, Ning H, Xiao R. Linear semantic transformation for semi-supervised medical image segmentation. Comput Biol Med 2024; 173:108331. [PMID: 38522252] [DOI: 10.1016/j.compbiomed.2024.108331]
Abstract
Medical image segmentation is a research focus and a foundation for developing intelligent medical systems. Recently, deep learning has become a standard approach to medical image segmentation and has succeeded significantly, promoting progress in disease diagnosis, reconstruction, and surgical planning. However, semantic learning is often inefficient owing to the lack of supervision of feature maps, so high-quality segmentation models still rely on numerous, accurate data annotations. Learning robust semantic representations in latent spaces remains a challenge. In this paper, we propose a novel semi-supervised learning framework that learns vital attributes in medical images, constructing generalized representations from diverse semantics to realize medical image segmentation. We first build a self-supervised learning component that achieves context recovery by reconstructing the spatial layout and intensity of medical images, producing semantic representations in the feature maps. We then combine the semantic-rich feature maps and apply a simple linear semantic transformation to convert them into an image segmentation. The proposed framework was tested on five medical segmentation datasets. Quantitative assessments show the highest scores for our method on the IXI (73.78%), ScaF (47.50%), COVID-19-Seg (50.72%), PC-Seg (65.06%), and Brain-MR (72.63%) datasets. Finally, we compared our method with the latest semi-supervised learning methods and obtained DSC values of 77.15% and 75.22%, ranking first on two representative datasets. The experimental results not only show that the proposed linear semantic transformation is effective for medical image segmentation but also demonstrate its simplicity and ease of use for pursuing robust segmentation in semi-supervised learning. Our code is available at: https://github.com/QingYunA/Linear-Semantic-Transformation-for-Semi-Supervised-Medical-Image-Segmentation.
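The DSC values reported above are Dice similarity coefficients between a predicted mask and the ground truth. A minimal sketch for binary masks follows; the function name and the smoothing term `eps` are illustrative choices, not taken from the paper's repository.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks, in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Half-overlapping masks: |A ∩ B| = 1 and |A| + |B| = 4, so DSC = 2·1/4 = 0.5.
print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))
```

DSC weights the overlap twice relative to the mask sizes, which is why it is preferred over plain pixel accuracy for small structures where background pixels dominate.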
Affiliation(s)
- Cheng Chen
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Yunqing Chen
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Xiaoheng Li
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Huansheng Ning
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Ruoxiu Xiao
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China; Shunde Innovation School, University of Science and Technology Beijing, Foshan, 100024, China
4
Sica G, Rea G, Scaglione M. Editorial for the Special Issue "Cardiothoracic Imaging: Recent Techniques and Applications in Diagnostics". Diagnostics (Basel) 2024; 14:461. [PMID: 38472934] [DOI: 10.3390/diagnostics14050461]
Abstract
Technology is making giant strides and is increasingly improving the diagnostic imaging of both frequent and rare acute and chronic diseases [...].
Affiliation(s)
- Giacomo Sica
- Department of Radiology, Azienda Ospedaliera dei Colli, Monaldi Hospital, 80131 Naples, Italy
- Gaetano Rea
- Department of Radiology, Azienda Ospedaliera dei Colli, Monaldi Hospital, 80131 Naples, Italy
- Mariano Scaglione
- Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy
5
Zhang R, Turkbey B. Deep Learning Unveils Hidden Angiography in Noncontrast CT Scans. Radiology 2023; 309:e232784. [PMID: 37962504] [DOI: 10.1148/radiol.232784]
Affiliation(s)
- Ran Zhang
- Department of Radiology, University of Wisconsin-Madison, Madison, Wis
- Baris Turkbey
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr, Room B3B85, Bethesda, MD 20892