1. Yang T, Zhang L, Sun S, Yao X, Wang L, Ge Y. Identifying severe community-acquired pneumonia using radiomics and clinical data: a machine learning approach. Sci Rep 2024; 14:21884. [PMID: 39300101] [DOI: 10.1038/s41598-024-72310-5]
Abstract
Evaluating Community-Acquired Pneumonia (CAP) severity is crucial for determining appropriate treatment. In this study, we established a machine learning model using radiomic and clinical features to rapidly and accurately identify Severe Community-Acquired Pneumonia (SCAP). A total of 174 CAP patients were included, 64 of whom were classified as SCAP. Radiomic features were extracted from chest CT scans and screened to remove irrelevant features. Patients' clinical indicators were similarly screened to form the clinical feature set. Eight common machine learning models were then employed for the SCAP identification task, and interpretability analysis was conducted on the models. Ultimately, we selected 15 radiomic features (such as LeastAxisLength, Maximum2DDiameterColumn, and ZonePercentage) and two clinical features: Lymphocyte (p = 0.041) and Albumin (p = 0.044). Using radiomic features alone as model inputs yielded the highest test-set AUC of 0.85; using the clinical feature set alone, the AUC was 0.82. Combining the two feature sets, AdaBoost achieved the best performance with an AUC of 0.89. Our study demonstrates that combining radiomics and clinical data using machine learning methods can more accurately identify SCAP patients.
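As a sketch of the modeling step this abstract describes, the example below trains a minimal AdaBoost of decision stumps on a synthetic stand-in for the combined radiomic-plus-clinical feature matrix and scores it with a rank-based AUC. The data, feature counts, and hyperparameters here are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's data: 174 patients, 15 radiomic
# features plus 2 clinical features (lymphocyte, albumin).
n, n_rad, n_clin = 174, 15, 2
y = (rng.random(n) < 64 / 174).astype(int)   # roughly 64 SCAP cases
X = rng.normal(size=(n, n_rad + n_clin))
X[y == 1] += 0.8                             # separate the classes a little

def stump_fit(X, y, w):
    """Best single-feature threshold classifier under sample weights w."""
    best = (0, 0.0, 1, np.inf)               # (feature, threshold, sign, error)
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            for sign in (1, -1):
                pred = np.where(sign * (X[:, j] - thr) > 0, 1, 0)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, sign, err)
    return best

def adaboost_fit(X, y, rounds=20):
    """Classic AdaBoost: reweight samples toward the stumps' mistakes."""
    w = np.full(len(y), 1 / len(y))
    model = []
    for _ in range(rounds):
        j, thr, sign, err = stump_fit(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(sign * (X[:, j] - thr) > 0, 1, 0)
        w *= np.exp(np.where(pred == y, -alpha, alpha))
        w /= w.sum()
        model.append((j, thr, sign, alpha))
    return model

def adaboost_score(model, X):
    """Weighted stump vote squashed to [0, 1], usable as a ranking score."""
    s = sum(alpha * np.where(sign * (X[:, j] - thr) > 0, 1, -1)
            for j, thr, sign, alpha in model)
    return 1 / (1 + np.exp(-s))

def auc(y, score):
    """Rank-based AUC: probability a positive outranks a negative."""
    pos, neg = score[y == 1], score[y == 0]
    return (pos[:, None] > neg[None, :]).mean()

model = adaboost_fit(X, y)
print(round(auc(y, adaboost_score(model, X)), 3))
```

In practice the study's feature screening (dropping irrelevant radiomic features, keeping significant clinical ones) would happen before this fitting step.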
Affiliation(s)
- Tianning Yang: College of Science, North China University of Science and Technology, Tangshan, Hebei, China
- Ling Zhang: Department of Respiratory Medicine, North China University of Science and Technology, Affiliated Hospital, Tangshan, Hebei, China
- Siyi Sun: Department of Respiratory Medicine, North China University of Science and Technology, Affiliated Hospital, Tangshan, Hebei, China
- Xuexin Yao: Department of Respiratory Medicine, North China University of Science and Technology, Affiliated Hospital, Tangshan, Hebei, China
- Lichuan Wang: College of Science, North China University of Science and Technology, Tangshan, Hebei, China
- Yanlei Ge: Department of Respiratory Medicine, North China University of Science and Technology, Affiliated Hospital, Tangshan, Hebei, China
2. Xu C, Guo X, Yang G, Cui Y, Su L, Dong H, Hu X, Che S. Prior-guided attention fusion transformer for multi-lesion segmentation of diabetic retinopathy. Sci Rep 2024; 14:20892. [PMID: 39245695] [PMCID: PMC11381548] [DOI: 10.1038/s41598-024-71650-6]
Abstract
To improve the diagnostic accuracy of diabetic retinopathy (DR) and reduce the workload of ophthalmologists, we propose a prior-guided attention fusion Transformer for multi-lesion segmentation of DR. An attention fusion module improves the key generator by integrating self-attention and cross-attention while reducing the introduction of noise. Self-attention focuses on the lesions themselves, capturing their correlations at a global scale, while cross-attention, using pre-trained vessel masks as prior knowledge, exploits the correlation between lesions and vessels to reduce the ambiguity of lesion detection caused by complex fundus structures. A shift block further expands the association areas between lesions and vessels and enhances the model's sensitivity to small-scale structures. To dynamically adjust the model's perception of features at different scales, we propose scale-adaptive attention, which adaptively learns fusion weights for feature maps at different scales in the decoder, capturing features and details more effectively. Experimental results on two public datasets (DDR and IDRiD) demonstrate that our model outperforms other state-of-the-art models for multi-lesion segmentation.
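The fusion of self- and cross-attention described here can be sketched as follows. The token shapes, the random "vessel prior" features, and the fixed scalar gate are all illustrative assumptions standing in for the paper's learned attention fusion module.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention."""
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

# Toy token sequences: N lesion-feature tokens and M tokens derived
# from a pre-trained vessel mask (the prior knowledge).
N, M, d = 6, 8, 16
lesion = rng.normal(size=(N, d))
vessel = rng.normal(size=(M, d))

# Self-attention: lesion tokens attend to each other (global lesion context).
self_out = attention(lesion, lesion, lesion)

# Cross-attention: lesion tokens attend to vessel-prior tokens, so the
# lesion-vessel correlation can disambiguate complex fundus structures.
cross_out = attention(lesion, vessel, vessel)

# Fusion: the paper learns this mixing inside the attention fusion module;
# a fixed scalar gate stands in for that learned fusion here.
gate = 0.5
fused = gate * self_out + (1 - gate) * cross_out
print(fused.shape)
```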
Affiliation(s)
- Chenfangqian Xu: Key Laboratory of Symbol Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China; College of Computer Science and Technology, Jilin University, Changchun, 130012, China
- Xiaoxin Guo: Key Laboratory of Symbol Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China; College of Computer Science and Technology, Jilin University, Changchun, 130012, China
- Guangqi Yang: Key Laboratory of Symbol Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China; College of Computer Science and Technology, Jilin University, Changchun, 130012, China
- Yihao Cui: College of Software, Jilin University, Changchun, 130012, China
- Longchen Su: Key Laboratory of Symbol Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China; College of Computer Science and Technology, Jilin University, Changchun, 130012, China
- Hongliang Dong: College of Computer Science and Technology, Jilin University, Changchun, 130012, China
- Xiaoying Hu: Ophthalmology Department, Bethune First Hospital of Jilin University, Changchun, 130021, China
- Songtian Che: Ophthalmology Department, Bethune Second Hospital of Jilin University, Changchun, 130041, China
3. Liu Z, Lv Q, Yang Z, Li Y, Lee CH, Shen L. Recent progress in transformer-based medical image analysis. Comput Biol Med 2023; 164:107268. [PMID: 37494821] [DOI: 10.1016/j.compbiomed.2023.107268]
Abstract
The transformer is primarily used in the field of natural language processing. Recently, it has been adopted in the computer vision (CV) field, where it shows promise. Medical image analysis (MIA), as a critical branch of CV, also greatly benefits from this state-of-the-art technique. In this review, we first recap the core component of the transformer, the attention mechanism, and the detailed structures of the transformer. We then depict the recent progress of the transformer in MIA, organizing applications by task: classification, segmentation, captioning, registration, detection, enhancement, localization, and synthesis. The mainstream classification and segmentation tasks are further divided into eleven medical image modalities. The large number of experiments surveyed in this review illustrates that transformer-based methods outperform existing methods across multiple evaluation metrics. Finally, we discuss the open challenges and future opportunities in this field. This task-modality review, with the latest content, detailed information, and comprehensive comparison, may greatly benefit the broad MIA community.
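The attention mechanism this review recaps is scaled dot-product attention applied per head. A minimal NumPy sketch, with arbitrary illustrative dimensions and randomly initialized projection weights:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, heads):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, per head,
    then concatenate heads and apply the output projection."""
    n, d = X.shape
    dk = d // heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    outs = []
    for h in range(heads):
        s = slice(h * dk, (h + 1) * dk)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(dk)
        outs.append(softmax(scores) @ V[:, s])
    return np.concatenate(outs, axis=1) @ Wo

n, d, heads = 10, 32, 4          # e.g. 10 image-patch tokens of width 32
X = rng.normal(size=(n, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
out = multi_head_attention(X, Wq, Wk, Wv, Wo, heads)
print(out.shape)  # (10, 32)
```

In a vision transformer, the rows of `X` would be patch embeddings of a medical image rather than word embeddings.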
Affiliation(s)
- Zhaoshan Liu: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
- Qiujie Lv: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China
- Ziduo Yang: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China
- Yifan Li: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
- Chau Hung Lee: Department of Radiology, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore, 308433, Singapore
- Lei Shen: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
4. Yoo SJ, Kim H, Witanto JN, Inui S, Yoon JH, Lee KD, Choi YW, Goo JM, Yoon SH. Generative adversarial network for automatic quantification of Coronavirus disease 2019 pneumonia on chest radiographs. Eur J Radiol 2023; 164:110858. [PMID: 37209462] [DOI: 10.1016/j.ejrad.2023.110858]
Abstract
PURPOSE To develop a generative adversarial network (GAN) to automatically quantify COVID-19 pneumonia on chest radiographs. MATERIALS AND METHODS This retrospective study included 50,000 consecutive non-COVID-19 chest CT scans from 2015-2017 for training. Anteroposterior virtual chest, lung, and pneumonia radiographs were generated from the whole, segmented-lung, and pneumonia pixels of each CT scan. Two GANs were sequentially trained: one to generate lung images from radiographs and one to generate pneumonia images from lung images. GAN-driven pneumonia extent (pneumonia area/lung area) was expressed from 0% to 100%. We examined the correlation of GAN-driven pneumonia extent with the semi-quantitative Brixia X-ray severity score (one dataset, n = 4707) and with quantitative CT-driven pneumonia extent (four datasets, n = 54-375), and analyzed the measurement difference between the GAN and CT extents. Three datasets (n = 243-1481), in which unfavorable outcomes (respiratory failure, intensive care unit admission, and death) occurred in 10%, 38%, and 78% of patients, respectively, were used to examine the predictive power of GAN-driven pneumonia extent. RESULTS GAN-driven radiographic pneumonia extent correlated with the severity score (0.611) and with CT-driven extent (0.640). The 95% limits of agreement between GAN- and CT-driven extents were -27.1% to 17.4%. GAN-driven pneumonia extent provided odds ratios of 1.05-1.18 per percent for unfavorable outcomes in the three datasets, with areas under the receiver operating characteristic curve (AUCs) of 0.614-0.842. When combined with demographic information only, and with both demographic and laboratory information, the prediction models yielded AUCs of 0.643-0.841 and 0.688-0.877, respectively. CONCLUSION The generative adversarial network automatically quantified COVID-19 pneumonia on chest radiographs and identified patients with unfavorable outcomes.
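Once lung and pneumonia masks are available, the GAN-driven extent metric is a simple area ratio. A minimal sketch with hand-made binary masks (the masks here are hypothetical arrays, not actual GAN outputs):

```python
import numpy as np

# Hypothetical binary masks standing in for the GAN-generated lung and
# pneumonia images described in the abstract.
lung = np.zeros((8, 8), dtype=bool)
lung[1:7, 1:7] = True                 # 36 lung pixels
pneumonia = np.zeros((8, 8), dtype=bool)
pneumonia[2:5, 2:5] = True            # 9 pneumonia pixels

def pneumonia_extent(pneumonia_mask, lung_mask):
    """Extent = pneumonia area / lung area, as a percentage in [0, 100].
    Pneumonia pixels are counted only inside the lung field."""
    lung_area = lung_mask.sum()
    if lung_area == 0:
        return 0.0
    return 100.0 * (pneumonia_mask & lung_mask).sum() / lung_area

print(pneumonia_extent(pneumonia, lung))  # 25.0
```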
Affiliation(s)
- Seung-Jin Yoo: Department of Radiology, Hanyang University Medical Center, Hanyang University College of Medicine, Seoul, Republic of Korea
- Hyungjin Kim: Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea
- Shohei Inui: Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Department of Radiology, Japan Self-Defense Forces Central Hospital, Tokyo, Japan
- Jeong-Hwa Yoon: Institute of Health Policy and Management, Medical Research Center, Seoul National University, Seoul, South Korea
- Ki-Deok Lee: Division of Infectious Diseases, Department of Internal Medicine, Myongji Hospital, Goyang, Korea
- Yo Won Choi: Department of Radiology, Hanyang University Medical Center, Hanyang University College of Medicine, Seoul, Republic of Korea
- Jin Mo Goo: Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea
- Soon Ho Yoon: Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea; MEDICALIP Co. Ltd., Seoul, Korea
5. Malik H, Anees T, Naeem A, Naqvi RA, Loh WK. Blockchain-Federated and Deep-Learning-Based Ensembling of Capsule Network with Incremental Extreme Learning Machines for Classification of COVID-19 Using CT Scans. Bioengineering (Basel) 2023; 10:203. [PMID: 36829697] [PMCID: PMC9952069] [DOI: 10.3390/bioengineering10020203]
Abstract
Due to the rapid rate of SARS-CoV-2 dissemination, an effective strategy must be employed to identify and isolate COVID-19 cases. One of the most significant obstacles researchers face in detecting COVID-19 is the rapid propagation of the virus combined with the dearth of trustworthy testing models, and this remains the most difficult problem for clinicians. The use of AI in image processing has made the formerly insurmountable challenge of detecting COVID-19 cases more manageable. A further real-world problem is sharing data between hospitals while honoring the privacy concerns of the organizations. When training a global deep learning (DL) model, fundamental concerns such as user privacy and collaborative model development must be handled. In this study, a novel framework is designed that compiles information from five different databases (several hospitals) and trains a global model using blockchain-based federated learning (FL). The data is validated through blockchain technology (BCT), and FL trains the model on a global scale while maintaining the secrecy of the organizations. The proposed framework has three parts. First, we provide a data normalization method that handles the diversity of data collected from the five sources, which use several computed tomography (CT) scanners. Second, to categorize COVID-19 patients, we ensemble the capsule network (CapsNet) with incremental extreme learning machines (IELMs). Third, we provide a strategy for interactively training a global model using BCT and FL while maintaining anonymity. Extensive tests on chest CT scans compared the classification performance of the proposed model against five DL algorithms for predicting COVID-19 while protecting data privacy for a variety of users. Our findings indicate improved effectiveness in identifying COVID-19 patients, with an accuracy of 98.99%. Thus, our model provides substantial aid to medical practitioners in diagnosing COVID-19.
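The global-aggregation step of federated learning can be sketched as plain federated averaging (FedAvg), where each client's parameters are weighted by its share of the training samples. The client sizes and weight vectors below are made up, and the blockchain validation layer this paper adds on top is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Five hospitals (mirroring the paper's five CT databases); the sample
# counts here are invented for illustration.
client_sizes = np.array([120, 90, 200, 60, 150])

# Stand-in for each hospital's locally trained parameters (one flat
# vector per client; a real CapsNet/IELM model would have many tensors).
local_weights = [rng.normal(size=10) for _ in client_sizes]

def fed_avg(weights, sizes):
    """Federated averaging: each client's parameters are weighted by
    its fraction of the total training samples."""
    total = sizes.sum()
    return sum(w * (n / total) for w, n in zip(weights, sizes))

global_weights = fed_avg(local_weights, client_sizes)
print(global_weights.shape)
```

In the paper's framework, each round of this aggregation would additionally be recorded and validated on the blockchain so no single party controls the global model.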
Affiliation(s)
- Hassaan Malik: Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Tayyaba Anees: Department of Software Engineering, University of Management and Technology, Lahore 54000, Pakistan
- Ahmad Naeem: Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Rizwan Ali Naqvi: Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea
- Woong-Kee Loh: School of Computing, Gachon University, Seongnam 13120, Republic of Korea
6. Pneumonia Detection on Chest X-ray Images Using Ensemble of Deep Convolutional Neural Networks. Appl Sci (Basel) 2022. [DOI: 10.3390/app12136448]
Abstract
Pneumonia is a life-threatening lung infection resulting from several different viral infections. Identifying and treating pneumonia on chest X-ray images can be difficult due to its similarity to other pulmonary diseases, so existing methods for predicting pneumonia struggle to attain high accuracy. This paper presents a computer-aided classification of pneumonia, coined Ensemble Learning (EL), to simplify the diagnosis process on chest X-ray images. Our proposal builds on pretrained models, which have recently been employed to enhance the performance of many medical tasks instead of training models from scratch. We propose to use three well-known models (DenseNet169, MobileNetV2, and Vision Transformer) pretrained on the ImageNet database.
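One common way to ensemble such backbones is soft voting over their predicted probabilities; the abstract does not give the actual fusion rule, so the rule and the per-model probabilities below are purely illustrative.

```python
import numpy as np

# Hypothetical pneumonia probabilities for 4 chest X-rays from three
# backbones (DenseNet169, MobileNetV2, Vision Transformer); real model
# outputs would come from inference on the images.
p_densenet = np.array([0.9, 0.2, 0.6, 0.40])
p_mobilenet = np.array([0.8, 0.3, 0.7, 0.30])
p_vit = np.array([0.7, 0.1, 0.8, 0.45])

def soft_vote(*probs):
    """Average the member probabilities and threshold at 0.5."""
    mean = np.mean(probs, axis=0)
    return (mean >= 0.5).astype(int), mean

labels, mean = soft_vote(p_densenet, p_mobilenet, p_vit)
print(labels)  # [1 0 1 0]
```

Weighted voting (trusting stronger members more) is a straightforward variant: replace the plain mean with `np.average(probs, axis=0, weights=w)` for validation-derived weights `w`.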