1. Rajaraman S, Liang Z, Xue Z, Antani S. Noise-induced modality-specific pretext learning for pediatric chest X-ray image classification. Front Artif Intell 2024;7:1419638. [PMID: 39301479; PMCID: PMC11410760; DOI: 10.3389/frai.2024.1419638]
Abstract
Introduction: Deep learning (DL) has significantly advanced medical image classification. However, it often relies on transfer learning (TL) from models pretrained on large, generic non-medical image datasets such as ImageNet, whereas medical images possess unique visual characteristics that such general models may not adequately capture. Methods: This study examines the effectiveness of modality-specific pretext learning, strengthened by image denoising and deblurring, in enhancing the classification of pediatric chest X-ray (CXR) images as either exhibiting no findings (i.e., normal lungs) or showing cardiopulmonary disease manifestations. Specifically, we use a VGG-16-Sharp-U-Net architecture and leverage its encoder in conjunction with a classification head to distinguish normal from abnormal pediatric CXR findings. We benchmark this performance against the traditional TL approach, viz., the VGG-16 model pretrained only on ImageNet. Performance is evaluated with balanced accuracy, sensitivity, specificity, F-score, Matthews correlation coefficient (MCC), the Kappa statistic, and Youden's index. Results: Models developed from CXR modality-specific pretext encoders substantially outperform the ImageNet-only pretrained model (Baseline), achieving significantly higher sensitivity (p < 0.05) with marked improvements in balanced accuracy, F-score, MCC, Kappa, and Youden's index. A novel attention-based fuzzy ensemble of the pretext-learned models further improves performance across these metrics (balanced accuracy: 0.6376; sensitivity: 0.4991; F-score: 0.5102; MCC: 0.2783; Kappa: 0.2782; Youden's index: 0.2751), compared with Baseline (balanced accuracy: 0.5654; sensitivity: 0.1983; F-score: 0.2977; MCC: 0.1998; Kappa: 0.1599; Youden's index: 0.1327).
Discussion: The superior results of CXR modality-specific pretext learning, and of the ensemble built on it, underscore their potential as a viable alternative to conventional ImageNet pretraining for medical image classification. These results motivate further exploration of medical modality-specific TL techniques in developing DL models for various medical imaging applications.
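The abstract does not specify how the attention-based fuzzy ensemble combines its member models; as a loose illustration only, the sketch below implements one plausible confidence-weighted soft-voting scheme in NumPy (the function name, the softmax-over-confidence weighting, and the example probabilities are all assumptions, not the paper's method):

```python
import numpy as np

def fuzzy_ensemble(member_probs):
    """Confidence-weighted soft voting over K member models.

    member_probs: (K, C) array of per-model class probabilities.
    Each model's weight is a softmax over its confidence (peak probability),
    a crude stand-in for the learned attention described in the paper.
    """
    member_probs = np.asarray(member_probs, dtype=float)
    confidence = member_probs.max(axis=1)                    # (K,) peak probability per model
    weights = np.exp(confidence) / np.exp(confidence).sum()  # attention-style softmax
    fused = weights @ member_probs                           # (C,) weighted average
    return fused / fused.sum()                               # renormalize to a distribution

# Three hypothetical models scoring one CXR as [normal, abnormal]
print(fuzzy_ensemble([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]))
```

In this toy scheme more confident members simply get larger fixed weights; the ensemble in the paper is trained, not hand-weighted.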
Affiliation(s)
- Sivaramakrishnan Rajaraman
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States
- Zhaohui Liang
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States
- Zhiyun Xue
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States
- Sameer Antani
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States
2. Dai T, Zhang R, Hong F, Yao J, Zhang Y, Wang Y. UniChest: Conquer-and-Divide Pre-Training for Multi-Source Chest X-Ray Classification. IEEE Trans Med Imaging 2024;43:2901-2912. [PMID: 38526891; DOI: 10.1109/tmi.2024.3381123]
Abstract
Vision-Language Pre-training (VLP), which uses multi-modal information to improve training efficiency and effectiveness, has achieved great success in visual recognition of natural domains and shown promise in medical imaging diagnosis for chest X-rays (CXRs). However, current work mainly explores a single CXR dataset at a time, which limits the potential of this powerful paradigm on larger hybrids of multi-source CXR datasets. We identify that although blending samples from diverse sources improves model generalization, it remains challenging to maintain consistent superiority on each source's task because of the heterogeneity among sources. To handle this dilemma, we design a Conquer-and-Divide pre-training framework, termed UniChest, which aims to exploit the collaborative benefit of multiple CXR sources while reducing the negative influence of source heterogeneity. Specifically, the "Conquer" stage in UniChest encourages the model to sufficiently capture multi-source common patterns, while the "Divide" stage squeezes source-specific patterns into separate small experts (query networks). We conduct thorough experiments on many benchmarks, e.g., ChestX-ray14, CheXpert, VinDr-CXR, Shenzhen, Open-I, and SIIM-ACR Pneumothorax, verifying the effectiveness of UniChest over a range of baselines, and release our code and pre-trained models at https://github.com/Elfenreigen/UniChest.
3. Cho K, Kim KD, Jeong J, Nam Y, Kim J, Choi C, Lee S, Hong GS, Seo JB, Kim N. Approximating Intermediate Feature Maps of Self-Supervised Convolution Neural Network to Learn Hard Positive Representations in Chest Radiography. J Imaging Inform Med 2024;37:1375-1385. [PMID: 38381382; PMCID: PMC11300846; DOI: 10.1007/s10278-024-01032-x]
Abstract
Recent advances in contrastive learning have significantly improved the performance of deep learning models. In contrastive learning of medical images, handling positive representations can be difficult: because standardized CXRs differ from one another only subtly, strong augmentation techniques can disrupt contrastive learning, so additional care is required. In this study, we propose the intermediate feature approximation (IFA) loss, which improves the performance of contrastive convolutional neural networks by focusing more on positive representations of CXRs without additional augmentations. The IFA loss encourages the feature maps of a query image and its positive pair to resemble each other by maximizing the cosine similarity between the intermediate feature outputs of the original data and the positive pairs. We therefore combine the InfoNCE loss, a commonly used loss that addresses negative representations, with the IFA loss, which addresses positive representations, to improve the contrastive network. We evaluated the network on various downstream tasks, including classification, object detection, and a generative adversarial network (GAN) inversion task. The downstream results demonstrate that the IFA loss helps effectively overcome data imbalance and data scarcity; furthermore, it can serve as a perceptual-loss encoder for GAN inversion. In addition, we have made our model publicly available to facilitate access and encourage further research and collaboration in the field.
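The abstract describes the IFA loss as maximizing cosine similarity between intermediate feature maps of a query and its positive pair, used alongside InfoNCE. A minimal NumPy sketch of that idea follows (the shapes, the `1 - cosine` formulation, and the toy data are assumptions; the paper's actual loss operates on CNN feature maps during training):

```python
import numpy as np

def ifa_loss(feat_q, feat_pos):
    """Intermediate feature approximation: 1 - cosine similarity between
    flattened intermediate feature maps (lower = more similar)."""
    q = np.ravel(feat_q).astype(float)
    p = np.ravel(feat_pos).astype(float)
    return 1.0 - q @ p / (np.linalg.norm(q) * np.linalg.norm(p))

def _unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def info_nce(anchor, positive, negatives, tau=0.1):
    """Standard InfoNCE on L2-normalized embeddings (handles negatives)."""
    a, p = _unit(anchor), _unit(positive)
    logits = np.array([a @ p] + [a @ _unit(n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
feat = rng.random((4, 8, 8))                # toy intermediate feature map
pos = feat + 0.01 * rng.random((4, 8, 8))   # nearly identical positive pair
neg = [rng.random(256)]                     # one raw negative embedding
total = ifa_loss(feat, pos) + info_nce(feat.ravel(), pos.ravel(), neg)
```

The combined objective simply sums the positive-pair (IFA) and negative-aware (InfoNCE) terms, mirroring how the abstract says the two losses are used together.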
Affiliation(s)
- Kyungjin Cho
- Department of Bioengineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, 88 Olympic-Ro 43-Gil Songpa-Gu, Seoul, 05505, South Korea
- Ki Duk Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-Ro 43-Gil Songpa-Gu, Seoul, 05505, South Korea
- Jiheon Jeong
- Department of Bioengineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, 88 Olympic-Ro 43-Gil Songpa-Gu, Seoul, 05505, South Korea
- Yujin Nam
- Department of Bioengineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, 88 Olympic-Ro 43-Gil Songpa-Gu, Seoul, 05505, South Korea
- Jeeyoung Kim
- Department of Bioengineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, 88 Olympic-Ro 43-Gil Songpa-Gu, Seoul, 05505, South Korea
- Changyong Choi
- Department of Bioengineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, 88 Olympic-Ro 43-Gil Songpa-Gu, Seoul, 05505, South Korea
- Soyoung Lee
- Department of Bioengineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, 88 Olympic-Ro 43-Gil Songpa-Gu, Seoul, 05505, South Korea
- Gil-Sun Hong
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-Ro 43-Gil Songpa-Gu, Seoul, 05505, South Korea
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
4. Siddiqi R, Javaid S. Deep Learning for Pneumonia Detection in Chest X-ray Images: A Comprehensive Survey. J Imaging 2024;10:176. [PMID: 39194965; DOI: 10.3390/jimaging10080176]
Abstract
This paper identifies the relevant background and contextual literature on deep learning (DL) as an evolving technology and provides a comprehensive analysis of its application to pneumonia detection via chest X-ray (CXR) imaging, the most common and cost-effective imaging technique available worldwide for pneumonia diagnosis. It focuses on the key period associated with COVID-19, 2020-2023, to explain, analyze, and systematically evaluate the limitations of existing approaches and determine their relative effectiveness. The context in which DL is applied, both as an aid to and as an automated substitute for expert radiography professionals, who are often in limited supply, is elaborated in detail. The rationale for the undertaken research is provided, along with a justification of the resources adopted and their relevance. This explanatory text and the subsequent analyses are intended to describe, at levels of detail ranging from the specific to the general, the problem being addressed, existing solutions, and their limitations. Our analysis and evaluation agree with the generally held view that transformers, specifically vision transformers (ViTs), are the most promising technique for further progress in pneumonia detection from CXR images. However, ViTs require extensive further research to address several limitations: biased CXR datasets, data and code availability, model explainability, systematic methods for accurate model comparison, class imbalance in CXR datasets, and the possibility of adversarial attacks, the last of which remains an area of fundamental research.
Affiliation(s)
- Raheel Siddiqi
- Computer Science Department, Karachi Campus, Bahria University, Karachi 73500, Pakistan
- Sameena Javaid
- Computer Science Department, Karachi Campus, Bahria University, Karachi 73500, Pakistan
5. Bennour A, Ben Aoun N, Khalaf OI, Ghabban F, Wong WK, Algburi S. Contribution to pulmonary diseases diagnostic from X-ray images using innovative deep learning models. Heliyon 2024;10:e30308. [PMID: 38707425; PMCID: PMC11068804; DOI: 10.1016/j.heliyon.2024.e30308]
Abstract
Pulmonary disease identification and characterization are among the most intriguing research topics of recent years, since they require an accurate and prompt diagnosis. Although pulmonary radiography has helped in lung disease diagnosis, interpreting the radiographic image has always been a major concern for doctors and radiologists seeking to reduce diagnostic errors. Owing to their success in image classification and segmentation tasks, cutting-edge artificial intelligence techniques such as machine learning (ML) and deep learning (DL) are widely encouraged for diagnosing and identifying lung disorders from medical images, particularly radiographic ones. To this end, researchers are racing to build systems based on these techniques, deep learning in particular. In this paper, we propose three deep learning models trained to identify the presence of certain lung diseases from thoracic radiographs. The first model, "CovCXR-Net", identifies COVID-19 (two classes: COVID-19 or normal). The second, "MDCXR3-Net", identifies COVID-19 and pneumonia (three classes: COVID-19, pneumonia, or normal), and the last, "MDCXR4-Net", identifies COVID-19, pneumonia, and pulmonary opacity (four classes: COVID-19, pneumonia, pulmonary opacity, or normal). These models outperformed state-of-the-art models, reaching accuracies of 99.09%, 97.74%, and 90.37%, respectively, on three benchmarks.
Affiliation(s)
- Akram Bennour
- LAMIS Laboratory, Echahid Cheikh Larbi Tebessi University, Tebessa, Algeria
- Najib Ben Aoun
- College of Computer Science and Information Technology, Al-Baha University, Al Baha, Saudi Arabia
- REGIM-Lab: Research Groups in Intelligent Machines, National School of Engineers of Sfax (ENIS), University of Sfax, Tunisia
- Osamah Ibrahim Khalaf
- Department of Solar, Al-Nahrain Research Center for Renewable Energy, Al-Nahrain University, Jadriya, Baghdad, Iraq
- Fahad Ghabban
- College of Computer Science and Engineering, Taibah University, Medina, Saudi Arabia
- Sameer Algburi
- Al-Kitab University, College of Engineering Techniques, Kirkuk, Iraq
6. Cheng CT, Ooyang CH, Kang SC, Liao CH. Applications of Deep Learning in Trauma Radiology: A Narrative Review. Biomed J 2024:100743. [PMID: 38679199; DOI: 10.1016/j.bj.2024.100743]
Abstract
Diagnostic imaging is essential in modern trauma care for initial evaluation and identifying injuries requiring intervention. Deep learning (DL) has become mainstream in medical image analysis and has shown promising efficacy for classification, segmentation, and lesion detection. This narrative review provides the fundamental concepts for developing DL algorithms in trauma imaging and presents an overview of current progress in each modality. DL has been applied to detect free fluid on Focused Assessment with Sonography for Trauma (FAST), traumatic findings on chest and pelvic X-rays, and computed tomography (CT) scans, identify intracranial hemorrhage on head CT, detect vertebral fractures, and identify injuries to organs like the spleen, liver, and lungs on abdominal and chest CT. Future directions involve expanding dataset size and diversity through federated learning, enhancing model explainability and transparency to build clinician trust, and integrating multimodal data to provide more meaningful insights into traumatic injuries. Though some commercial artificial intelligence products are Food and Drug Administration-approved for clinical use in the trauma field, adoption remains limited, highlighting the need for multi-disciplinary teams to engineer practical, real-world solutions. Overall, DL shows immense potential to improve the efficiency and accuracy of trauma imaging, but thoughtful development and validation are critical to ensure these technologies positively impact patient care.
Affiliation(s)
- Chi-Tung Cheng
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Chun-Hsiang Ooyang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Shih-Ching Kang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
7. Cheng CT, Kuo LW, Ouyang CH, Hsu CP, Lin WC, Fu CY, Kang SC, Liao CH. Development and evaluation of a deep learning-based model for simultaneous detection and localization of rib and clavicle fractures in trauma patients' chest radiographs. Trauma Surg Acute Care Open 2024;9:e001300. [PMID: 38646620; PMCID: PMC11029226; DOI: 10.1136/tsaco-2023-001300]
Abstract
Purpose: To develop a rib and clavicle fracture detection model for chest radiographs of trauma patients using a deep learning (DL) algorithm. Materials and methods: We retrospectively collected 56,145 chest X-rays (CXRs) from trauma patients at a trauma center between August 2008 and December 2016. A rib/clavicle fracture detection DL algorithm was trained on this dataset, with 991 (1.8%) images labeled by experts with fracture site locations. The algorithm was tested on 300 independently collected CXRs from 2017. An external test set was also collected from hospitalized trauma patients at a regional hospital. The area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, precision, and negative predictive value of the model were evaluated on each test set. The prediction probability on the images was visualized as heatmaps. Results: The trained DL model achieved an AUC of 0.912 (95% CI 87.8 to 94.7) on the independent test set. The accuracy, sensitivity, and specificity at the chosen cut-off value were 83.7, 86.8, and 80.4, respectively. On the external test set, the model had a sensitivity of 88.0 and an accuracy of 72.5. While the model exhibited a slight decrease in accuracy on the external test set, it maintained its sensitivity in detecting fractures. Conclusion: The algorithm detects rib and clavicle fractures concomitantly in trauma patients' CXRs with high accuracy, locating lesions through heatmap visualization.
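For reference, the operating-point metrics reported above follow directly from confusion-matrix counts; the sketch below uses illustrative counts (not the study's data) to show how each is computed:

```python
def operating_point_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics for a detector at a fixed cut-off."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on fracture-positive CXRs
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Illustrative counts only
m = operating_point_metrics(tp=45, fp=10, tn=40, fn=5)
print(m)  # accuracy 0.85, sensitivity 0.9, specificity 0.8
```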
Affiliation(s)
- Chi-Tung Cheng
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Ling-Wei Kuo
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chun-Hsiang Ouyang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chi-Po Hsu
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Wei-Cheng Lin
- Department of Electrical Engineering, Chang Gung University, Taoyuan, Taiwan
- Chih-Yuan Fu
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Shih-Ching Kang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
8. Cho K, Kim J, Kim KD, Park S, Kim J, Yun J, Ahn Y, Oh SY, Lee SM, Seo JB, Kim N. MuSiC-ViT: A multi-task Siamese convolutional vision transformer for differentiating change from no-change in follow-up chest radiographs. Med Image Anal 2023;89:102894. [PMID: 37562256; DOI: 10.1016/j.media.2023.102894]
Abstract
A major responsibility of radiologists in routine clinical practice is to read follow-up chest radiographs (CXRs) to identify changes in a patient's condition. Diagnosing meaningful changes in follow-up CXRs is challenging because radiologists must differentiate disease changes from natural or benign variations. Here, we propose a multi-task Siamese convolutional vision transformer (MuSiC-ViT) with an anatomy-matching module (AMM) that mimics the radiologist's cognitive process for differentiating change from no-change. MuSiC-ViT uses the "CNNs meet vision transformers" model, which combines CNN and transformer architectures, and has three major components: a Siamese network architecture, the AMM, and multi-task learning. Because the input is a pair of CXRs, a Siamese network was adopted for the encoder. The AMM is an attention module that focuses on related regions in the CXR pairs. To mimic a radiologist's cognitive process, MuSiC-ViT was trained with multi-task learning: normal/abnormal classification, change/no-change classification, and anatomy matching. Of the 406K CXRs studied, 88K change pairs and 115K no-change pairs were acquired for the training dataset. The internal validation dataset consisted of 1,620 pairs. To demonstrate the robustness of MuSiC-ViT, we verified the results on two other validation datasets. MuSiC-ViT achieved accuracies and areas under the receiver operating characteristic curve of 0.728 and 0.797 on the internal validation dataset, 0.614 and 0.784 on the first external validation dataset, and 0.745 and 0.858 on a second, temporally separated validation dataset, respectively. All code is available at https://github.com/chokyungjin/MuSiC-ViT.
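To make the three components concrete, here is a tiny NumPy sketch of the Siamese pattern described above: one shared encoder applied to both CXRs of a pair, a change/no-change head on the joint embedding, a per-image normal/abnormal head, and a cosine score standing in for the anatomy-matching module (all shapes, weights, and the cosine stand-in are assumptions; MuSiC-ViT itself is a trained CNN-transformer hybrid):

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((16, 32))     # toy shared (Siamese) encoder weights
W_change = rng.standard_normal((64, 2))   # change/no-change head on the pair
W_abn = rng.standard_normal((32, 2))      # normal/abnormal head per image

def encode(x):
    """Shared encoder: identical weights are applied to both CXRs."""
    return np.tanh(x @ W_enc)

def forward(cxr_prev, cxr_curr):
    z1, z2 = encode(cxr_prev), encode(cxr_curr)
    # cosine similarity of the paired embeddings, a stand-in for the AMM score
    match = z1 @ z2 / (np.linalg.norm(z1) * np.linalg.norm(z2))
    change_logits = np.concatenate([z1, z2]) @ W_change  # pair-level task
    abn_logits = z2 @ W_abn                              # image-level task
    return change_logits, abn_logits, match

logits, abn, match = forward(rng.standard_normal(16), rng.standard_normal(16))
```

In the real model these heads are trained jointly, with the multi-task loss summing the change, abnormality, and anatomy-matching terms.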
Affiliation(s)
- Kyungjin Cho
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea
- Jeeyoung Kim
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea
- Ki Duk Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Seungju Park
- Department of Biomedical Engineering, College of Health Sciences, Korea University, Seoul, Republic of Korea
- Junsik Kim
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea
- Jihye Yun
- Department of Radiology, Asan Medical Center/University of Ulsan College of Medicine, Seoul, Republic of Korea
- Yura Ahn
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Sang Young Oh
- Department of Radiology, Asan Medical Center/University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Min Lee
- Department of Radiology, University of Ulsan College of Medicine and Asan Medical Center, Seoul, Republic of Korea
- Joon Beom Seo
- Department of Radiology, Asan Medical Center/University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim
- Department of Convergence Medicine, Asan Medical Center/University of Ulsan College of Medicine, Seoul, Republic of Korea
9. Mukherjee P, Hou B, Lanfredi RB, Summers RM. Feasibility of Using the Privacy-preserving Large Language Model Vicuna for Labeling Radiology Reports. Radiology 2023;309:e231147. [PMID: 37815442; PMCID: PMC10623189; DOI: 10.1148/radiol.231147]
Abstract
Background: Large language models (LLMs) such as ChatGPT, though proficient in many text-based tasks, are not suitable for use with radiology reports due to patient privacy constraints. Purpose: To test the feasibility of using an alternative LLM (Vicuna-13B) that can be run locally for labeling radiography reports. Materials and methods: Chest radiography reports from the MIMIC-CXR and National Institutes of Health (NIH) datasets were included in this retrospective study. Reports were examined for 13 findings. Outputs reporting the presence or absence of the 13 findings were generated by Vicuna using a single-step or multistep prompting strategy (prompts 1 and 2, respectively). Agreement between Vicuna outputs and the CheXpert and CheXbert labelers was assessed using Fleiss κ, as was agreement between Vicuna outputs from three runs under a hyperparameter setting that introduced some randomness (temperature, 0.7). The performance of Vicuna and the labelers was assessed in a subset of 100 NIH reports annotated by a radiologist, using the area under the receiver operating characteristic curve (AUC). Results: A total of 3269 reports from the MIMIC-CXR dataset (median patient age, 68 years [IQR, 59-79 years]; 161 male patients) and 25,596 reports from the NIH dataset (median patient age, 47 years [IQR, 32-58 years]; 1557 male patients) were included. Vicuna outputs with prompt 2 showed, on average, moderate to substantial agreement with the labelers on the MIMIC-CXR (median κ, 0.57 [IQR, 0.45-0.66] with CheXpert and 0.64 [IQR, 0.45-0.68] with CheXbert) and NIH (median κ, 0.52 [IQR, 0.41-0.65] with CheXpert and 0.55 [IQR, 0.41-0.74] with CheXbert) datasets.
Conclusion: In this proof-of-concept study, outputs of the LLM Vicuna reporting the presence or absence of 13 findings on chest radiography reports showed moderate to substantial agreement with existing labelers, and Vicuna with prompt 2 performed on par (median AUC, 0.84 [IQR, 0.74-0.93]) with both labelers on nine of 11 findings. © RSNA, 2023. Supplemental material is available for this article. See also the editorial by Cai in this issue.
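Fleiss κ, the agreement statistic used in the study above, is straightforward to compute from a subjects-by-categories count matrix; a compact sketch (the toy rating matrices are illustrative, not the study's data):

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for agreement among a fixed number of raters.

    ratings: (N, C) matrix; ratings[i, c] = number of raters assigning
    subject i to category c (every row sums to the rater count n)."""
    ratings = np.asarray(ratings, dtype=float)
    N = ratings.shape[0]
    n = ratings.sum(axis=1)[0]                   # raters per subject
    p_cat = ratings.sum(axis=0) / (N * n)        # overall category proportions
    P_i = ((ratings ** 2).sum(axis=1) - n) / (n * (n - 1))  # per-subject agreement
    P_bar, P_e = P_i.mean(), (p_cat ** 2).sum()  # observed vs chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Perfect agreement among 3 raters on 4 reports -> kappa = 1
perfect = np.array([[3, 0], [0, 3], [3, 0], [0, 3]])
print(fleiss_kappa(perfect))  # 1.0
```

Values near 1 indicate near-perfect agreement and values near 0 chance-level agreement, which is the scale behind the "moderate to substantial" interpretation in the abstract.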
Affiliation(s)
- Pritam Mukherjee
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bldg 10, Room 1C224D, 10 Center Dr, Bethesda, MD 20892-1182
- Benjamin Hou
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bldg 10, Room 1C224D, 10 Center Dr, Bethesda, MD 20892-1182
- Ricardo B. Lanfredi
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bldg 10, Room 1C224D, 10 Center Dr, Bethesda, MD 20892-1182
- Ronald M. Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bldg 10, Room 1C224D, 10 Center Dr, Bethesda, MD 20892-1182
10. Shi TL, Zhang YF, Yao MX, Li C, Wang HC, Ren C, Bai JS, Cui X, Chen W. Global trends and hot topics in clinical applications of perovskite materials: a bibliometric analysis. Biomater Transl 2023;4:131-141. [PMID: 38283088; PMCID: PMC10817784; DOI: 10.12336/biomatertransl.2023.03.002]
Abstract
In recent years, perovskite has received increasing attention in the medical field, yet bibliometric analysis of this research area has been lacking. This study analyzes the research status and hot topics of perovskite in the medical field from a bibliometric perspective and explores its research directions. We collected 1852 records of medical perovskite research from 1983 to 2022 in the Web of Science (WOS) database and analyzed countries, institutions, journals, cited references, and keywords using CiteSpace, VOSviewer, and Bibliometrix. The number of articles on perovskite research in the medical field has been increasing every year. China and the USA have published the most papers and are the main forces in this research field. The University of London Imperial College of Science, Technology, and Medicine is the most active institution and has contributed the most publications. ACS Applied Materials & Interfaces is the most prolific journal in this field. "Medical electronic devices", "X-rays", and "piezoelectric materials" are the most researched directions of perovskite in the medical field, and "performance", "perovskite", and "solar cells" are the most frequently used keywords. Advanced Materials is the most relevant and academically influential journal for perovskite research. Halide perovskites have been a hot topic in recent years and will be a future research trend. X-rays, electronic medical equipment, and medical stents are the main research directions.
Affiliation(s)
- Tai-Long Shi
- Department of Orthopaedic Surgery, Third Hospital of Hebei Medical University, Shijiazhuang, Hebei Province, China
- Key Laboratory of Biomechanics of Hebei Province, Shijiazhuang, Hebei Province, China
- NHC Key Laboratory of Intelligent Orthopaedic Equipment, Shijiazhuang, Hebei Province, China
- Yi-Fan Zhang
- Department of Orthopaedic Surgery, Third Hospital of Hebei Medical University, Shijiazhuang, Hebei Province, China
- Key Laboratory of Biomechanics of Hebei Province, Shijiazhuang, Hebei Province, China
- NHC Key Laboratory of Intelligent Orthopaedic Equipment, Shijiazhuang, Hebei Province, China
- Meng-Xuan Yao
- Department of Orthopaedic Surgery, Third Hospital of Hebei Medical University, Shijiazhuang, Hebei Province, China
- Key Laboratory of Biomechanics of Hebei Province, Shijiazhuang, Hebei Province, China
- NHC Key Laboratory of Intelligent Orthopaedic Equipment, Shijiazhuang, Hebei Province, China
- Chao Li
- Department of Orthopaedic Surgery, Third Hospital of Hebei Medical University, Shijiazhuang, Hebei Province, China
- Key Laboratory of Biomechanics of Hebei Province, Shijiazhuang, Hebei Province, China
- NHC Key Laboratory of Intelligent Orthopaedic Equipment, Shijiazhuang, Hebei Province, China
- Hai-Cheng Wang
- Department of Orthopaedic Surgery, Third Hospital of Hebei Medical University, Shijiazhuang, Hebei Province, China
- Key Laboratory of Biomechanics of Hebei Province, Shijiazhuang, Hebei Province, China
- NHC Key Laboratory of Intelligent Orthopaedic Equipment, Shijiazhuang, Hebei Province, China
- Chuan Ren
- Department of Orthopaedic Surgery, Third Hospital of Hebei Medical University, Shijiazhuang, Hebei Province, China
- Key Laboratory of Biomechanics of Hebei Province, Shijiazhuang, Hebei Province, China
- NHC Key Laboratory of Intelligent Orthopaedic Equipment, Shijiazhuang, Hebei Province, China
- Jun-Sheng Bai
- Department of Orthopaedic Surgery, Third Hospital of Hebei Medical University, Shijiazhuang, Hebei Province, China
- Key Laboratory of Biomechanics of Hebei Province, Shijiazhuang, Hebei Province, China
- NHC Key Laboratory of Intelligent Orthopaedic Equipment, Shijiazhuang, Hebei Province, China
- Xu Cui
- Center for Human Tissues and Organs Degeneration, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong Province, China
- Wei Chen
- Department of Orthopaedic Surgery, Third Hospital of Hebei Medical University, Shijiazhuang, Hebei Province, China
- Key Laboratory of Biomechanics of Hebei Province, Shijiazhuang, Hebei Province, China
- NHC Key Laboratory of Intelligent Orthopaedic Equipment, Shijiazhuang, Hebei Province, China
Collapse
11
Mustafa Z, Nsour H. Using Computer Vision Techniques to Automatically Detect Abnormalities in Chest X-rays. Diagnostics (Basel) 2023; 13:2979. [PMID: 37761345 PMCID: PMC10530162 DOI: 10.3390/diagnostics13182979]
Abstract
Our research focused on creating an advanced machine-learning algorithm that accurately detects anomalies in chest X-ray images to provide healthcare professionals with a reliable tool for diagnosing various lung conditions. To achieve this, we analysed a large collection of X-ray images and applied sophisticated visual analysis techniques, such as deep learning (DL) algorithms, object recognition, and categorisation models. To create our model, we used a large training dataset of chest X-rays, which provided valuable information for visualising and categorising abnormalities. We also applied various data augmentation methods, such as scaling, rotation, and translation, to increase the diversity of the training images. We adopted the widely used You Only Look Once (YOLO) v8 algorithm, an object recognition paradigm that has demonstrated positive outcomes in computer vision applications, and modified it to classify X-ray images into distinct categories, such as respiratory infections, tuberculosis (TB), and lung nodules. The model was particularly effective in identifying subtle but clinically important findings that might otherwise be difficult to detect using traditional diagnostic methods. Our findings demonstrate that healthcare practitioners can reliably use machine learning (ML) algorithms to diagnose various lung disorders with greater accuracy and efficiency.
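The geometric augmentations named in the abstract (flips, rotations, translation, scaling) can be sketched in a few lines of plain NumPy. This is an illustrative example of the general technique, not the authors' actual pipeline; the `augment` helper is a hypothetical name:

```python
import numpy as np

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Return simple geometric variants of a grayscale image array."""
    variants = [np.fliplr(image)]                          # horizontal flip
    variants += [np.rot90(image, k) for k in (1, 2, 3)]    # 90/180/270-degree rotations
    variants.append(np.roll(image, shift=1, axis=1))       # translation (wrap-around shift)
    variants.append(image.repeat(2, axis=0).repeat(2, axis=1))  # 2x nearest-neighbour upscale
    return variants

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(len(augment(img)))  # 6 variants per input image
```

In practice a detection framework would apply such transforms on the fly during training, with interpolation rather than wrap-around shifts.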
Affiliation(s)
- Zaid Mustafa: Department of Computer Information Systems, Prince Abdullah Bin Ghazi Faculty of Information and Communication Technology, Al-Balqa Applied University, Al-Salt 19117, Jordan
- Heba Nsour: Department of Computer Science, Prince Abdullah Bin Ghazi Faculty of Information and Communication Technology, Al-Balqa Applied University, Al-Salt 19117, Jordan
12
Mabrouk A, Díaz Redondo RP, Abd Elaziz M, Kayed M. Ensemble Federated Learning: An approach for collaborative pneumonia diagnosis. Appl Soft Comput 2023; 144:110500. [DOI: 10.1016/j.asoc.2023.110500]
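The aggregation step at the heart of most federated learning setups is federated averaging (FedAvg): each client trains locally and a server combines the weights, weighted by local dataset size. The NumPy sketch below illustrates that generic mechanism only; the `fed_avg` function and toy two-client weights are assumptions for illustration, not the ensemble method of the cited paper:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine per-client model weights (lists of
    arrays), weighting each client by its local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two clients, one-layer "model" (a single weight matrix each)
w_a = [np.array([[1.0, 1.0]])]
w_b = [np.array([[3.0, 3.0]])]
merged = fed_avg([w_a, w_b], client_sizes=[100, 300])
print(merged[0])  # [[2.5 2.5]] -- pulled toward the larger client
```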
13
Hong S, Hwang EJ, Kim S, Song J, Lee T, Jo GD, Choi Y, Park CM, Goo JM. Methods of Visualizing the Results of an Artificial-Intelligence-Based Computer-Aided Detection System for Chest Radiographs: Effect on the Diagnostic Performance of Radiologists. Diagnostics (Basel) 2023; 13:1089. [PMID: 36980397 PMCID: PMC10046978 DOI: 10.3390/diagnostics13061089]
Abstract
It is unclear whether the visualization methods used for artificial-intelligence-based computer-aided detection (AI-CAD) of chest radiographs influence the accuracy of readers' interpretations. We aimed to evaluate the accuracy of radiologists' interpretations of chest radiographs using different visualization methods for the same AI-CAD. Initial chest radiographs of patients with acute respiratory symptoms were retrospectively collected. A commercial AI-CAD system was applied using three different visualization methods: (a) a closed-line method, (b) a heat map method, and (c) a combined method. A reader test was conducted with five trainee radiologists over three interpretation sessions. In each session, the chest radiographs were interpreted using AI-CAD with one of the three visualization methods in random order. Examination-level sensitivity and accuracy, and lesion-level detection rates for clinically significant abnormalities, were evaluated for the three visualization methods. The sensitivity (p = 0.007) and accuracy (p = 0.037) of the combined method were significantly higher than those of the closed-line method. Detection rates using the heat map method (p = 0.043) and the combined method (p = 0.004) were significantly higher than those using the closed-line method. The method used to visualize AI-CAD results for chest radiographs thus influenced the performance of radiologists' interpretations: combining the closed-line and heat map methods led to the highest sensitivity and accuracy.
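The difference between the three visualization styles can be conveyed with a small NumPy sketch: alpha-blending a score map onto the image gives a heat map, drawing a rectangle gives a closed line, and doing both approximates the combined style. The function name, box format, and blending weight below are illustrative assumptions, not the commercial AI-CAD's actual rendering:

```python
import numpy as np

def combined_overlay(image, score_map, box, alpha=0.4):
    """Blend a [0,1] probability map onto a grayscale image (heat map)
    and draw a closed-line rectangle around the flagged region."""
    overlay = (1 - alpha) * image + alpha * score_map  # heat-map blend
    y0, x0, y1, x1 = box
    overlay[y0, x0:x1 + 1] = 1.0   # top edge
    overlay[y1, x0:x1 + 1] = 1.0   # bottom edge
    overlay[y0:y1 + 1, x0] = 1.0   # left edge
    overlay[y0:y1 + 1, x1] = 1.0   # right edge
    return overlay

img = np.zeros((8, 8))
heat = np.zeros((8, 8))
heat[2:5, 2:5] = 1.0               # mock AI-CAD score map
out = combined_overlay(img, heat, box=(2, 2, 4, 4))
print(out.shape)  # (8, 8)
```

Dropping the rectangle-drawing lines yields the pure heat map style; skipping the blend and keeping only the rectangle yields the closed-line style.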
Affiliation(s)
- Sungho Hong, Soojin Kim, Jiyoung Song, Taehee Lee, Gyeong Deok Jo, Yelim Choi: Department of Radiology, Seoul National University Hospital, Seoul 03082, Republic of Korea
- Eui Jin Hwang: Department of Radiology, Seoul National University Hospital; Department of Radiology, Seoul National University College of Medicine, Seoul 03082, Republic of Korea; Correspondence: ; Tel.: +82-2-2072-2057
- Chang Min Park, Jin Mo Goo: Department of Radiology, Seoul National University Hospital; Department of Radiology, Seoul National University College of Medicine; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03082, Republic of Korea
14
Shaheed K, Szczuko P, Abbas Q, Hussain A, Albathan M. Computer-Aided Diagnosis of COVID-19 from Chest X-ray Images Using Hybrid-Features and Random Forest Classifier. Healthcare (Basel) 2023; 11:837. [PMID: 36981494 PMCID: PMC10047954 DOI: 10.3390/healthcare11060837]
Abstract
In recent years, considerable attention has been paid to using radiology imaging to automatically detect COVID-19. (1) Background: A number of computer-aided diagnostic schemes now help radiologists and doctors perform COVID-19 diagnostic tests quickly, accurately, and consistently. (2) Methods: Using chest X-ray images, this study proposed a scheme for the automatic recognition of COVID-19 and pneumonia. First, a pre-processing method based on a Gaussian filter and a logarithmic operator is applied to input chest X-ray (CXR) images to improve poor-quality images by enhancing contrast, reducing noise, and smoothing the image. Second, robust features are extracted from each enhanced image using a Convolutional Neural Network (CNN) transformer and an optimal set of grey-level co-occurrence matrices (GLCM), yielding features such as contrast, correlation, entropy, and energy. Finally, based on the extracted features, a random forest machine-learning classifier assigns each image to one of three classes: COVID-19, pneumonia, or normal. The model's predicted output is combined with Gradient-weighted Class Activation Mapping (Grad-CAM) visualisation to support diagnosis. (3) Results: Our work is evaluated on public datasets with three different train-test splits (70-30%, 80-20%, and 90-10%) and achieved an average accuracy, F1 score, recall, and precision of 97%, 96%, 96%, and 96%, respectively. (4) Conclusions: A comparative study using metrics such as accuracy, sensitivity, and F1-measure shows that the proposed method outperforms existing and similar work; it can therefore be used to screen COVID-19-infected patients effectively.
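The GLCM texture features named in the abstract (contrast, entropy, energy) are straightforward to compute. Below is a minimal NumPy sketch for a single pixel offset, with hypothetical function names, roughly mirroring what libraries such as scikit-image's `graycomatrix`/`graycoprops` provide; in a pipeline like the one described, such feature vectors would then feed a random forest classifier (e.g. scikit-learn's `RandomForestClassifier`):

```python
import numpy as np

def glcm(image: np.ndarray, levels: int, dx: int = 1, dy: int = 0) -> np.ndarray:
    """Grey-level co-occurrence matrix for one offset, normalised to a
    joint probability distribution over grey-level pairs."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_features(p: np.ndarray) -> dict:
    """Contrast, energy, and entropy of a normalised GLCM."""
    i, j = np.indices(p.shape)
    nz = p[p > 0]  # avoid log(0) in the entropy term
    return {
        "contrast": float(((i - j) ** 2 * p).sum()),
        "energy": float((p ** 2).sum()),
        "entropy": float(-(nz * np.log2(nz)).sum()),
    }

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)  # tiny 4-level toy "image"
feats = haralick_features(glcm(img, levels=4))
print(sorted(feats))  # ['contrast', 'energy', 'entropy']
```

Real pipelines quantise the 8-bit image to a small number of grey levels first and average features over several offsets and angles.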
Affiliation(s)
- Kashif Shaheed, Piotr Szczuko: Department of Multimedia Systems, Faculty of Electronics, Telecommunication and Informatics, Gdansk University of Technology, 80-233 Gdansk, Poland
- Qaisar Abbas: College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Ayyaz Hussain: Department of Computer Science, Quaid-i-Azam University, Islamabad 44000, Pakistan
- Mubarak Albathan: College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia; Correspondence: ; Tel.: +966-503451575
15
Han Y, Holste G, Ding Y, Tewfik A, Peng Y, Wang Z. Radiomics-Guided Global-Local Transformer for Weakly Supervised Pathology Localization in Chest X-Rays. IEEE Trans Med Imaging 2023; 42:750-761. [PMID: 36288235 PMCID: PMC10081959 DOI: 10.1109/TMI.2022.3217218]
Abstract
Before the recent success of deep learning methods for automated medical image analysis, practitioners used handcrafted radiomic features to quantitatively describe local patches of medical images. However, extracting discriminative radiomic features relies on accurate pathology localization, which is difficult to acquire in real-world settings. Despite advances in disease classification and localization from chest X-rays, many approaches fail to incorporate clinically informed, domain-specific radiomic features. For these reasons, we propose a Radiomics-Guided Transformer (RGT) that fuses global image information with local radiomics-guided auxiliary information to provide accurate cardiopulmonary pathology localization and classification without any bounding box annotations. RGT consists of an image Transformer branch, a radiomics Transformer branch, and fusion layers that aggregate image and radiomics information. Using the learned self-attention of its image branch, RGT extracts a bounding box for which to compute radiomic features, which are further processed by the radiomics branch; learned image and radiomic features are then fused and mutually interact via cross-attention layers. Thus, RGT exploits a novel end-to-end feedback loop that can bootstrap accurate pathology localization using only image-level disease labels. Experiments on the NIH ChestX-ray dataset demonstrate that RGT outperforms prior work in weakly supervised disease localization (by an average margin of 3.6% over various intersection-over-union thresholds) and classification (by 1.1% in average area under the receiver operating characteristic curve). We publicly release our code and pre-trained models at https://github.com/VITAGroup/chext.
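The fusion step described (image and radiomic features interacting via cross-attention) reduces to scaled dot-product attention in which one branch supplies the queries and the other the keys and values. The NumPy sketch below illustrates that generic mechanism, not the released RGT code; the token counts and dimensions are arbitrary:

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query token (e.g. an image
    feature) forms a softmax-weighted mixture of value tokens (e.g.
    radiomic features)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ values

rng = np.random.default_rng(0)
img_tokens = rng.normal(size=(16, 32))  # image-branch tokens
rad_tokens = rng.normal(size=(4, 32))   # radiomics-branch tokens
fused = cross_attention(img_tokens, rad_tokens, rad_tokens)
print(fused.shape)  # (16, 32): each image token mixes in radiomic context
```

Swapping the roles of the two token sets gives the attention in the opposite direction, which is how the two branches can "mutually interact."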