1. Motmaen I, Xie K, Schönbrunn L, Berens J, Grunert K, Plum AM, Raufeisen J, Ferreira A, Hermans A, Egger J, Hölzle F, Truhn D, Puladi B. Insights into Predicting Tooth Extraction from Panoramic Dental Images: Artificial Intelligence vs. Dentists. Clin Oral Investig 2024; 28:381. [PMID: 38886242] [PMCID: PMC11182848] [DOI: 10.1007/s00784-024-05781-5]
Abstract
OBJECTIVES Tooth extraction is one of the most frequently performed medical procedures. The indication is based on a combination of clinical and radiological examination and individual patient parameters, and should be made with great care. However, determining whether a tooth should be extracted is not always a straightforward decision. Moreover, visual and cognitive pitfalls in the analysis of radiographs may lead to incorrect decisions. Artificial intelligence (AI) could be used as a decision support tool to provide a score of tooth extractability. MATERIAL AND METHODS Using 26,956 single-tooth images from 1,184 panoramic radiographs (PANs), we trained a ResNet50 network to classify teeth as either extraction-worthy or preservable. For this purpose, teeth were cropped with different margins from PANs and annotated. The usefulness of the AI-based classification, as well as that of dentists, was evaluated on a test dataset. In addition, the explainability of the best AI model was visualized via class activation mapping using CAMERAS. RESULTS The ROC-AUC of the best AI model for discriminating teeth worthy of preservation was 0.901 with a 2% margin on the tooth images. In contrast, the average ROC-AUC for dentists was only 0.797. With a tooth extraction prevalence of 19.1%, the AI model's PR-AUC was 0.749, whereas the dentist evaluation reached only 0.589. CONCLUSION AI models outperform dentists/specialists in predicting tooth extraction based solely on X-ray images, and AI performance improves with increasing contextual information. CLINICAL RELEVANCE AI could help monitor at-risk teeth and reduce errors in indications for extraction.
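The classification setup described here — an ImageNet-pretrained ResNet50 fine-tuned to separate extraction-worthy from preservable tooth crops — can be sketched briefly in PyTorch. This is a minimal illustration, not the authors' code; the folder layout, input size, and hyperparameters are assumptions.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained ResNet50 to classify cropped tooth images
# as extraction-worthy vs. preservable. Folder layout, input size and hyperparameters are
# illustrative assumptions, not the authors' settings.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: extract / preserve
model = model.to(device)

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Assumed layout: tooth_crops/train/{extract,preserve}/*.png
train_set = datasets.ImageFolder("tooth_crops/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```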
Affiliation(s)
- Ila Motmaen
- Department of Oral and Maxillofacial Surgery, University Hospital Knappschaftskrankenhaus Bochum, 44892, Bochum, Germany
- Kunpeng Xie
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Leon Schönbrunn
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Jeff Berens
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Kim Grunert
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Anna Maria Plum
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Johannes Raufeisen
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- André Ferreira
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Centre Algoritmi / LASI, University of Minho, 4710-057, Braga, Portugal
- Institute for Artificial Intelligence in Medicine, Essen University Hospital, 45147, Essen, Germany
- Alexander Hermans
- Visual Computing Institute, Computer Science and Natural Sciences, RWTH Aachen University, 52074, Aachen, Germany
- Department of Diagnostic and Interventional Radiology, RWTH Aachen University, 52074, Aachen, Germany
- Jan Egger
- Institute for Artificial Intelligence in Medicine, Essen University Hospital, 45147, Essen, Germany
- Frank Hölzle
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, RWTH Aachen University, 52074, Aachen, Germany
- Behrus Puladi
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany.
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany.

2. Alam MS, Wang D, Sowmya A. AMFP-net: Adaptive multi-scale feature pyramid network for diagnosis of pneumoconiosis from chest X-ray images. Artif Intell Med 2024; 154:102917. [PMID: 38917599] [DOI: 10.1016/j.artmed.2024.102917]
Abstract
Early detection of pneumoconiosis by routine health screening of workers in the mining industry is critical for preventing the progression of this incurable disease. Automated pneumoconiosis classification in chest X-ray images is challenging due to the low contrast of opacities, inter-class similarity, intra-class variation and the existence of artifacts. Compared with traditional methods, convolutional neural networks have shown significant improvement in pneumoconiosis classification tasks; however, accurate classification remains challenging, mainly because of the inability to focus on semantically meaningful lesion opacities. Most existing networks focus on high-level abstract information and ignore low-level detailed object information. Unlike natural images, in which an object occupies a large area, the classification of pneumoconiosis depends on the density of small opacities inside the lung. To address this issue, we propose a novel two-stage adaptive multi-scale feature pyramid network called AMFP-Net for the diagnosis of pneumoconiosis from chest X-rays. The proposed model consists of (1) an adaptive multi-scale context block to extract rich contextual and discriminative information and (2) a weighted feature fusion module to effectively combine low-level detailed and high-level global semantic information. This two-stage network first segments the lungs to focus more on relevant regions by excluding irrelevant parts of the image, and then utilises the segmented lungs to classify pneumoconiosis into different categories. Extensive experiments on public and private datasets demonstrate that the proposed approach can outperform state-of-the-art methods for both segmentation and classification.
Affiliation(s)
- Md Shariful Alam
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia.
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia

3. Mosquera C, Ferrer L, Milone DH, Luna D, Ferrante E. Class imbalance on medical image classification: towards better evaluation practices for discrimination and calibration performance. Eur Radiol 2024:10.1007/s00330-024-10834-0. [PMID: 38861161] [DOI: 10.1007/s00330-024-10834-0]
Abstract
PURPOSE This work aims to assess standard evaluation practices used by the research community for evaluating medical imaging classifiers, with a specific focus on the implications of class imbalance. The analysis is performed on chest X-rays as a case study and encompasses a comprehensive model performance definition, considering both discriminative capabilities and model calibration. MATERIALS AND METHODS We conduct a concise literature review to examine prevailing scientific practices used when evaluating X-ray classifiers. Then, we perform a systematic experiment on two major chest X-ray datasets to showcase a didactic example of the behavior of several performance metrics under different class ratios and highlight how widely adopted metrics can conceal performance in the minority class. RESULTS Our literature study confirms that: (1) even when dealing with highly imbalanced datasets, the community tends to use metrics that are dominated by the majority class; and (2) it is still uncommon to include calibration studies for chest X-ray classifiers, despite their importance in the context of healthcare. Moreover, our systematic experiments confirm that current evaluation practices may not reflect model performance in real clinical scenarios and suggest complementary metrics to better reflect the performance of the system in such scenarios. CONCLUSION Our analysis underscores the need for enhanced evaluation practices, particularly in the context of class-imbalanced chest X-ray classifiers. We recommend the inclusion of complementary metrics such as the area under the precision-recall curve (AUC-PR), adjusted AUC-PR, and balanced Brier score, to offer a more accurate depiction of system performance in real clinical scenarios, considering metrics that reflect both discrimination and calibration performance. CLINICAL RELEVANCE STATEMENT This study underscores the critical need for refined evaluation metrics in medical imaging classifiers, emphasizing that prevalent metrics may mask poor performance in minority classes, potentially impacting clinical diagnoses and healthcare outcomes. KEY POINTS Common scientific practices in papers dealing with X-ray computer-assisted diagnosis (CAD) systems may be misleading. We highlight limitations in the reporting of evaluation metrics for X-ray CAD systems in highly imbalanced scenarios. We propose adopting alternative metrics based on experimental evaluation on large-scale datasets.
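The evaluation pitfall described above — ROC-AUC looking comfortable while minority-class performance collapses — can be reproduced with a short scikit-learn sketch on synthetic, heavily imbalanced scores. The per-class Brier average used here is only a plausible stand-in for the balanced Brier score the authors recommend, not their exact definition.

```python
# Hedged sketch: ROC-AUC vs PR-AUC and a simple class-balanced Brier score on a synthetic,
# heavily imbalanced binary problem (~1% prevalence). Illustrative only; not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n_pos, n_neg = 100, 9900
y_true = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])
# A mediocre classifier: positives score only slightly higher than negatives.
y_score = np.concatenate([rng.normal(0.55, 0.15, n_pos),
                          rng.normal(0.45, 0.15, n_neg)]).clip(0, 1)

roc_auc = roc_auc_score(y_true, y_score)            # dominated by the large negative class
pr_auc = average_precision_score(y_true, y_score)   # AUC-PR, sensitive to the minority class

# A plausible "balanced" Brier score: average the Brier score computed separately per class
# (an assumption standing in for the paper's exact definition).
brier_pos = np.mean((y_score[y_true == 1] - 1.0) ** 2)
brier_neg = np.mean((y_score[y_true == 0] - 0.0) ** 2)
balanced_brier = 0.5 * (brier_pos + brier_neg)

print(f"ROC-AUC: {roc_auc:.3f}  PR-AUC: {pr_auc:.3f}  balanced Brier: {balanced_brier:.3f}")
```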
Affiliation(s)
- Candelaria Mosquera
- Hospital Italiano de Buenos Aires, Buenos Aires, Argentina.
- Universidad Tecnológica Nacional, Buenos Aires, Argentina.
- Luciana Ferrer
- Instituto de Ciencias de la Computación, UBA-CONICET, Buenos Aires, Argentina
- Diego H Milone
- Institute for Signals, Systems, and Computational Intelligence, sinc(i) CONICET-UNL, Santa Fe, Argentina
- Daniel Luna
- Hospital Italiano de Buenos Aires, Buenos Aires, Argentina
- Enzo Ferrante
- Institute for Signals, Systems, and Computational Intelligence, sinc(i) CONICET-UNL, Santa Fe, Argentina.

4. Shimada Y, Ojima T, Takaoka Y, Sugano A, Someya Y, Hirabayashi K, Homma T, Kitamura N, Akemoto Y, Tanabe K, Sato F, Yoshimura N, Tsuchiya T. Prediction of visceral pleural invasion of clinical stage I lung adenocarcinoma using thoracoscopic images and deep learning. Surg Today 2024; 54:540-550. [PMID: 37864054] [DOI: 10.1007/s00595-023-02756-z]
Abstract
PURPOSE To develop deep learning models using thoracoscopic images to identify visceral pleural invasion (VPI) in patients with clinical stage I lung adenocarcinoma, and to verify whether these models can be applied clinically. METHODS Two deep learning models, one based on a convolutional neural network (CNN) and the other based on a vision transformer (ViT), were trained on 463 images (VPI negative: 269 images, VPI positive: 194 images) captured from surgical videos of 81 patients. Model performance was validated on an independent test dataset containing 46 images (VPI negative: 28 images, VPI positive: 18 images) from 46 test patients. RESULTS The areas under the receiver operating characteristic curves of the CNN-based and ViT-based models were 0.77 and 0.84 (p = 0.304), respectively. The accuracy, sensitivity, specificity, and positive and negative predictive values were 73.91, 83.33, 67.86, 62.50, and 86.36% for the CNN-based model and 78.26, 77.78, 78.57, 70.00, and 84.62% for the ViT-based model, respectively. These models' diagnostic abilities were comparable to those of board-certified thoracic surgeons and tended to be superior to those of non-board-certified thoracic surgeons. CONCLUSION With further data expansion, these deep learning systems could be utilized in clinical applications.
Affiliation(s)
- Yoshifumi Shimada
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Toshihiro Ojima
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Yutaka Takaoka
- Data Science Center for Medicine and Hospital Management, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Center for Data Science and Artificial Intelligence Research Promotion, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Aki Sugano
- Data Science Center for Medicine and Hospital Management, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Center for Clinical Research, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Yoshiaki Someya
- Center for Data Science and Artificial Intelligence Research Promotion, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Kenichi Hirabayashi
- Department of Diagnostic Pathology, University of Toyama, 2630 Sugitani, Toyama, Japan
- Takahiro Homma
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Naoya Kitamura
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Yushi Akemoto
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Keitaro Tanabe
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Fumitaka Sato
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Naoki Yoshimura
- Department of Cardiovascular Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Tomoshi Tsuchiya
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan.

5. Schalekamp S, van Leeuwen K, Calli E, Murphy K, Rutten M, Geurts B, Peters-Bax L, van Ginneken B, Prokop M. Performance of AI to exclude normal chest radiographs to reduce radiologists' workload. Eur Radiol 2024:10.1007/s00330-024-10794-5. [PMID: 38758252] [DOI: 10.1007/s00330-024-10794-5]
Abstract
INTRODUCTION This study investigates the performance of a commercially available artificial intelligence (AI) system to identify normal chest radiographs and its potential to reduce radiologist workload. METHODS Retrospective analysis included consecutive chest radiographs from two medical centers between Oct 1, 2016 and Oct 14, 2016. Exclusions comprised follow-up exams within the inclusion period, bedside radiographs, incomplete images, imported radiographs, and pediatric radiographs. Three chest radiologists categorized findings into normal, clinically irrelevant, clinically relevant, urgent, and critical. A commercial AI system processed all radiographs, scoring 10 chest abnormalities on a 0-100 confidence scale. AI system performance was evaluated using the area under the ROC curve (AUC), assessing the detection of normal radiographs. Sensitivity was calculated for the default and a conservative operating point. The negative predictive value (NPV) for urgent and critical findings, as well as the potential workload reduction, was also calculated. RESULTS A total of 2603 radiographs were acquired in 2141 unique patients. Post-exclusion, 1670 radiographs were analyzed. Categories included 479 normal, 332 clinically irrelevant, 339 clinically relevant, 501 urgent, and 19 critical findings. The AI system achieved an AUC of 0.92. Sensitivity for normal radiographs was 92% at the default and 53% at the conservative operating point. At the conservative operating point, the NPV for urgent and critical findings was 98%, which could result in a 15% workload reduction. CONCLUSION A commercially available AI system effectively identifies normal chest radiographs and holds the potential to lessen radiologists' workload by omitting half of the normal exams from reporting. CLINICAL RELEVANCE STATEMENT The AI system is able to detect half of all normal chest radiographs at a clinically acceptable operating point, thereby potentially reducing the workload for the radiologists by 15%. KEY POINTS The AI system reached an AUC of 0.92 for the detection of normal chest radiographs. Fifty-three percent of normal chest radiographs were identified with an NPV of 98% for urgent findings. AI can reduce the workload of chest radiography reporting by 15%.
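The operating-point arithmetic reported above (sensitivity for normal exams, NPV for urgent findings, workload reduction) can be illustrated with a short sketch on synthetic scores; the prevalences and threshold below are assumptions, not the study's data or the vendor's software.

```python
# Hedged sketch: how a confidence threshold on an AI "abnormal" score translates into
# detection of normal exams, NPV for urgent findings and workload reduction.
# Synthetic numbers for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 1670
is_normal = rng.random(n) < 479 / 1670             # roughly the study's normal fraction
is_urgent = (~is_normal) & (rng.random(n) < 0.45)  # urgent/critical among abnormal (assumption)
# AI abnormality score in [0, 100]: lower for normal exams.
score = np.where(is_normal, rng.normal(20, 15, n), rng.normal(70, 20, n)).clip(0, 100)

threshold = 10.0                                    # conservative operating point (assumption)
flagged_normal = score < threshold                  # exams the AI would remove from the worklist

sens_normal = flagged_normal[is_normal].mean()      # fraction of truly normal exams caught
npv_urgent = 1.0 - is_urgent[flagged_normal].mean() # flagged exams without urgent/critical findings
workload_reduction = flagged_normal.mean()          # share of all exams omitted from reporting

print(f"Sensitivity for normal exams: {sens_normal:.1%}")
print(f"NPV for urgent findings:      {npv_urgent:.1%}")
print(f"Workload reduction:           {workload_reduction:.1%}")
```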
Affiliation(s)
- Erdi Calli
- Department of Imaging, Radboudumc, Nijmegen, The Netherlands
- Keelin Murphy
- Department of Imaging, Radboudumc, Nijmegen, The Netherlands
- Matthieu Rutten
- Department of Imaging, Radboudumc, Nijmegen, The Netherlands
- Department of Radiology, Jeroen Bosch Ziekenhuis, 's Hertogenbosch, The Netherlands
- Bram Geurts
- Department of Imaging, Radboudumc, Nijmegen, The Netherlands
- Mathias Prokop
- Department of Imaging, Radboudumc, Nijmegen, The Netherlands

6. Yuan Y, Liu L, Yang X, Liu L, Huang Q. Multi-scale Lesion Feature Fusion and Location-Aware for Chest Multi-disease Detection. J Imaging Inform Med 2024:10.1007/s10278-024-01133-7. [PMID: 38760643] [DOI: 10.1007/s10278-024-01133-7]
Abstract
Accurately identifying and locating lesions in chest X-rays has the potential to significantly enhance diagnostic efficiency, quality, and interpretability. However, current methods primarily focus on detecting specific diseases in chest X-rays, disregarding the presence of multiple diseases in a single chest X-ray scan. Moreover, the diversity in lesion locations and attributes introduces complexity in accurately discerning specific traits for each lesion, leading to diminished accuracy when detecting multiple diseases. To address these issues, we propose a novel detection framework that enhances multi-scale lesion feature extraction and fusion, improving lesion position perception and subsequently boosting chest multi-disease detection performance. Initially, we construct a multi-scale lesion feature extraction network to tackle the uniqueness of various lesion features and locations, strengthening the global semantic correlation between lesion features and their positions. Following this, we introduce an instance-aware semantic enhancement network that dynamically amalgamates instance-specific features with high-level semantic representations across various scales. This adaptive integration effectively mitigates the loss of detailed information within lesion regions. Additionally, we perform lesion region feature mapping using candidate boxes to preserve crucial positional information, enhancing the accuracy of chest disease detection across multiple scales. Experimental results on the VinDr-CXR dataset reveal a 6% increase in mean average precision (mAP) and an 8.4% improvement in mean recall (mR) compared with state-of-the-art baselines. This demonstrates the effectiveness of the model in accurately detecting multiple chest diseases by capturing specific features and location information.
Affiliation(s)
- Yubo Yuan
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, China
- Lijun Liu
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, China.
- Key Laboratory of Application in Computer Technology in Yunnan Province, Kunming, 650500, China.
- Xiaobing Yang
- Department of State-Owned Assets and Laboratory Management, Kunming University of Science and Technology, Kunming, 650500, China
- Li Liu
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, China
- Qingsong Huang
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, China
- Department of State-Owned Assets and Laboratory Management, Kunming University of Science and Technology, Kunming, 650500, China

7. Wang L, Wang Q, Wang X, Ma Y, Zhang L, Liu M. Triplet-constrained deep hashing for chest X-ray image retrieval in COVID-19 assessment. Neural Netw 2024; 173:106182. [PMID: 38387203] [DOI: 10.1016/j.neunet.2024.106182]
Abstract
Radiology images of the chest, such as computed tomography scans and X-rays, have been prominently used in computer-aided COVID-19 analysis. Learning-based radiology image retrieval has attracted increasing attention recently; it generally involves image feature extraction and finding matches in extensive image databases based on query images. Many deep hashing methods have been developed for chest radiology image search due to the high efficiency of retrieval using hash codes. However, they often overlook the complex triplet associations between images; that is, images belonging to the same category tend to share similar characteristics and vice versa. To this end, we develop a triplet-constrained deep hashing (TCDH) framework for chest radiology image retrieval to facilitate automated analysis of COVID-19. The TCDH consists of two phases, including (a) feature extraction and (b) image retrieval. For feature extraction, we have introduced a triplet constraint and an image reconstruction task to enhance the discriminative ability of learned features, and these features are then converted into binary hash codes to capture semantic information. Specifically, the triplet constraint is designed to pull closer samples within the same category and push apart samples from different categories. Additionally, an auxiliary image reconstruction task is employed during feature extraction to help effectively capture anatomical structures of images. For image retrieval, we utilize learned hash codes to conduct searches for medical images. Extensive experiments on 30,386 chest X-ray images demonstrate the superiority of the proposed method over several state-of-the-art approaches in automated image search. The code is now available online.
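Two ingredients named in the abstract — a triplet margin constraint on learned embeddings and binarisation into hash codes for Hamming-distance retrieval — can be sketched generically in PyTorch. This is not the TCDH implementation; the dimensions, margin, and toy features are illustrative.

```python
# Hedged sketch: a triplet margin constraint on embeddings plus binarisation into hash codes
# for Hamming-distance retrieval. Not the TCDH implementation; dimensions are illustrative.
import torch
import torch.nn as nn

embed_dim, hash_bits = 512, 64

# Toy projection head from backbone features to a continuous hash embedding in (-1, 1).
hash_head = nn.Sequential(nn.Linear(embed_dim, hash_bits), nn.Tanh())
triplet_loss = nn.TripletMarginLoss(margin=1.0)

# Anchor/positive share a class; the negative comes from a different class.
anchor_feat   = torch.randn(8, embed_dim)
positive_feat = torch.randn(8, embed_dim)
negative_feat = torch.randn(8, embed_dim)

z_a, z_p, z_n = hash_head(anchor_feat), hash_head(positive_feat), hash_head(negative_feat)
loss = triplet_loss(z_a, z_p, z_n)   # pulls same-class pairs together, pushes others apart
loss.backward()

# At retrieval time, continuous outputs are binarised and compared by Hamming distance.
with torch.no_grad():
    db_codes = torch.sign(hash_head(torch.randn(100, embed_dim)))   # database of 100 images
    query    = torch.sign(hash_head(torch.randn(1, embed_dim)))
    hamming  = (db_codes != query).sum(dim=1)                       # distance to each database code
    top5 = torch.topk(-hamming, k=5).indices                        # 5 nearest database entries
    print("Top-5 matches:", top5.tolist())
```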
Affiliation(s)
- Linmin Wang
- School of Mathematics Science, Liaocheng University, Liaocheng, Shandong, 252000, China
- Qianqian Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Xiaochuan Wang
- School of Mathematics Science, Liaocheng University, Liaocheng, Shandong, 252000, China
- Yunling Ma
- School of Mathematics Science, Liaocheng University, Liaocheng, Shandong, 252000, China
- Limei Zhang
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, Shandong, 250101, China.
- Mingxia Liu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA.

8. Veras Magalhães G, L. de S. Santos R, H. S. Vogado L, Cardoso de Paiva A, de Alcântara dos Santos Neto P. XRaySwinGen: Automatic medical reporting for X-ray exams with multimodal model. Heliyon 2024; 10:e27516. [PMID: 38560155] [PMCID: PMC10979158] [DOI: 10.1016/j.heliyon.2024.e27516]
Abstract
The importance of radiology in modern medicine is acknowledged for its non-invasive diagnostic capabilities, yet the manual formulation of unstructured medical reports poses time constraints and error risks. This study addresses a common limitation of artificial intelligence applications in medical image captioning, which typically focus on classification problems and lack detailed information about the patient's condition. Despite advancements in AI-generated medical reports that incorporate descriptive details from X-ray images, which are essential for comprehensive reports, the challenge persists. The proposed solution involves a multimodal model utilizing computer vision for image representation and natural language processing for textual report generation. A notable contribution is the innovative use of the Swin Transformer as the image encoder, enabling hierarchical mapping and enhanced model perception without a surge in parameters or computational costs. The model incorporates GPT-2 as the textual decoder, integrating cross-attention layers and bilingual training with datasets in Portuguese (PT-BR) and English. Promising results are obtained on the proposed dataset (ROUGE-L 0.748, METEOR 0.741) and on the NIH Chest X-ray dataset (ROUGE-L 0.404, METEOR 0.393).
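An encoder-decoder pairing of this kind (a Swin Transformer image encoder and a GPT-2 text decoder joined by cross-attention) can be assembled with Hugging Face Transformers' generic VisionEncoderDecoderModel, as sketched below. The checkpoints and image path are illustrative assumptions; this is not the XRaySwinGen code, and the model would need fine-tuning on image-report pairs before its output is meaningful.

```python
# Hedged sketch: join a Swin Transformer encoder and a GPT-2 decoder through cross-attention
# using the generic VisionEncoderDecoderModel. Checkpoints and the image path are assumptions.
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer
from PIL import Image

encoder_ckpt = "microsoft/swin-base-patch4-window7-224"   # assumed encoder checkpoint
decoder_ckpt = "gpt2"                                      # assumed decoder checkpoint

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_ckpt, decoder_ckpt)
processor = AutoImageProcessor.from_pretrained(encoder_ckpt)
tokenizer = AutoTokenizer.from_pretrained(decoder_ckpt)

# GPT-2 has no pad token by default; reuse EOS and register the special token ids.
tokenizer.pad_token = tokenizer.eos_token
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# Generate a draft report for one chest X-ray (meaningful only after fine-tuning).
image = Image.open("example_cxr.png").convert("RGB")       # illustrative path
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_new_tokens=80, num_beams=4)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```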
Affiliation(s)
- Luis H. S. Vogado
- Departamento de Computação, Universidade Federal do Piauí, Teresina, Brazil

9. Kantipudi K, Gu J, Bui V, Yu H, Jaeger S, Yaniv Z. Automated Pulmonary Tuberculosis Severity Assessment on Chest X-rays. J Imaging Inform Med 2024:10.1007/s10278-024-01052-7. [PMID: 38587769] [DOI: 10.1007/s10278-024-01052-7]
Abstract
According to the 2022 World Health Organization's Global Tuberculosis (TB) report, an estimated 10.6 million people fell ill with TB, and 1.6 million died from the disease in 2021. In addition, 2021 saw a reversal of a decades-long trend of declining TB infections and deaths, with an estimated increase of 4.5% in the number of people who fell ill with TB compared to 2020, and an estimated yearly increase of 450,000 cases of drug resistant TB. Estimating the severity of pulmonary TB using frontal chest X-rays (CXR) can enable better resource allocation in resource constrained settings and monitoring of treatment response, enabling prompt treatment modifications if disease severity does not decrease over time. The Timika score is a clinically used TB severity score based on a CXR reading. This work proposes and evaluates three deep learning-based approaches for predicting the Timika score with varying levels of explainability. The first approach uses two deep learning-based models, one to explicitly detect lesion regions using YOLOV5n and another to predict the presence of cavitation using DenseNet121, which are then utilized in score calculation. The second approach uses a DenseNet121-based regression model to directly predict the affected lung percentage and another to predict cavitation presence using a DenseNet121-based classification model. Finally, the third approach directly predicts the Timika score using a DenseNet121-based regression model. The best performance is achieved by the second approach with a mean absolute error of 13-14% and a Pearson correlation of 0.7-0.84 using three held-out datasets for evaluating generalization.
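For orientation, the second approach described above feeds two predictions (an affected-lung percentage and a cavitation probability) into the severity score. The sketch below combines them using the commonly cited Timika/Ralph rule — affected percentage plus 40 points when cavitation is present — which is an assumption here, as are the placeholder model outputs.

```python
# Hedged sketch: combining a predicted affected-lung percentage and a predicted cavitation
# probability into a Timika-style severity score. The "+40 if cavitation" rule follows the
# commonly cited Ralph/Timika definition; threshold and model outputs are illustrative.

def timika_score(affected_lung_pct: float, cavitation_prob: float,
                 cavitation_threshold: float = 0.5) -> float:
    """Return an estimated Timika severity score from model predictions."""
    cavitation_present = cavitation_prob >= cavitation_threshold
    return affected_lung_pct + (40.0 if cavitation_present else 0.0)

# Placeholder predictions standing in for the regression and classification models.
predicted_pct = 27.5         # affected-lung percentage from the regression model (assumed)
predicted_cavitation = 0.81  # cavitation probability from the classification model (assumed)

print(f"Estimated Timika score: {timika_score(predicted_pct, predicted_cavitation):.1f}")
```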
Affiliation(s)
- Karthik Kantipudi
- Office of Cyber Infrastructure and Computational Biology, National Institute of Allergy and Infectious Diseases, Bethesda, 20892, MD, USA.
- Jingwen Gu
- Office of Cyber Infrastructure and Computational Biology, National Institute of Allergy and Infectious Diseases, Bethesda, 20892, MD, USA
- Vy Bui
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, 20894, MD, USA
- Hang Yu
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, 20894, MD, USA
- Stefan Jaeger
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, 20894, MD, USA
- Ziv Yaniv
- Office of Cyber Infrastructure and Computational Biology, National Institute of Allergy and Infectious Diseases, Bethesda, 20892, MD, USA.

10. Russo C, Bria A, Marrocco C. GravityNet for end-to-end small lesion detection. Artif Intell Med 2024; 150:102842. [PMID: 38553147] [DOI: 10.1016/j.artmed.2024.102842]
Abstract
This paper introduces a novel one-stage end-to-end detector specifically designed to detect small lesions in medical images. Precise localization of small lesions presents challenges due to their appearance and the diverse contextual backgrounds in which they are found. To address this, our approach introduces a new type of pixel-based anchor that dynamically moves towards the targeted lesion for detection. We refer to this new architecture as GravityNet, and the novel anchors as gravity points since they appear to be "attracted" by the lesions. We conducted experiments on two well-established medical problems involving small lesions to evaluate the performance of the proposed approach: microcalcifications detection in digital mammograms and microaneurysms detection in digital fundus images. Our method demonstrates promising results in effectively detecting small lesions in these medical imaging tasks.
Affiliation(s)
- Ciro Russo
- Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy.
- Alessandro Bria
- Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy.
- Claudio Marrocco
- Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy.

11. Topff L, Steltenpool S, Ranschaert ER, Ramanauskas N, Menezes R, Visser JJ, Beets-Tan RGH, Hartkamp NS. Artificial intelligence-assisted double reading of chest radiographs to detect clinically relevant missed findings: a two-centre evaluation. Eur Radiol 2024:10.1007/s00330-024-10676-w. [PMID: 38466390] [DOI: 10.1007/s00330-024-10676-w]
Abstract
OBJECTIVES To evaluate an artificial intelligence (AI)-assisted double reading system for detecting clinically relevant missed findings on routinely reported chest radiographs. METHODS A retrospective study was performed in two institutions, a secondary care hospital and tertiary referral oncology centre. Commercially available AI software performed a comparative analysis of chest radiographs and radiologists' authorised reports using a deep learning and natural language processing algorithm, respectively. The AI-detected discrepant findings between images and reports were assessed for clinical relevance by an external radiologist, as part of the commercial service provided by the AI vendor. The selected missed findings were subsequently returned to the institution's radiologist for final review. RESULTS In total, 25,104 chest radiographs of 21,039 patients (mean age 61.1 years ± 16.2 [SD]; 10,436 men) were included. The AI software detected discrepancies between imaging and reports in 21.1% (5289 of 25,104). After review by the external radiologist, 0.9% (47 of 5289) of cases were deemed to contain clinically relevant missed findings. The institution's radiologists confirmed 35 of 47 missed findings (74.5%) as clinically relevant (0.1% of all cases). Missed findings consisted of lung nodules (71.4%, 25 of 35), pneumothoraces (17.1%, 6 of 35) and consolidations (11.4%, 4 of 35). CONCLUSION The AI-assisted double reading system was able to identify missed findings on chest radiographs after report authorisation. The approach required an external radiologist to review the AI-detected discrepancies. The number of clinically relevant missed findings by radiologists was very low. CLINICAL RELEVANCE STATEMENT The AI-assisted double reader workflow was shown to detect diagnostic errors and could be applied as a quality assurance tool. Although clinically relevant missed findings were rare, there is potential impact given the common use of chest radiography. KEY POINTS • A commercially available double reading system supported by artificial intelligence was evaluated to detect reporting errors in chest radiographs (n=25,104) from two institutions. • Clinically relevant missed findings were found in 0.1% of chest radiographs and consisted of unreported lung nodules, pneumothoraces and consolidations. • Applying AI software as a secondary reader after report authorisation can assist in reducing diagnostic errors without interrupting the radiologist's reading workflow. However, the number of AI-detected discrepancies was considerable and required review by a radiologist to assess their relevance.
Affiliation(s)
- Laurens Topff
- Department of Radiology, Netherlands Cancer Institute, Amsterdam, The Netherlands.
- GROW School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands.
- Sanne Steltenpool
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Department of Radiology, Elisabeth-TweeSteden Hospital, Tilburg, The Netherlands
- Erik R Ranschaert
- Department of Radiology, St. Nikolaus Hospital, Eupen, Belgium
- Ghent University, Ghent, Belgium
- Naglis Ramanauskas
- Oxipit UAB, Vilnius, Lithuania
- Department of Radiology, Nuclear Medicine and Medical Physics, Institute of Biomedical Sciences, Faculty of Medicine, Vilnius University, Vilnius, Lithuania
- Renee Menezes
- Biostatistics Centre, Department of Psychosocial Research and Epidemiology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- Jacob J Visser
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Regina G H Beets-Tan
- Department of Radiology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- GROW School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Nolan S Hartkamp
- Department of Radiology, Elisabeth-TweeSteden Hospital, Tilburg, The Netherlands

12. Yoshida A, Kai C, Futamura H, Oochi K, Kondo S, Sato I, Kasai S. Spirometry test values can be estimated from a single chest radiograph. Front Med (Lausanne) 2024; 11:1335958. [PMID: 38510449] [PMCID: PMC10953498] [DOI: 10.3389/fmed.2024.1335958]
Abstract
Introduction Physical measurements of expiratory flow volume and speed can be obtained using spirometry. These measurements have been used for the diagnosis and risk assessment of chronic obstructive pulmonary disease and play a crucial role in delivering early care. However, spirometry is not performed frequently in routine clinical practice, thereby hindering the early detection of pulmonary function impairment. Chest radiographs (CXRs), though acquired frequently, are not used to measure pulmonary functional information. This study aimed to evaluate whether spirometry parameters can be estimated accurately from a single frontal CXR without image findings using deep learning. Methods Forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), and FEV1/FVC as spirometry measurements, as well as the corresponding chest radiographs of 11,837 participants, were used in this study. The data were randomly allocated to the training, validation, and evaluation datasets at an 8:1:1 ratio. A deep learning network was pretrained using ImageNet. The input and output information were CXRs and spirometry test values, respectively. The training and evaluation of the deep learning network were performed separately for each parameter. The mean absolute percentage error (MAPE) and Pearson's correlation coefficient (r) were used as the evaluation indices. Results The MAPEs between the spirometry measurements and AI estimates for FVC, FEV1 and FEV1/FVC were 7.59% (r = 0.910), 9.06% (r = 0.879) and 5.21% (r = 0.522), respectively. A strong positive correlation was observed between the measured and predicted indices of FVC and FEV1. An average accuracy of >90% was obtained in each estimation of the spirometry indices. Bland-Altman analysis revealed good agreement between the estimated and measured values for FVC and FEV1. Discussion Frontal CXRs contain information related to pulmonary function, and AI estimation performed using frontal CXRs without image findings could accurately estimate spirometry values. The network proposed for estimating pulmonary function in this study could serve as a recommendation for performing spirometry or as an alternative method, suggesting its utility.
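The two evaluation indices used in this study, MAPE and Pearson's r, are straightforward to compute; the sketch below does so on synthetic FVC-like values and is purely illustrative.

```python
# Hedged sketch: mean absolute percentage error (MAPE) and Pearson's correlation coefficient (r)
# computed on synthetic FVC-like values. Illustrative only; not the study's data or model.
import numpy as np

rng = np.random.default_rng(42)
measured_fvc = rng.uniform(2.0, 5.0, size=200)                   # litres (synthetic)
predicted_fvc = measured_fvc * rng.normal(1.0, 0.08, size=200)   # estimates with ~8% noise

mape = np.mean(np.abs(predicted_fvc - measured_fvc) / measured_fvc) * 100
r = np.corrcoef(measured_fvc, predicted_fvc)[0, 1]

print(f"MAPE: {mape:.2f}%   Pearson r: {r:.3f}")
```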
Affiliation(s)
- Akifumi Yoshida
- Department of Radiological Technology, Faculty of Medical Technology, Niigata University of Health and Welfare, Niigata, Japan
- Chiharu Kai
- Department of Radiological Technology, Faculty of Medical Technology, Niigata University of Health and Welfare, Niigata, Japan
- Major in Health and Welfare, Graduate School of Niigata University of Health and Welfare, Niigata, Japan
- Satoshi Kondo
- Graduate School of Engineering, Muroran Institute of Technology, Muroran, Japan
- Ikumi Sato
- Major in Health and Welfare, Graduate School of Niigata University of Health and Welfare, Niigata, Japan
- Department of Nursing, Faculty of Nursing, Niigata University of Health and Welfare, Niigata, Japan
- Satoshi Kasai
- Department of Radiological Technology, Faculty of Medical Technology, Niigata University of Health and Welfare, Niigata, Japan

13. Takamatsu A, Ueno M, Yoshida K, Kobayashi T, Kobayashi S, Gabata T. Performance of artificial intelligence-based software for the automatic detection of lung lesions on chest radiographs of patients with suspected lung cancer. Jpn J Radiol 2024; 42:291-299. [PMID: 38032419] [PMCID: PMC10899395] [DOI: 10.1007/s11604-023-01503-1]
Abstract
PURPOSE This study aimed to evaluate the performance of the commercially available artificial intelligence-based software CXR-AID for the automatic detection of pulmonary nodules on the chest radiographs of patients suspected of having lung cancer. MATERIALS AND METHODS This retrospective study included 399 patients with clinically suspected lung cancer who underwent CT and chest radiography within 1 month between June 2020 and May 2022. The candidate areas on chest radiographs identified by CXR-AID were categorized into target (properly detected areas) and non-target (improperly detected areas) areas. The non-target areas were further divided into non-target normal areas (false positives for normal structures) and non-target abnormal areas. The visibility score, characteristics and location of the nodules, presence of overlapping structures, and background lung score and presence of pulmonary disease were manually evaluated and compared between the nodules detected or undetected by CXR-AID. The probability indices calculated by CXR-AID were compared between the target and non-target areas. RESULTS Among the 450 nodules detected in 399 patients, 331 nodules detected in 313 patients were visible on chest radiographs during manual evaluation. CXR-AID detected 264 of these 331 nodules with a sensitivity of 0.80. The detection sensitivity increased significantly with the visibility score. No significant correlation was observed between the background lung score and sensitivity. The non-target area per image was 0.85, and the probability index of the non-target area was lower than that of the target area. The non-target normal area per image was 0.24. Larger and more solid nodules exhibited higher sensitivities, while nodules with overlapping structures demonstrated lower detection sensitivities. CONCLUSION The nodule detection sensitivity of CXR-AID on chest radiographs was 0.80, and the non-target and non-target normal areas per image were 0.85 and 0.24, respectively. Larger, solid nodules without overlapping structures were detected more readily by CXR-AID.
Affiliation(s)
- Atsushi Takamatsu
- Department of Radiology, Kanazawa University Graduate School of Medical Sciences, 13-1 Takaramachi, Kanazawa, Ishikawa, 920-8641, Japan
- Midori Ueno
- Department of Radiology, Kanazawa University Graduate School of Medical Sciences, 13-1 Takaramachi, Kanazawa, Ishikawa, 920-8641, Japan
- Kotaro Yoshida
- Department of Radiology, Kanazawa University Graduate School of Medical Sciences, 13-1 Takaramachi, Kanazawa, Ishikawa, 920-8641, Japan.
- Takeshi Kobayashi
- Department of Diagnostic and Interventional Radiology, Ishikawa Prefectural Central Hospital, Kanazawa, Ishikawa, 920-8530, Japan
- Satoshi Kobayashi
- Department of Radiology, Kanazawa University Graduate School of Medical Sciences, 13-1 Takaramachi, Kanazawa, Ishikawa, 920-8641, Japan
- Toshifumi Gabata
- Department of Radiology, Kanazawa University Graduate School of Medical Sciences, 13-1 Takaramachi, Kanazawa, Ishikawa, 920-8641, Japan

14. Zhang K, Liang W, Cao P, Liu X, Yang J, Zaiane O. Label correlation guided discriminative label feature learning for multi-label chest image classification. Comput Methods Programs Biomed 2024; 245:108032. [PMID: 38244339] [DOI: 10.1016/j.cmpb.2024.108032]
Abstract
BACKGROUND AND OBJECTIVE Multi-label Chest X-ray (CXR) images often contain rich label relationship information, which is beneficial to improve classification performance. However, because of the intricate relationships among labels, most existing works fail to effectively learn and make full use of the label correlations, resulting in limited classification performance. In this study, we propose a multi-label learning framework that learns and leverages the label correlations to improve multi-label CXR image classification. METHODS In this paper, we capture the global label correlations through the self-attention mechanism. Meanwhile, to better utilize label correlations for guiding feature learning, we decompose the image-level features into label-level features. Furthermore, we enhance label-level feature learning in an end-to-end manner by a consistency constraint between global and local label correlations, and a label correlation guided multi-label supervised contrastive loss. RESULTS To demonstrate the superior performance of our proposed approach, we conduct three times 5-fold cross-validation experiments on the CheXpert dataset. Our approach obtains an average F1 score of 44.6% and an AUC of 76.5%, achieving a 7.7% and 1.3% improvement compared to the state-of-the-art results. CONCLUSION More accurate label correlations and full utilization of the learned label correlations help learn more discriminative label-level features. Experimental results demonstrate that our approach achieves exceptionally competitive performance compared to the state-of-the-art algorithms.
Affiliation(s)
- Kai Zhang
- Computer Science and Engineering, Northeastern University, Shenyang, China
- Wei Liang
- Computer Science and Engineering, Northeastern University, Shenyang, China
- Peng Cao
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang, China; National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Shenyang, China.
- Xiaoli Liu
- DAMO Academy, Alibaba Group, Hangzhou, China
- Jinzhu Yang
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang, China; National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Shenyang, China
- Osmar Zaiane
- Alberta Machine Intelligence Institute, University of Alberta, Edmonton, Alberta, Canada

15. Yu SN, Chiu MC, Chang YP, Liang CY, Chen W. Improving Computer-Aided Thoracic Disease Diagnosis through Comparative Analysis Using Chest X-ray Images Taken at Different Times. Sensors (Basel) 2024; 24:1478. [PMID: 38475013] [DOI: 10.3390/s24051478]
Abstract
Medical professionals in thoracic medicine routinely analyze chest X-ray images, often comparing pairs of images taken at different times to detect lesions or anomalies in patients. This research aims to design a computer-aided diagnosis system that enhances the efficiency of thoracic physicians in comparing and diagnosing X-ray images, ultimately reducing misjudgments. The proposed system encompasses four key components: segmentation, alignment, comparison, and classification of lung X-ray images. Using the public NIH Chest X-ray14 dataset and a local dataset gathered by the Chiayi Christian Hospital in Taiwan, the efficacy of traditional methods and deep-learning methods was compared. Experimental results indicate that, in both the segmentation and alignment stages, the deep-learning method outperforms the traditional method, achieving higher average IoU, higher detection rates, and significantly reduced processing time. In the comparison stage, we designed nonlinear transfer functions to highlight the differences between pre- and post-images through heat maps. In the classification stage, single-input and dual-input network architectures were proposed. The inclusion of difference information in single-input networks enhances AUC by approximately 1%, and dual-input networks achieve a 1.2-1.4% AUC increase, underscoring the importance of difference images in lung disease identification and classification based on chest X-ray images. While the proposed system is still in its early stages and far from clinical application, the results demonstrate potential steps forward in the development of a comprehensive computer-aided diagnostic system for comparative analysis of chest X-ray images.
Affiliation(s)
- Sung-Nien Yu
- Department of Electrical Engineering, National Chung Cheng University, Chiayi County 621301, Taiwan
- Center for Innovative Research on Aging Society (CIRAS), National Chung Cheng University, Chiayi County 621301, Taiwan
- Meng-Chin Chiu
- Department of Electrical Engineering, National Chung Cheng University, Chiayi County 621301, Taiwan
- Yu Ping Chang
- Department of Electrical Engineering, National Chung Cheng University, Chiayi County 621301, Taiwan
- Chi-Yen Liang
- Division of Pulmonary and Critical Care Medicine, Chiayi Christian Hospital, Chiayi County 600566, Taiwan
- Wei Chen
- Division of Pulmonary and Critical Care Medicine, Chiayi Christian Hospital, Chiayi County 600566, Taiwan

16. Bernardi S, Vallati M, Gatta R. Artificial Intelligence-Based Management of Adult Chronic Myeloid Leukemia: Where Are We and Where Are We Going? Cancers (Basel) 2024; 16:848. [PMID: 38473210] [DOI: 10.3390/cancers16050848]
Abstract
Artificial intelligence (AI) is emerging as a discipline capable of providing significant added value in medicine, in particular in radiomics, imaging analysis, big-dataset analysis, and the generation of virtual patient cohorts. However, in coping with chronic myeloid leukemia (CML), which has been considered an easily managed malignancy since the introduction of TKIs, which strongly improved the life expectancy of patients, AI is still in its infancy. Noteworthy, the findings of initial trials are intriguing and encouraging, both in terms of performance and adaptability to different contexts in which AI can be applied. Indeed, the improvement of diagnosis and prognosis by leveraging biochemical, biomolecular, imaging, and clinical data can be crucial for the implementation of the personalized medicine paradigm or the streamlining of procedures and services. In this review, we present the state of the art of AI applications in the field of CML, describing the techniques and objectives, with a general focus that goes beyond machine learning (ML) and instead embraces the wider AI field. The present scoping review spans publications reported in PubMed from 2003 to 2023, retrieved by searching "chronic myeloid leukemia" and "artificial intelligence". The time frame reflects actual literature production and was not restricted. We also take the opportunity to discuss the main pitfalls and key points to which AI must respond, especially considering the critical role of the 'human' factor, which remains key in this domain.
Affiliation(s)
- Simona Bernardi
- Department of Clinical and Experimental Sciences, University of Brescia, 25123 Brescia, Italy
- CREA-Centro di Ricerca Emato-Oncologica AIL, ASST Spedali Civili of Brescia, 25123 Brescia, Italy
- Mauro Vallati
- School of Computing and Engineering, University of Huddersfield, Huddersfield HD1 3DH, UK
- Roberto Gatta
- Department of Clinical and Experimental Sciences, University of Brescia, 25123 Brescia, Italy

17. Fan W, Yang Y, Qi J, Zhang Q, Liao C, Wen L, Wang S, Wang G, Xia Y, Wu Q, Fan X, Chen X, He M, Xiao J, Yang L, Liu Y, Chen J, Wang B, Zhang L, Yang L, Gan H, Zhang S, Liu G, Ge X, Cai Y, Zhao G, Zhang X, Xie M, Xu H, Zhang Y, Chen J, Li J, Han S, Mu K, Xiao S, Xiong T, Nian Y, Zhang D. A deep-learning-based framework for identifying and localizing multiple abnormalities and assessing cardiomegaly in chest X-ray. Nat Commun 2024; 15:1347. [PMID: 38355644] [PMCID: PMC10867134] [DOI: 10.1038/s41467-024-45599-z]
Abstract
Accurate identification and localization of multiple abnormalities are crucial steps in the interpretation of chest X-rays (CXRs); however, the lack of a large CXR dataset with bounding boxes severely constrains accurate localization research based on deep learning. We created a large CXR dataset named CXR-AL14, containing 165,988 CXRs and 253,844 bounding boxes. On the basis of this dataset, a deep-learning-based framework was developed to identify and localize 14 common abnormalities and calculate the cardiothoracic ratio (CTR) simultaneously. The mean average precision values obtained by the model for 14 abnormalities reached 0.572-0.631 with an intersection-over-union threshold of 0.5, and the intraclass correlation coefficient of the CTR algorithm exceeded 0.95 on the held-out, multicentre and prospective test datasets. This framework shows an excellent performance, good generalization ability and strong clinical applicability, which is superior to senior radiologists and suitable for routine clinical settings.
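The cardiothoracic ratio computed by the framework follows the standard definition (maximal horizontal cardiac width divided by maximal internal thoracic width). The sketch below applies that definition to synthetic segmentation masks; it is not the authors' algorithm.

```python
# Hedged sketch: cardiothoracic ratio (CTR) from binary masks, i.e. maximal horizontal cardiac
# width divided by maximal internal thoracic width. Standard CTR arithmetic on synthetic masks,
# not the authors' specific algorithm.
import numpy as np

def width_from_mask(mask: np.ndarray) -> int:
    """Maximal horizontal extent (in pixels) of a binary mask."""
    cols = np.where(mask.any(axis=0))[0]
    return int(cols.max() - cols.min() + 1) if cols.size else 0

h, w = 512, 512
heart_mask = np.zeros((h, w), dtype=bool)
thorax_mask = np.zeros((h, w), dtype=bool)
heart_mask[260:400, 190:330] = True    # synthetic heart region
thorax_mask[80:460, 60:450] = True     # synthetic inner thoracic cavity

ctr = width_from_mask(heart_mask) / width_from_mask(thorax_mask)
print(f"Cardiothoracic ratio: {ctr:.2f}")   # values above ~0.5 suggest cardiomegaly on a PA film
```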
Affiliation(s)
- Weijie Fan
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yi Yang
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China
- Jing Qi
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China
- Qichuan Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Cuiwei Liao
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Li Wen
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Shuang Wang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Guangxian Wang
- Department of Radiology, People's Hospital of Banan, Chongqing Medical University, Chongqing, 401320, P. R. China
- Yu Xia
- Department of Radiology, Xishui hospital of Traditional Chinese Medicine, Zunyi of Guizhou province, 564600, P. R. China
- Qihua Wu
- Department of Radiology, People's Hospital of Nanchuan, Chongqing, 408400, P. R. China
- Xiaotao Fan
- Department of Radiology, Fengdu People's Hospital, Chongqing, 408200, P. R. China
- Xingcai Chen
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China
- Mi He
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China
- JingJing Xiao
- Department of Medical Engineering, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Liu Yang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yun Liu
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Jia Chen
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Bing Wang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Lei Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Liuqing Yang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Hui Gan
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Shushu Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Guofang Liu
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Xiaodong Ge
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yuanqing Cai
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Gang Zhao
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Xi Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Mingxun Xie
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Huilin Xu
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yi Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Jiao Chen
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Jun Li
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Shuang Han
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Ke Mu
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Shilin Xiao
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Tingwei Xiong
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yongjian Nian
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China.
- Dong Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China.

18. Shah IA, Mishra S. Artificial intelligence in advancing occupational health and safety: an encapsulation of developments. J Occup Health 2024; 66:uiad017. [PMID: 38334203] [PMCID: PMC10878366] [DOI: 10.1093/joccuh/uiad017]
Abstract
OBJECTIVES In an era characterized by dynamic technological advancements, the well-being of the workforce remains a cornerstone of progress and sustainability. The evolving industrial landscape in the modern world has had a considerable influence on occupational health and safety (OHS). Ensuring the well-being of workers and creating safe working environments are not only ethical imperatives but also integral to maintaining operational efficiency and productivity. We aim to review advancements with the potential to reshape workplace safety through the integration of artificial intelligence (AI)-driven technologies that prevent occupational diseases and promote safety solutions. METHODS The published literature was identified using the scientific databases Embase, PubMed, and Google Scholar, with a lower time bound of 1974 to capture chronological advances in occupational disease detection and technological solutions employed in industrial settings. RESULTS AI-driven technologies are revolutionizing how organizations approach health and safety, offering predictive insights, real-time monitoring, and risk mitigation strategies that not only minimize accidents and hazards but also pave the way for a more proactive and responsive approach to safeguarding the workforce. CONCLUSION As industries embrace the transformative potential of AI, a new frontier of possibilities emerges for enhancing workplace safety. This synergy between OHS and AI marks a pivotal moment in the quest for safer, healthier, and more sustainable workplaces.
Collapse
Affiliation(s)
- Immad A Shah
- Division of Health Sciences, ICMR-National Institute of Occupational Health, Ahmedabad, Gujarat, India
| | - SukhDev Mishra
- Department of Biostatistics, Division of Health Sciences, ICMR-National Institute of Occupational Health, Ahmedabad, Gujarat, India
| |
Collapse
|
19
|
Zhang M, Wang Y, Lv M, Sang L, Wang X, Yu Z, Yang Z, Wang Z, Sang L. Trends and Hotspots in Global Radiomics Research: A Bibliometric Analysis. Technol Cancer Res Treat 2024; 23:15330338241235769. [PMID: 38465611 DOI: 10.1177/15330338241235769] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/12/2024] Open
Abstract
Objectives: The purpose of this research is to summarize the structure of radiomics-based knowledge and to explore potential trends and priorities by using bibliometric analysis. Methods: Radiomics-related publications from 2012 to October 2022 were retrieved from the Web of Science Core Collection. VOSviewer (version 1.6.18), CiteSpace (version 6.1.3), Tableau (version 2022), Microsoft Excel, RStudio, and the free online platform (http://bibliometric.com) were used for co-authorship, co-citation, and co-occurrence analyses of countries, institutions, authors, references, and keywords in the field, together with visual analysis. Results: The study included 6428 articles. Since 2012, the number of radiomics-based research papers has increased steadily. Judging by publications, China has made the largest contribution in this area. The most productive institution and author were Fudan University and Tian Jie, respectively. The three journals with the most publications were Frontiers in Oncology, European Radiology, and Cancers. According to the reference and keyword analyses, "deep learning, nomogram, ultrasound, f-18-fdg, machine learning, covid-19, radiogenomics" were identified as the main future research directions. Conclusion: Radiomics is in a phase of vigorous development with broad prospects. Cross-border cooperation between countries and institutions should be strengthened in the future. It can be predicted that the development of deep learning-based models and multimodal fusion models will be the focus of future research. Advances in knowledge: This study explores the current state of research and hot spots in the field of radiomics from multiple perspectives, comprehensively and objectively reflecting evolving trends in imaging-related research and providing a reference for future work.
Collapse
Affiliation(s)
- Minghui Zhang
- Department of Ultrasound, The First Hospital of China Medical University, Shenyang, P. R. China
| | - Yan Wang
- Department of Ultrasound, The First Hospital of China Medical University, Shenyang, P. R. China
| | - Mutian Lv
- Department of Nuclear Medicine, The First Hospital of China Medical University, Shenyang, P. R. China
| | - Li Sang
- Department of Acupuncture and Massage, Shouguang Hospital of Traditional Chinese Medicine, Weifang, P. R. China
| | - Xuemei Wang
- Department of Ultrasound, The First Hospital of China Medical University, Shenyang, P. R. China
| | - Zijun Yu
- Department of Ultrasound, The First Hospital of China Medical University, Shenyang, P. R. China
| | - Ziyi Yang
- Department of Ultrasound, The First Hospital of China Medical University, Shenyang, P. R. China
| | - Zhongqing Wang
- Department of Information Center, The First Hospital of China Medical University, Shenyang, P. R. China
| | - Liang Sang
- Department of Ultrasound, The First Hospital of China Medical University, Shenyang, P. R. China
| |
Collapse
|
20
|
Jeong Y, Jeong C, Sung KY, Moon G, Lim J. Development of AI-Based Diagnostic Algorithm for Nasal Bone Fracture Using Deep Learning. J Craniofac Surg 2024; 35:29-32. [PMID: 38294297 DOI: 10.1097/scs.0000000000009856] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 10/08/2023] [Indexed: 02/01/2024] Open
Abstract
Facial bone fractures are relatively common, and the nasal bone is the most frequently fractured facial bone. Computed tomography is the gold standard for diagnosing such fractures. Most nasal bone fractures can be treated using a closed reduction. However, delayed diagnosis may cause nasal deformity or other complications that are difficult and expensive to treat. In this study, the authors developed an algorithm for diagnosing nasal fractures by training a deep learning model on facial bone computed tomography images. The algorithm achieved significant concordance with physicians' readings, with 100% sensitivity and 77% specificity. Herein, the authors report the results of a pilot study on the first stage of developing an algorithm for analyzing facial bone fractures.
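For readers unfamiliar with the reported metrics, the following sketch shows how 100% sensitivity and 77% specificity arise from a confusion matrix; the counts are hypothetical placeholders, not the study's data.

```python
# Illustrative only: how 100% sensitivity and 77% specificity arise from a
# confusion matrix. The counts below are hypothetical, not the study's data.

def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # fraction of true fractures detected
    specificity = tn / (tn + fp)   # fraction of non-fractures correctly cleared
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=50, fn=0, tn=77, fp=23)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 100%, 77%
```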
Collapse
Affiliation(s)
- Yeonjin Jeong
- Department of Plastic and Reconstructive Surgery, National Medical Center, Seoul, Korea
| | - Chanho Jeong
- Department of Plastic and Reconstructive Surgery, Kangwon National University Hospital, Kangwon-do, Korea
| | - Kun-Yong Sung
- Department of Plastic and Reconstructive Surgery, Kangwon National University Hospital, Kangwon-do, Korea
| | - Gwiseong Moon
- Department of Computer Science and Engineering, Kangwon National University, Kangwon-do, Korea
| | - Jinsoo Lim
- Department of Plastic and Reconstructive Surgery, College of Medicine, The Catholic University of Korea, St. Vincent's Hospital, Gyeonggi-do, Korea
| |
Collapse
|
21
|
Azarfar G, Ko SB, Adams SJ, Babyn PS. Deep learning-based age estimation from chest CT scans. Int J Comput Assist Radiol Surg 2024; 19:119-127. [PMID: 37418109 DOI: 10.1007/s11548-023-02989-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Accepted: 06/14/2023] [Indexed: 07/08/2023]
Abstract
PURPOSE Medical imaging can be used to estimate a patient's biological age, which may provide complementary information to clinicians compared to chronological age. In this study, we aimed to develop a method to estimate a patient's age based on their chest CT scan. Additionally, we investigated whether chest CT estimated age is a more accurate predictor of lung cancer risk compared to chronological age. METHODS To develop our age prediction model, we utilized composite CT images and Inception-ResNet-v2. The model was trained, validated, and tested on 13,824 chest CT scans from the National Lung Screening Trial, with 91% for training, 5% for validation, and 4% for testing. Additionally, we independently tested the model on 1849 CT scans collected locally. To assess chest CT estimated age as a risk factor for lung cancer, we computed the relative lung cancer risk between two groups. Group 1 consisted of individuals assigned a CT age older than their chronological age, while Group 2 comprised those assigned a CT age younger than their chronological age. RESULTS Our analysis revealed a mean absolute error of 1.84 years and a Pearson's correlation coefficient of 0.97 for our local data when comparing chronological age with the estimated CT age. The model showed the most activation in the area associated with the lungs during age estimation. The relative risk for lung cancer was 1.82 (95% confidence interval, 1.65-2.02) for individuals assigned a CT age older than their chronological age compared to those assigned a CT age younger than their chronological age. CONCLUSION Findings suggest that chest CT age captures some aspects of biological aging and may be a more accurate predictor of lung cancer risk than chronological age. Future studies with larger and more diverse patient cohorts are required to generalize these interpretations.
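The relative-risk comparison can be illustrated with a short sketch; the event counts below are hypothetical placeholders, and the confidence interval uses the standard log-relative-risk normal approximation, which may differ from the authors' exact method.

```python
import math

# Sketch of the relative-risk comparison described above: lung-cancer incidence in
# the group with CT age > chronological age vs. the group with CT age < chronological
# age. The event counts are hypothetical placeholders, not the study's data.

def relative_risk(events_1, total_1, events_2, total_2):
    rr = (events_1 / total_1) / (events_2 / total_2)
    se_log_rr = math.sqrt(1/events_1 - 1/total_1 + 1/events_2 - 1/total_2)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, (lo, hi)

rr, ci = relative_risk(events_1=320, total_1=6500, events_2=180, total_2=6700)
print(f"RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```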
Collapse
Affiliation(s)
- Ghazal Azarfar
- Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK, Canada.
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK, Canada.
| | - Seok-Bum Ko
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK, Canada
| | - Scott J Adams
- Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK, Canada
| | - Paul S Babyn
- Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK, Canada
| |
Collapse
|
22
|
Han SJ, Yi PH. Is the Juice Worth the Squeeze? Learning Curve of a Chest Radiograph Semantic Labeling Deep Learning Model. J Thorac Imaging 2024; 39:W11-W12. [PMID: 37889555 DOI: 10.1097/rti.0000000000000755] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2023]
Affiliation(s)
- Samuel J Han
- Georgetown University School of Medicine, Washington, DC
- University of Maryland Medical Intelligent Imaging (UM2ii) Center
| | - Paul H Yi
- University of Maryland Medical Intelligent Imaging (UM2ii) Center
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD
| |
Collapse
|
23
|
Rocha J, Pereira SC, Pedrosa J, Campilho A, Mendonça AM. STERN: Attention-driven Spatial Transformer Network for abnormality detection in chest X-ray images. Artif Intell Med 2024; 147:102737. [PMID: 38184361 DOI: 10.1016/j.artmed.2023.102737] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 11/16/2023] [Accepted: 11/28/2023] [Indexed: 01/08/2024]
Abstract
Chest X-ray scans are frequently requested to detect the presence of abnormalities, due to their low cost and non-invasive nature. The interpretation of these images can be automated to prioritize more urgent exams through deep learning models, but the presence of image artifacts, e.g. lettering, often generates a harmful bias in the classifiers and an increase in false-positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise, in which an image is either normal or abnormal, using an attention-driven and spatially unsupervised Spatial Transformer Network (STERN) that takes advantage of a novel domain-specific loss to better frame the region of interest. Unlike the state of the art, in which this type of network is usually employed for image alignment, this work proposes a spatial transformer module that is used specifically for attention, as an alternative to the standard object detection models that typically precede the classifier to crop out the region of interest. In sum, the proposed end-to-end architecture dynamically scales and aligns the input images to maximize the classifier's performance, by selecting the thorax with translation and non-isotropic scaling transformations, and thus eliminating artifacts. Additionally, this paper provides an extensive and objective analysis of the selected regions of interest, by proposing a set of mathematical evaluation metrics. The results indicate that the STERN achieves similar results to using YOLO-cropped images, with reduced computational cost and without the need for localization labels. More specifically, the system is able to distinguish abnormal frontal images from the CheXpert dataset, with a mean AUC of 85.67%, a 2.55% improvement vs. the 0.98% improvement achieved by the YOLO-based counterpart in comparison to a standard baseline classifier. At the same time, the STERN approach requires less than 2/3 of the training parameters, while increasing the inference time per batch by less than 2 ms. Code available via GitHub.
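As a rough illustration of a spatial transformer restricted to translation and non-isotropic scaling, the PyTorch sketch below shows one possible module of this kind; the localization-network layers and sizes are assumptions, not the published STERN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of an attention-style spatial transformer that, like STERN,
# restricts the warp to translation plus non-isotropic scaling before a classifier.
# Layer sizes are illustrative assumptions, not the authors' architecture.

class ScaleTranslateSTN(nn.Module):
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(              # tiny localization network
            nn.Conv2d(1, 8, 7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 4),           # predicts (sx, sy, tx, ty)
        )
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data = torch.tensor([1.0, 1.0, 0.0, 0.0])  # identity start

    def forward(self, x):
        sx, sy, tx, ty = self.loc(x).unbind(dim=1)
        zeros = torch.zeros_like(sx)
        theta = torch.stack([
            torch.stack([sx, zeros, tx], dim=1),   # no rotation or shear terms
            torch.stack([zeros, sy, ty], dim=1),
        ], dim=1)                                  # (N, 2, 3) affine matrices
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

cropped = ScaleTranslateSTN()(torch.randn(2, 1, 224, 224))  # feeds the classifier
print(cropped.shape)
```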
Collapse
Affiliation(s)
- Joana Rocha
- INESC TEC and Faculty of Engineering, University of Porto, R. Dr. Roberto Frias s/n, 4200-465, Porto, Portugal.
| | - Sofia Cardoso Pereira
- INESC TEC and Faculty of Engineering, University of Porto, R. Dr. Roberto Frias s/n, 4200-465, Porto, Portugal
| | - João Pedrosa
- INESC TEC and Faculty of Engineering, University of Porto, R. Dr. Roberto Frias s/n, 4200-465, Porto, Portugal
| | - Aurélio Campilho
- INESC TEC and Faculty of Engineering, University of Porto, R. Dr. Roberto Frias s/n, 4200-465, Porto, Portugal
| | - Ana Maria Mendonça
- INESC TEC and Faculty of Engineering, University of Porto, R. Dr. Roberto Frias s/n, 4200-465, Porto, Portugal
| |
Collapse
|
24
|
Siracusano G, La Corte A, Nucera AG, Gaeta M, Chiappini M, Finocchio G. Effective processing pipeline PACE 2.0 for enhancing chest x-ray contrast and diagnostic interpretability. Sci Rep 2023; 13:22471. [PMID: 38110512 PMCID: PMC10728198 DOI: 10.1038/s41598-023-49534-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Accepted: 12/09/2023] [Indexed: 12/20/2023] Open
Abstract
Preprocessing is an essential task for the correct analysis of digital medical images. In particular, X-ray images may contain artifacts, low contrast, diffraction effects, or intensity inhomogeneities. We previously developed a procedure named PACE that improves chest X-ray (CXR) images and supports the clinical evaluation of pneumonia caused by COVID-19. During clinical benchmarking of that tool, certain conditions were found to reduce detail over large bright regions (as in ground-glass opacities and in pleural effusions in bedridden patients), resulting in oversaturated areas. Here, we have significantly improved the overall performance of the original approach, including in those specific cases, by developing PACE 2.0. It combines 2D image decomposition, non-local means denoising, gamma correction, and recursive algorithms to improve image quality. The tool was evaluated using the contrast improvement index (CII), information entropy (ENT), effective measure of enhancement (EME), and BRISQUE, with average improvements of 35% in CII, 7.5% in ENT, 95.6% in EME, and 13% in BRISQUE relative to the original radiographs. Additionally, the enhanced images were fed to a pre-trained DenseNet-121 model for transfer learning, increasing classification accuracy from 80% to 94% and recall from 89% to 97%. These improvements could enhance the interpretability of lesion detection in CXRs. PACE 2.0 has the potential to become a valuable tool for clinical decision support and could help healthcare professionals detect pneumonia more accurately.
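Two of the named ingredients, non-local means denoising and gamma correction, can be sketched with scikit-image as below; the parameter values are assumed for illustration, and the published pipeline's 2D decomposition and recursive refinement are not reproduced.

```python
import numpy as np
from skimage import exposure
from skimage.restoration import denoise_nl_means, estimate_sigma

# A rough sketch of two PACE 2.0 ingredients named above: non-local means denoising
# followed by gamma correction. Parameter values (patch sizes, gamma) are assumed
# for illustration; the published pipeline also includes 2D image decomposition
# and recursive refinement not reproduced here.

def enhance_cxr(img: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    img = img.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)    # normalise to [0, 1]
    sigma = estimate_sigma(img)                                 # noise estimate
    img = denoise_nl_means(img, h=1.15 * sigma, sigma=sigma,
                           patch_size=5, patch_distance=6, fast_mode=True)
    return exposure.adjust_gamma(img, gamma)                    # brighten dark lung fields

enhanced = enhance_cxr(np.random.rand(256, 256))  # stand-in for a chest radiograph
print(enhanced.shape, float(enhanced.min()), float(enhanced.max()))
```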
Collapse
Affiliation(s)
- Giulio Siracusano
- Department of Electric, Electronic and Computer Engineering, University of Catania, Viale Andrea Doria 6, 95125, Catania, Italy.
| | - Aurelio La Corte
- Department of Electric, Electronic and Computer Engineering, University of Catania, Viale Andrea Doria 6, 95125, Catania, Italy
| | - Annamaria Giuseppina Nucera
- Unit of Radiology, Department of Advanced Diagnostic-Therapeutic Technologies, "Bianchi-Melacrino-Morelli" Hospital, Reggio Calabria, Via Giuseppe Melacrino, 21, 89124, Reggio Calabria, Italy
| | - Michele Gaeta
- Department of Biomedical Sciences, Dental and of Morphological and Functional Images, University of Messina, Via Consolare Valeria 1, 98125, Messina, Italy
| | - Massimo Chiappini
- Istituto Nazionale di Geofisica e Vulcanologia (INGV), Via di Vigna Murata 605, 00143, Rome, Italy.
- Maris Scarl, Via Vigna Murata 606, 00143, Rome, Italy.
| | - Giovanni Finocchio
- Istituto Nazionale di Geofisica e Vulcanologia (INGV), Via di Vigna Murata 605, 00143, Rome, Italy.
- Department of Mathematical and Computer Sciences, Physical Sciences and Earth Sciences, University of Messina, V.le F. Stagno D'Alcontres 31, 98166, Messina, Italy.
| |
Collapse
|
25
|
Bennani S, Regnard NE, Ventre J, Lassalle L, Nguyen T, Ducarouge A, Dargent L, Guillo E, Gouhier E, Zaimi SH, Canniff E, Malandrin C, Khafagy P, Koulakian H, Revel MP, Chassagnon G. Using AI to Improve Radiologist Performance in Detection of Abnormalities on Chest Radiographs. Radiology 2023; 309:e230860. [PMID: 38085079 DOI: 10.1148/radiol.230860] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Background Chest radiography remains the most common radiologic examination, and interpretation of its results can be difficult. Purpose To explore the potential benefit of artificial intelligence (AI) assistance in the detection of thoracic abnormalities on chest radiographs by evaluating the performance of radiologists with different levels of expertise, with and without AI assistance. Materials and Methods Patients who underwent both chest radiography and thoracic CT within 72 hours between January 2010 and December 2020 in a French public hospital were screened retrospectively. Radiographs were randomly included until reaching 500 radiographs, with about 50% of radiographs having abnormal findings. A senior thoracic radiologist annotated the radiographs for five abnormalities (pneumothorax, pleural effusion, consolidation, mediastinal and hilar mass, lung nodule) based on the corresponding CT results (ground truth). A total of 12 readers (four thoracic radiologists, four general radiologists, four radiology residents) read half the radiographs without AI and half the radiographs with AI (ChestView; Gleamer). Changes in sensitivity and specificity were measured using paired t tests. Results The study included 500 patients (mean age, 54 years ± 19 [SD]; 261 female, 239 male), with 522 abnormalities visible on 241 radiographs. On average, for all readers, AI use resulted in an absolute increase in sensitivity of 26% (95% CI: 20, 32), 14% (95% CI: 11, 17), 12% (95% CI: 10, 14), 8.5% (95% CI: 6, 11), and 5.9% (95% CI: 4, 8) for pneumothorax, consolidation, nodule, pleural effusion, and mediastinal and hilar mass, respectively (P < .001). Specificity increased with AI assistance (3.9% [95% CI: 3.2, 4.6], 3.7% [95% CI: 3, 4.4], 2.9% [95% CI: 2.3, 3.5], and 2.1% [95% CI: 1.6, 2.6] for pleural effusion, mediastinal and hilar mass, consolidation, and nodule, respectively), except in the diagnosis of pneumothorax (-0.2%; 95% CI: -0.36, -0.04; P = .01). The mean reading time was 81 seconds without AI versus 56 seconds with AI (31% decrease, P < .001). Conclusion AI-assisted chest radiography interpretation resulted in absolute increases in sensitivity for all radiologists of various levels of expertise and reduced the reading times; specificity increased with AI, except in the diagnosis of pneumothorax. © RSNA, 2023 Supplemental material is available for this article.
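The paired t-test comparison of per-reader sensitivity with and without AI can be illustrated as follows; the twelve reader values are hypothetical placeholders, not the study's measurements.

```python
from scipy import stats

# Sketch of the paired comparison used above: per-reader sensitivity without vs.
# with AI assistance, compared with a paired t test. The twelve values below are
# hypothetical placeholders, not the study's measurements.

sens_without_ai = [0.55, 0.61, 0.58, 0.63, 0.49, 0.52, 0.67, 0.70, 0.44, 0.48, 0.59, 0.62]
sens_with_ai    = [0.72, 0.74, 0.71, 0.76, 0.68, 0.70, 0.78, 0.80, 0.65, 0.69, 0.73, 0.75]

t_stat, p_value = stats.ttest_rel(sens_with_ai, sens_without_ai)
mean_gain = sum(w - wo for w, wo in zip(sens_with_ai, sens_without_ai)) / len(sens_with_ai)
print(f"mean sensitivity gain = {mean_gain:.1%}, paired t = {t_stat:.2f}, P = {p_value:.4f}")
```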
Collapse
Affiliation(s)
- Souhail Bennani
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Nor-Eddine Regnard
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Jeanne Ventre
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Louis Lassalle
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Toan Nguyen
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Alexis Ducarouge
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Lucas Dargent
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Enora Guillo
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Elodie Gouhier
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Sophie-Hélène Zaimi
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Emma Canniff
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Cécile Malandrin
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Philippe Khafagy
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Hasmik Koulakian
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Marie-Pierre Revel
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Guillaume Chassagnon
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| |
Collapse
|
26
|
Bhat S, Mansoor A, Georgescu B, Panambur AB, Ghesu FC, Islam S, Packhäuser K, Rodríguez-Salas D, Grbic S, Maier A. AUCReshaping: improved sensitivity at high-specificity. Sci Rep 2023; 13:21097. [PMID: 38036602 PMCID: PMC10689839 DOI: 10.1038/s41598-023-48482-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2023] [Accepted: 11/27/2023] [Indexed: 12/02/2023] Open
Abstract
The evaluation of deep-learning (DL) systems typically relies on the Area under the Receiver-Operating-Curve (AU-ROC) as a performance metric. However, AU-ROC, in its holistic form, does not sufficiently consider performance within specific ranges of sensitivity and specificity, which are critical for the intended operational context of the system. Consequently, two systems with identical AU-ROC values can exhibit significantly divergent real-world performance. This issue is particularly pronounced in the context of anomaly detection tasks, a commonly employed application of DL systems across various research domains, including medical imaging, industrial automation, manufacturing, cyber security, fraud detection, and drug research, among others. The challenge arises from the heavy class imbalance in training datasets, with the abnormality class often incurring a considerably higher misclassification cost compared to the normal class. Traditional DL systems address this by adjusting the weighting of the cost function or optimizing for specific points along the ROC curve. While these approaches yield reasonable results in many cases, they do not actively seek to maximize performance for the desired operating point. In this study, we introduce a novel technique known as AUCReshaping, designed to reshape the ROC curve exclusively within the specified sensitivity and specificity range, by optimizing sensitivity at a predetermined specificity level. This reshaping is achieved through an adaptive and iterative boosting mechanism that allows the network to focus on pertinent samples during the learning process. We primarily investigated the impact of AUCReshaping in the context of abnormality detection tasks, specifically in Chest X-Ray (CXR) analysis, followed by breast mammogram and credit card fraud detection tasks. The results reveal a substantial improvement, ranging from 2 to 40%, in sensitivity at high-specificity levels for binary classification tasks.
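A loose sketch of the boosting idea, re-weighting positives that are still missed at a specificity-matched threshold so the next epoch focuses on them, is shown below; the weighting scheme and data are assumptions, not the published AUCReshaping procedure.

```python
import numpy as np

# A loose sketch of the boosting idea behind AUCReshaping: find the score threshold
# that meets a target specificity, then up-weight the positives still missed at that
# operating point so the next training epoch focuses on them. The weighting scheme
# and numbers are assumptions, not the published method.

def reweight_for_high_specificity(scores, labels, weights, target_spec=0.95, boost=2.0):
    neg_scores = np.sort(scores[labels == 0])
    threshold = neg_scores[int(target_spec * len(neg_scores))]   # spec-matched cut-off
    missed_pos = (labels == 1) & (scores < threshold)            # positives below cut-off
    new_weights = weights.copy()
    new_weights[missed_pos] *= boost                             # emphasise hard positives
    return new_weights / new_weights.mean(), threshold

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.3, 0.15, 900), rng.normal(0.7, 0.2, 100)]).clip(0, 1)
labels = np.concatenate([np.zeros(900, int), np.ones(100, int)])
weights, thr = reweight_for_high_specificity(scores, labels, np.ones_like(scores))
print(f"threshold at 95% specificity = {thr:.2f}, max sample weight = {weights.max():.2f}")
```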
Collapse
Affiliation(s)
- Sheethal Bhat
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany.
- Digital Technology and Innovation, Siemens Healthineers, Erlangen, Germany.
| | - Awais Mansoor
- Digital Technology and Innovation, Siemens Medical Solutions, Princeton, NJ, 08540, USA
| | - Bogdan Georgescu
- Digital Technology and Innovation, Siemens Medical Solutions, Princeton, NJ, 08540, USA
| | - Adarsh B Panambur
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Digital Technology and Innovation, Siemens Healthineers, Erlangen, Germany
| | - Florin C Ghesu
- Digital Technology and Innovation, Siemens Medical Solutions, Princeton, NJ, 08540, USA
| | - Saahil Islam
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Digital Technology and Innovation, Siemens Healthineers, Erlangen, Germany
| | - Kai Packhäuser
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
| | - Dalia Rodríguez-Salas
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
| | - Sasa Grbic
- Digital Technology and Innovation, Siemens Medical Solutions, Princeton, NJ, 08540, USA
| | - Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
| |
Collapse
|
27
|
Farhat F, Sohail SS, Alam MT, Ubaid S, Shakil, Ashhad M, Madsen DØ. COVID-19 and beyond: leveraging artificial intelligence for enhanced outbreak control. Front Artif Intell 2023; 6:1266560. [PMID: 38028660 PMCID: PMC10663297 DOI: 10.3389/frai.2023.1266560] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Accepted: 10/02/2023] [Indexed: 12/01/2023] Open
Abstract
COVID-19 has brought significant changes to our political, social, and technological landscape. This paper explores the emergence and global spread of the disease and focuses on the role of Artificial Intelligence (AI) in containing its transmission. To the best of our knowledge, there has been no scientific presentation of the early pictorial representation of the disease's spread. Additionally, we outline various domains where AI has made a significant impact during the pandemic. Our methodology involves searching relevant articles on COVID-19 and AI in leading databases such as PubMed and Scopus to identify the ways AI has addressed pandemic-related challenges and its potential for further assistance. While research suggests that AI has not fully realized its potential against COVID-19, likely due to data quality and diversity limitations, we review and identify key areas where AI has been crucial in preparing the fight against any sudden outbreak of the pandemic. We also propose ways to maximize the utilization of AI's capabilities in this regard.
Collapse
Affiliation(s)
- Faiza Farhat
- Department of Zoology, Aligarh Muslim University, Aligarh, India
| | | | - Mohammed Talha Alam
- Department of Computer Science and Engineering, Jamia Hamdard, New Delhi, India
| | - Syed Ubaid
- Faculty of Electronic and Information Technology, Warsaw University of Technology, Warsaw, Poland
| | - Shakil
- Faculty of Electronic and Information Technology, Warsaw University of Technology, Warsaw, Poland
| | - Mohd Ashhad
- Department of Computer Science and Engineering, Jamia Hamdard, New Delhi, India
| | - Dag Øivind Madsen
- USN School of Business, University of South-Eastern Norway, Hønefoss, Norway
| |
Collapse
|
28
|
Glocker B, Jones C, Roschewitz M, Winzeck S. Risk of Bias in Chest Radiography Deep Learning Foundation Models. Radiol Artif Intell 2023; 5:e230060. [PMID: 38074789 PMCID: PMC10698597 DOI: 10.1148/ryai.230060] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Revised: 08/07/2023] [Accepted: 08/24/2023] [Indexed: 03/15/2024]
Abstract
PURPOSE To analyze a recently published chest radiography foundation model for the presence of biases that could lead to subgroup performance disparities across biologic sex and race. MATERIALS AND METHODS This Health Insurance Portability and Accountability Act-compliant retrospective study used 127 118 chest radiographs from 42 884 patients (mean age, 63 years ± 17 [SD]; 23 623 male, 19 261 female) from the CheXpert dataset that were collected between October 2002 and July 2017. To determine the presence of bias in features generated by a chest radiography foundation model and baseline deep learning model, dimensionality reduction methods together with two-sample Kolmogorov-Smirnov tests were used to detect distribution shifts across sex and race. A comprehensive disease detection performance analysis was then performed to associate any biases in the features to specific disparities in classification performance across patient subgroups. RESULTS Ten of 12 pairwise comparisons across biologic sex and race showed statistically significant differences in the studied foundation model, compared with four significant tests in the baseline model. Significant differences were found between male and female (P < .001) and Asian and Black (P < .001) patients in the feature projections that primarily capture disease. Compared with average model performance across all subgroups, classification performance on the "no finding" label decreased between 6.8% and 7.8% for female patients, and performance in detecting "pleural effusion" decreased between 10.7% and 11.6% for Black patients. CONCLUSION The studied chest radiography foundation model demonstrated racial and sex-related bias, which led to disparate performance across patient subgroups; thus, this model may be unsafe for clinical applications.Keywords: Conventional Radiography, Computer Application-Detection/Diagnosis, Chest Radiography, Bias, Foundation Models Supplemental material is available for this article. Published under a CC BY 4.0 license.See also commentary by Czum and Parr in this issue.
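The subgroup distribution-shift check can be sketched as below: a two-sample Kolmogorov-Smirnov test applied to a feature projection split by patient subgroup; the synthetic values stand in for real foundation-model embeddings.

```python
import numpy as np
from scipy.stats import ks_2samp

# Sketch of the subgroup-bias check described above: compare the distribution of a
# (dimensionality-reduced) feature projection between two patient subgroups with a
# two-sample Kolmogorov-Smirnov test. The synthetic features stand in for real
# foundation-model embeddings.

rng = np.random.default_rng(42)
projection_group_a = rng.normal(loc=0.00, scale=1.0, size=5000)   # e.g. one subgroup
projection_group_b = rng.normal(loc=0.15, scale=1.0, size=5000)   # e.g. another subgroup

stat, p_value = ks_2samp(projection_group_a, projection_group_b)
print(f"KS statistic = {stat:.3f}, P = {p_value:.2e}")
if p_value < 0.05:
    print("Feature distributions differ between subgroups (potential encoded bias).")
```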
Collapse
Affiliation(s)
- Ben Glocker
- From the Department of Computing, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom
| | - Charles Jones
- From the Department of Computing, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom
| | - Mélanie Roschewitz
- From the Department of Computing, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom
| | - Stefan Winzeck
- From the Department of Computing, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom
| |
Collapse
|
29
|
Ueda Y, Morishita J. Patient Identification Based on Deep Metric Learning for Preventing Human Errors in Follow-up X-Ray Examinations. J Digit Imaging 2023; 36:1941-1953. [PMID: 37308675 PMCID: PMC10501972 DOI: 10.1007/s10278-023-00850-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Revised: 05/08/2023] [Accepted: 05/10/2023] [Indexed: 06/14/2023] Open
Abstract
Biological fingerprints extracted from clinical images can be used for patient identity verification to determine misfiled clinical images in picture archiving and communication systems. However, such methods have not been incorporated into clinical use, and their performance can degrade with variability in the clinical images. Deep learning can be used to improve the performance of these methods. A novel method is proposed to automatically identify individuals among examined patients using posteroanterior (PA) and anteroposterior (AP) chest X-ray images. The proposed method uses deep metric learning based on a deep convolutional neural network (DCNN) to overcome the extreme classification requirements for patient validation and identification. It was trained on the NIH chest X-ray dataset (ChestX-ray8) in three steps: preprocessing, DCNN feature extraction with an EfficientNetV2-S backbone, and classification with deep metric learning. The proposed method was evaluated using two public datasets and two clinical chest X-ray image datasets containing data from patients undergoing screening and hospital care. A 1280-dimensional feature extractor pretrained for 300 epochs performed the best with an area under the receiver operating characteristic curve of 0.9894, an equal error rate of 0.0269, and a top-1 accuracy of 0.839 on the PadChest dataset containing both PA and AP view positions. The findings of this study provide considerable insights into the development of automated patient identification to reduce the possibility of medical malpractice due to human errors.
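A minimal sketch of the verification step in such a system is shown below: embeddings are L2-normalised and two radiographs are attributed to the same patient when their cosine similarity exceeds a tuned threshold; the vectors and threshold are illustrative, not outputs of the authors' trained model.

```python
import numpy as np

# Minimal sketch of patient verification with metric-learning embeddings: features
# are L2-normalised and two radiographs are declared the same patient when their
# cosine similarity exceeds a threshold tuned (e.g. at the equal error rate) on a
# validation set. The vectors and threshold here are illustrative assumptions.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(a @ b)

def same_patient(embedding_new, embedding_prior, threshold=0.62):
    return cosine_similarity(embedding_new, embedding_prior) >= threshold

rng = np.random.default_rng(1)
prior_visit = rng.normal(size=1280)                            # 1280-D feature, as in the study
follow_up = prior_visit + rng.normal(scale=0.3, size=1280)     # same patient, new exam
print(same_patient(follow_up, prior_visit))
```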
Collapse
Affiliation(s)
- Yasuyuki Ueda
- Department of Medical Physics and Engineering, Area of Medical Imaging Technology and Science, Graduate School of Medicine, Division of Health Sciences, Osaka University, Osaka, Japan.
| | - Junji Morishita
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan
| |
Collapse
|
30
|
Li H, Wu Y, Hu H, Lu H, Huang Q, Wan S. Interpretable thoracic pathologic prediction via learning group-disentangled representation. Methods 2023; 218:110-117. [PMID: 37543302 DOI: 10.1016/j.ymeth.2023.08.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2023] [Revised: 05/21/2023] [Accepted: 08/02/2023] [Indexed: 08/07/2023] Open
Abstract
Deep learning has brought significant progress to medical image analysis. However, its lack of interpretability carries a high risk of wrong diagnoses when limited clinical knowledge is embedded. In other words, we believe it is crucial for humans to be able to interpret how deep learning works in medical analysis, so that knowledge constraints can be added appropriately to correct biased results. With this purpose, we propose the Representation Group-Disentangling Network (RGD-Net) to explain the processes of feature extraction and decision making inside a deep learning framework: the feature space of input X-ray images is completely disentangled into independent feature groups, and each group contributes to the diagnosis of a specific disease. Specifically, we first state the problem definition for interpretable prediction with an auto-encoder structure. Then, group-disentangled representations are extracted from input X-ray images with the proposed Group-Disentangle Module, which constructs a semantic latent space by enforcing semantic consistency of attributes. Afterwards, adversarial constraints on the mapping from features to diseases are proposed to prevent model collapse during training. Finally, a novel local-tuning medical application is proposed based on RGD-Net, which can aid clinicians in reaching reasonable diagnoses. In extensive experiments on public datasets, RGD-Net was superior to comparative methods by leveraging potential factors contributing to different diseases. We believe our work can bring interpretability to the discovery of inherent patterns by deep learning in medical image analysis.
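The group-disentangling idea can be sketched schematically as below: the latent vector is split into one sub-vector per disease, and each sub-vector alone drives that disease's prediction head; layer sizes, the group count, and the omission of the semantic-consistency and adversarial losses are simplifications of the published RGD-Net.

```python
import torch
import torch.nn as nn

# Schematic sketch of the group-disentangling idea: the encoder's latent vector is
# split into one sub-vector per disease, and each sub-vector alone drives that
# disease's prediction. Layer sizes, group count, and losses are assumptions; the
# published RGD-Net adds semantic-consistency and adversarial constraints omitted here.

class GroupDisentangledAE(nn.Module):
    def __init__(self, n_groups=3, group_dim=16):
        super().__init__()
        latent = n_groups * group_dim
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU(),
                                     nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                     nn.Linear(256, 64 * 64))
        self.heads = nn.ModuleList(nn.Linear(group_dim, 1) for _ in range(n_groups))
        self.n_groups, self.group_dim = n_groups, group_dim

    def forward(self, x):
        z = self.encoder(x)
        recon = self.decoder(z).view_as(x)
        groups = z.view(-1, self.n_groups, self.group_dim)        # one group per disease
        logits = torch.cat([head(groups[:, i]) for i, head in enumerate(self.heads)], dim=1)
        return recon, logits

model = GroupDisentangledAE()
recon, logits = model(torch.randn(4, 1, 64, 64))
print(recon.shape, logits.shape)   # (4, 1, 64, 64), (4, 3)
```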
Collapse
Affiliation(s)
- Hao Li
- Key Laboratory of Water Big Data Technology of Ministry of Water Resources, Hohai University, Nanjing 210093, China; College of Computer and Information, Hohai University, Nanjing 210093, China.
| | - Yirui Wu
- Key Laboratory of Water Big Data Technology of Ministry of Water Resources, Hohai University, Nanjing 210093, China; College of Computer and Information, Hohai University, Nanjing 210093, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130015, China.
| | - Hexuan Hu
- Key Laboratory of Water Big Data Technology of Ministry of Water Resources, Hohai University, Nanjing 210093, China; College of Computer and Information, Hohai University, Nanjing 210093, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130015, China.
| | - Hu Lu
- School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China.
| | - Qian Huang
- Key Laboratory of Water Big Data Technology of Ministry of Water Resources, Hohai University, Nanjing 210093, China; College of Computer and Information, Hohai University, Nanjing 210093, China.
| | - Shaohua Wan
- Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen 518110, China.
| |
Collapse
|
31
|
Mukherjee P, Hou B, Lanfredi RB, Summers RM. Feasibility of Using the Privacy-preserving Large Language Model Vicuna for Labeling Radiology Reports. Radiology 2023; 309:e231147. [PMID: 37815442 PMCID: PMC10623189 DOI: 10.1148/radiol.231147] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2023] [Revised: 08/15/2023] [Accepted: 08/16/2023] [Indexed: 10/11/2023]
Abstract
Background Large language models (LLMs) such as ChatGPT, though proficient in many text-based tasks, are not suitable for use with radiology reports due to patient privacy constraints. Purpose To test the feasibility of using an alternative LLM (Vicuna-13B) that can be run locally for labeling radiography reports. Materials and Methods Chest radiography reports from the MIMIC-CXR and National Institutes of Health (NIH) data sets were included in this retrospective study. Reports were examined for 13 findings. Outputs reporting the presence or absence of the 13 findings were generated by Vicuna by using a single-step or multistep prompting strategy (prompts 1 and 2, respectively). Agreements between Vicuna outputs and CheXpert and CheXbert labelers were assessed using Fleiss κ. Agreement between Vicuna outputs from three runs under a hyperparameter setting that introduced some randomness (temperature, 0.7) was also assessed. The performance of Vicuna and the labelers was assessed in a subset of 100 NIH reports annotated by a radiologist with use of area under the receiver operating characteristic curve (AUC). Results A total of 3269 reports from the MIMIC-CXR data set (median patient age, 68 years [IQR, 59-79 years]; 161 male patients) and 25 596 reports from the NIH data set (median patient age, 47 years [IQR, 32-58 years]; 1557 male patients) were included. Vicuna outputs with prompt 2 showed, on average, moderate to substantial agreement with the labelers on the MIMIC-CXR (κ median, 0.57 [IQR, 0.45-0.66] with CheXpert and 0.64 [IQR, 0.45-0.68] with CheXbert) and NIH (κ median, 0.52 [IQR, 0.41-0.65] with CheXpert and 0.55 [IQR, 0.41-0.74] with CheXbert) data sets, respectively. Vicuna with prompt 2 performed at par (median AUC, 0.84 [IQR, 0.74-0.93]) with both labelers on nine of 11 findings. Conclusion In this proof-of-concept study, outputs of the LLM Vicuna reporting the presence or absence of 13 findings on chest radiography reports showed moderate to substantial agreement with existing labelers. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Cai in this issue.
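The agreement analysis can be illustrated with a short sketch using Fleiss kappa from statsmodels (an assumed tooling choice); the label vectors are synthetic placeholders rather than actual Vicuna or labeler outputs.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Sketch of the agreement analysis named above: per-report labels for one finding
# from two raters (LLM output vs. an existing labeler) are aggregated into a
# subjects-by-categories count table and scored with Fleiss kappa. The label
# vectors below are synthetic placeholders.

rng = np.random.default_rng(7)
labeler = rng.integers(0, 2, size=500)                          # 0 = absent, 1 = present
llm = np.where(rng.random(500) < 0.85, labeler, 1 - labeler)    # ~85% raw agreement

ratings = np.column_stack([labeler, llm])    # one row per report, one column per rater
table, _ = aggregate_raters(ratings)         # counts per category per report
print(f"Fleiss kappa = {fleiss_kappa(table):.2f}")
```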
Collapse
Affiliation(s)
- Pritam Mukherjee
- From the Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bldg 10, Room 1C224D, 10 Center Dr, Bethesda, MD 20892-1182
| | - Benjamin Hou
- From the Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bldg 10, Room 1C224D, 10 Center Dr, Bethesda, MD 20892-1182
| | - Ricardo B. Lanfredi
- From the Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bldg 10, Room 1C224D, 10 Center Dr, Bethesda, MD 20892-1182
| | - Ronald M. Summers
- From the Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bldg 10, Room 1C224D, 10 Center Dr, Bethesda, MD 20892-1182
| |
Collapse
|
32
|
Kim MJ, Kim SH, Kim SM, Nam JH, Hwang YB, Lim YJ. The Advent of Domain Adaptation into Artificial Intelligence for Gastrointestinal Endoscopy and Medical Imaging. Diagnostics (Basel) 2023; 13:3023. [PMID: 37835766 PMCID: PMC10572560 DOI: 10.3390/diagnostics13193023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2023] [Revised: 09/01/2023] [Accepted: 09/12/2023] [Indexed: 10/15/2023] Open
Abstract
Artificial intelligence (AI) is a subfield of computer science that aims to implement computer systems that perform tasks that generally require human learning, reasoning, and perceptual abilities. AI is widely used in the medical field. The interpretation of medical images requires considerable effort, time, and skill. AI-aided interpretations, such as automated abnormal lesion detection and image classification, are promising areas of AI. However, when images with different characteristics are acquired, depending on the manufacturer and imaging environment, a so-called domain shift problem occurs in which the developed AI has poor versatility. Domain adaptation is used to address this problem. Domain adaptation is a technique that converts images so that they are suitable for other domains. It has also shown promise in reducing the differences in appearance among images collected from different devices. Domain adaptation is expected to improve the reading accuracy of AI for heterogeneous image distributions in gastrointestinal (GI) endoscopy and medical image analyses. In this paper, we review the history and basic characteristics of domain shift and domain adaptation. We also address their use in gastrointestinal endoscopy and the medical field more generally through published examples, perspectives, and future directions.
Collapse
Affiliation(s)
- Min Ji Kim
- Division of Gastroenterology, Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea; (M.J.K.); (S.H.K.); (J.H.N.)
| | - Sang Hoon Kim
- Division of Gastroenterology, Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea; (M.J.K.); (S.H.K.); (J.H.N.)
| | - Suk Min Kim
- Department of Intelligent Systems and Robotics, College of Electrical & Computer Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea; (S.M.K.); (Y.B.H.)
| | - Ji Hyung Nam
- Division of Gastroenterology, Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea; (M.J.K.); (S.H.K.); (J.H.N.)
| | - Young Bae Hwang
- Department of Intelligent Systems and Robotics, College of Electrical & Computer Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea; (S.M.K.); (Y.B.H.)
| | - Yun Jeong Lim
- Division of Gastroenterology, Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea; (M.J.K.); (S.H.K.); (J.H.N.)
| |
Collapse
|
33
|
du Plessis T, Ramkilawon G, Rae WID, Botha T, Martinson NA, Dixon SAP, Kyme A, Sathekge MM. Introducing a secondary segmentation to construct a radiomics model for pulmonary tuberculosis cavities. LA RADIOLOGIA MEDICA 2023; 128:1093-1102. [PMID: 37474665 PMCID: PMC10474191 DOI: 10.1007/s11547-023-01681-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/01/2023] [Accepted: 07/07/2023] [Indexed: 07/22/2023]
Abstract
PURPOSE Accurate segmentation (separating diseased portions of the lung from normal appearing lung) is a challenge in radiomic studies of non-neoplastic diseases, such as pulmonary tuberculosis (PTB). In this study, we developed a segmentation method, applicable to chest X-rays (CXR), that can eliminate the need for precise disease delineation, and that is effective for constructing radiomic models for automatic PTB cavity classification. METHODS This retrospective study used a dataset of 266 posteroanterior CXR of patients diagnosed with laboratory confirmed PTB. The lungs were segmented using a U-net-based in-house automatic segmentation model. A secondary segmentation was developed using a sliding window, superimposed on the primary lung segmentation. Pyradiomics was used for feature extraction from every window; this increased the dimensionality of the data but allowed us to accurately capture the spread of the features across the lung. Two separate measures (standard deviation and variance) were used to consolidate the features. Pearson's correlation analysis (with a 0.8 cut-off value) was then applied for dimensionality reduction, followed by the construction of Random Forest radiomic models. RESULTS Two almost identical radiomic signatures consisting of 10 texture features each (9 were the same plus 1 other feature) were identified using the two separate consolidation measures. Two well performing random forest models were constructed from these signatures. The standard-deviation model (AUC = 0.9444 (95% CI, 0.8762; 0.9814)) performed marginally better than the variance model (AUC = 0.9288 (95% CI, 0.9046; 0.9843)). CONCLUSION The introduction of the secondary sliding-window segmentation on CXR could eliminate the need for disease delineation in pulmonary radiomic studies. Because the developed radiomic models correctly distinguished cavities from normal CXR, the approach could also improve the accuracy of CXR reporting, which is currently regaining prominence as a high-volume screening tool.
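The secondary sliding-window step can be sketched as below: windows are slid over the lung mask, a per-window feature is computed, and the values are consolidated with the standard deviation or variance; a plain intensity entropy stands in for the PyRadiomics texture features actually used, and all window sizes and data are assumptions.

```python
import numpy as np

# Sketch of the secondary sliding-window segmentation described above: windows slide
# across the lung mask, a feature is computed in each window that overlaps lung, and
# the per-window values are consolidated with the standard deviation (or variance)
# to capture how the feature spreads across the lung. The study used PyRadiomics
# texture features per window; a plain intensity entropy stands in for them here.

def window_entropy(patch: np.ndarray, bins: int = 32) -> float:
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist[hist > 0].sum()
    return float(-(p * np.log2(p)).sum())

def consolidated_feature(image, lung_mask, win=64, stride=32, min_overlap=0.5):
    values = []
    for r in range(0, image.shape[0] - win + 1, stride):
        for c in range(0, image.shape[1] - win + 1, stride):
            if lung_mask[r:r + win, c:c + win].mean() >= min_overlap:
                values.append(window_entropy(image[r:r + win, c:c + win]))
    return float(np.std(values)), float(np.var(values))   # two consolidation measures

rng = np.random.default_rng(3)
cxr = rng.random((512, 512))                               # stand-in for a radiograph
mask = np.zeros((512, 512), bool)
mask[100:420, 60:460] = True                               # crude lung field
print(consolidated_feature(cxr, mask))
```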
Collapse
Affiliation(s)
- Tamarisk du Plessis
- Department of Nuclear Medicine, Faculty of Health Sciences, University of Pretoria, Pretoria, South Africa.
| | - Gopika Ramkilawon
- Department of Statistics, Faculty of Natural and Agricultural Sciences, University of Pretoria, Pretoria, South Africa
| | | | - Tanita Botha
- Department of Statistics, Faculty of Natural and Agricultural Sciences, University of Pretoria, Pretoria, South Africa
| | - Neil Alexander Martinson
- Perinatal HIV Research Unit (PHRU), University of the Witwatersrand, Johannesburg, South Africa
- Johns Hopkins University Centre for TB Research, Baltimore, MD, USA
| | | | - Andre Kyme
- School of Biomedical Engineering, University of Sydney, Sydney, Australia
| | - Mike Michael Sathekge
- Department of Nuclear Medicine, Faculty of Health Sciences, University of Pretoria, Pretoria, South Africa
| |
Collapse
|
34
|
Lind Plesner L, Müller FC, Brejnebøl MW, Laustrup LC, Rasmussen F, Nielsen OW, Boesen M, Brun Andersen M. Commercially Available Chest Radiograph AI Tools for Detecting Airspace Disease, Pneumothorax, and Pleural Effusion. Radiology 2023; 308:e231236. [PMID: 37750768 DOI: 10.1148/radiol.231236] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/27/2023]
Abstract
Background Commercially available artificial intelligence (AI) tools can assist radiologists in interpreting chest radiographs, but their real-life diagnostic accuracy remains unclear. Purpose To evaluate the diagnostic accuracy of four commercially available AI tools for detection of airspace disease, pneumothorax, and pleural effusion on chest radiographs. Materials and Methods This retrospective study included consecutive adult patients who underwent chest radiography at one of four Danish hospitals in January 2020. Two thoracic radiologists (or three, in cases of disagreement) who had access to all previous and future imaging labeled chest radiographs independently for the reference standard. Area under the receiver operating characteristic curve, sensitivity, and specificity were calculated. Sensitivity and specificity were additionally stratified according to the severity of findings, number of findings on chest radiographs, and radiographic projection. The χ2 and McNemar tests were used for comparisons. Results The data set comprised 2040 patients (median age, 72 years [IQR, 58-81 years]; 1033 female), of whom 669 (32.8%) had target findings. The AI tools demonstrated areas under the receiver operating characteristic curve ranging 0.83-0.88 for airspace disease, 0.89-0.97 for pneumothorax, and 0.94-0.97 for pleural effusion. Sensitivities ranged 72%-91% for airspace disease, 63%-90% for pneumothorax, and 62%-95% for pleural effusion. Negative predictive values ranged 92%-100% for all target findings. In airspace disease, pneumothorax, and pleural effusion, specificity was high for chest radiographs with normal or single findings (range, 85%-96%, 99%-100%, and 95%-100%, respectively) and markedly lower for chest radiographs with four or more findings (range, 27%-69%, 96%-99%, 65%-92%, respectively) (P < .001). AI sensitivity was lower for vague airspace disease (range, 33%-61%) and small pneumothorax or pleural effusion (range, 9%-94%) compared with larger findings (range, 81%-100%; P value range, > .99 to < .001). Conclusion Current-generation AI tools showed moderate to high sensitivity for detecting airspace disease, pneumothorax, and pleural effusion on chest radiographs. However, they produced more false-positive findings than radiology reports, and their performance decreased for smaller-sized target findings and when multiple findings were present. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Yanagawa and Tomiyama in this issue.
Collapse
Affiliation(s)
- Louis Lind Plesner
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib, Juuls vej 1 Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital Region of Denmark (L.L.P., F.C.M., M.W.B., M.B., M.B.A.); Departments of Radiology (M.W.B., M.B.) and Cardiology (O.W.N.), Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark; and Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.)
| | - Felix C Müller
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib, Juuls vej 1 Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital Region of Denmark (L.L.P., F.C.M., M.W.B., M.B., M.B.A.); Departments of Radiology (M.W.B., M.B.) and Cardiology (O.W.N.), Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark; and Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.)
| | - Mathias W Brejnebøl
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib, Juuls vej 1 Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital Region of Denmark (L.L.P., F.C.M., M.W.B., M.B., M.B.A.); Departments of Radiology (M.W.B., M.B.) and Cardiology (O.W.N.), Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark; and Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.)
| | - Lene C Laustrup
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib, Juuls vej 1 Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital Region of Denmark (L.L.P., F.C.M., M.W.B., M.B., M.B.A.); Departments of Radiology (M.W.B., M.B.) and Cardiology (O.W.N.), Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark; and Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.)
| | - Finn Rasmussen
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib, Juuls vej 1 Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital Region of Denmark (L.L.P., F.C.M., M.W.B., M.B., M.B.A.); Departments of Radiology (M.W.B., M.B.) and Cardiology (O.W.N.), Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark; and Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.)
| | - Olav W Nielsen
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib, Juuls vej 1 Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital Region of Denmark (L.L.P., F.C.M., M.W.B., M.B., M.B.A.); Departments of Radiology (M.W.B., M.B.) and Cardiology (O.W.N.), Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark; and Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.)
| | - Mikael Boesen
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib, Juuls vej 1 Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital Region of Denmark (L.L.P., F.C.M., M.W.B., M.B., M.B.A.); Departments of Radiology (M.W.B., M.B.) and Cardiology (O.W.N.), Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark; and Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.)
| | - Michael Brun Andersen
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib, Juuls vej 1 Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital Region of Denmark (L.L.P., F.C.M., M.W.B., M.B., M.B.A.); Departments of Radiology (M.W.B., M.B.) and Cardiology (O.W.N.), Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark; and Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.)
| |
Collapse
|
35
|
Jacobs PP, Ehrengut C, Bucher AM, Penzkofer T, Lukas M, Kleesiek J, Denecke T. Challenges in Implementing the Local Node Infrastructure for a National Federated Machine Learning Network in Radiology. Healthcare (Basel) 2023; 11:2377. [PMID: 37685411 PMCID: PMC10487228 DOI: 10.3390/healthcare11172377] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 08/20/2023] [Accepted: 08/22/2023] [Indexed: 09/10/2023] Open
Abstract
Data-driven machine learning in medical research and diagnostics needs large-scale datasets curated by clinical experts. The generation of large datasets can be challenging in terms of resource consumption and time effort, while generalizability and validation of the developed models significantly benefit from variety in data sources. Training algorithms on smaller decentralized datasets through federated learning can reduce effort but requires the implementation of a specific and ambitious infrastructure to share data, algorithms and computing time. Additionally, it offers the opportunity to keep the data locally. Thus, data safety issues can be avoided because patient data do not need to be shared. Machine learning models are trained on local data, and only the model is shared through an established network. In addition to commercial applications, there are also numerous academic and customized implementations of network infrastructures available. The configuration of these networks primarily differs, yet adheres to a standard framework composed of fundamental components. In this technical note, we propose basic infrastructure requirements for data governance, data science workflows, and local node set-up, and report on the advantages and experienced pitfalls in implementing the local infrastructure with the German Radiological Cooperative Network initiative as the use-case example. We show how the infrastructure can be built upon some base components to reflect the needs of a federated learning network and how they can be implemented considering both local and global network requirements. After analyzing the deployment process in different settings and scenarios, we recommend integrating the local node into an existing clinical IT infrastructure. This approach offers benefits in terms of maintenance and deployment effort compared to external integration in a separate environment (e.g., the radiology department). This proposed groundwork can be taken as an exemplary development guideline for future applications of federated learning networks in clinical and scientific environments.
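To illustrate the core idea that only model parameters, not patient data, travel over the network, here is a minimal federated-averaging sketch in Python. It is an illustration under simplifying assumptions (a plain logistic-regression model and simulated nodes), not the initiative's node software.

```python
# Minimal federated-averaging sketch: each site trains locally and only the
# model weights leave the site; the coordinator averages them by sample count.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    # Plain logistic regression via gradient descent on local data only.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def federated_round(w_global, local_datasets):
    updates, sizes = [], []
    for X, y in local_datasets:          # raw data never leaves the node
        updates.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    weights = np.array(sizes, dtype=float) / sum(sizes)
    return sum(wt * u for wt, u in zip(weights, updates))   # new global model

# Hypothetical usage with two simulated nodes:
rng = np.random.default_rng(0)
nodes = [(rng.normal(size=(200, 8)), rng.integers(0, 2, 200)) for _ in range(2)]
w = np.zeros(8)
for _ in range(10):
    w = federated_round(w, nodes)
```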
Collapse
Affiliation(s)
- Paul-Philipp Jacobs
- Department of Diagnostic and Interventional Radiology, University of Leipzig, 04109 Leipzig, Germany
| | - Constantin Ehrengut
- Department of Diagnostic and Interventional Radiology, University of Leipzig, 04109 Leipzig, Germany
| | - Andreas Michael Bucher
- Department of Diagnostic and Interventional Radiology, Johann-Wolfgang-v.-Goethe-Universität, 60629 Frankfurt, Germany
| | - Tobias Penzkofer
- Department of Radiology, Campus Virchow-Klinikum, Charité—Universitätsmedizin Berlin, 10117 Berlin, Germany
| | - Mathias Lukas
- Department of Diagnostic and Interventional Radiology, University of Leipzig, 04109 Leipzig, Germany
| | - Jens Kleesiek
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), 45131 Essen, Germany
- Medical Faculty, University of Duisburg-Essen, 45122 Essen, Germany
| | - Timm Denecke
- Department of Diagnostic and Interventional Radiology, University of Leipzig, 04109 Leipzig, Germany
| |
Collapse
|
36
|
Chen YHJ, Lin CS, Lin C, Tsai DJ, Fang WH, Lee CC, Wang CH, Chen SJ. An AI-Enabled Dynamic Risk Stratification for Emergency Department Patients with ECG and CXR Integration. J Med Syst 2023; 47:81. [PMID: 37523102 DOI: 10.1007/s10916-023-01980-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2023] [Accepted: 07/21/2023] [Indexed: 08/01/2023]
Abstract
Emergency department (ED) triage scale determines the priority of patient care and foretells the prognosis. However, the information retrieved from the initial assessment is limited, hindering the risk identification accuracy of triage. Therefore, we sought to develop a 'dynamic' triage system as secondary screening, using artificial intelligence (AI) techniques to integrate information from initial assessment data and subsequent examinations. This retrospective cohort study included 134,112 ED visits with at least one electrocardiography (ECG) and chest X-ray (CXR) in a medical center from 2012 to 2022. Additionally, an independent community hospital provided 45,614 ED visits as an external validation set. We trained an eXtreme gradient boosting (XGB) model using initial assessment data to predict all-cause mortality in 7 days. Two deep learning models (DLMs) using ECG and CXR were trained to stratify mortality risks. The dynamic triage levels were based on output from the XGB-triage and DLMs from ECG and CXR. During the internal and external validation, the area under the receiver operating characteristic curve (AUC) of the XGB-triage model was >0.866; furthermore, the AUCs of DLMs using ECG and CXR were >0.862 and >0.886, respectively. The dynamic triage scale provided a higher C-index (0.914-0.920 vs. 0.827-0.843) than the original one and demonstrated better predictive ability for 5-year mortality, 30-day ED revisit, and 30-day discharge. The AI-based risk scale provides a more accurate and dynamic stratification of mortality risk in ED patients, particularly in identifying patients who tend to be overlooked due to atypical symptoms.
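A hedged sketch of the general pattern described in the abstract follows: a gradient-boosted model on initial assessment variables, plus a simple fusion with deep-model outputs from ECG and CXR. The column names, the XGBoost hyperparameters, and the averaging fusion rule are assumptions made for illustration; the study derives its dynamic triage levels from these outputs, but the exact rule is not given in the abstract.

```python
# Sketch, assuming a DataFrame `df` with initial triage variables and a
# binary 7-day mortality label, plus externally computed DLM probabilities.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier

def train_triage_model(df: pd.DataFrame) -> XGBClassifier:
    features = ["age", "heart_rate", "sbp", "spo2", "triage_level"]  # assumed names
    model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                          eval_metric="auc")
    model.fit(df[features], df["death_7d"])
    return model

def dynamic_risk(xgb_prob, ecg_dlm_prob, cxr_dlm_prob):
    # Simple average of the three risk estimates (illustrative fusion rule only).
    return np.mean([xgb_prob, ecg_dlm_prob, cxr_dlm_prob], axis=0)
```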
Collapse
Affiliation(s)
| | - Chin-Sheng Lin
- Division of Cardiology, Department of Internal Medicine, Tri-Service General Hospital, National Defense Medical Center Taipei, Taipei, Taiwan
- Medical Technology Education Center, School of Medicine, National Defense Medical Center, Taipei, Taiwan
| | - Chin Lin
- School of Public Health, National Defense Medical Center, Taipei, Taiwan
- Medical Technology Education Center, School of Medicine, National Defense Medical Center, Taipei, Taiwan
- Graduate Institutes of Life Sciences, National Defense Medical Center, Taipei, Taiwan
| | - Dung-Jang Tsai
- Center for Artificial Intelligence and Internet of Things, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Department of Statistics and Information Science, Fu Jen Catholic University, New Taipei City, Taiwan
| | - Wen-Hui Fang
- Center for Artificial Intelligence and Internet of Things, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Department of Family and Community Medicine, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| | - Chia-Cheng Lee
- Medical Informatics Office, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Division of Colorectal Surgery, Department of Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| | - Chih-Hung Wang
- Graduate Institutes of Life Sciences, National Defense Medical Center, Taipei, Taiwan
- Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| | - Sy-Jou Chen
- Department of Emergency Medicine, Tri-Service General Hospital, National Defense Medical Center, No.161, Sec. 6, Minquan E. Rd., Neihu Dist., Taipei City, 11490, Taiwan.
- Graduate Institute of Injury Prevention and Control, College of Public Health and Nutrition, Taipei Medical University, Taipei, Taiwan.
| |
Collapse
|
37
|
Feyisa DW, Ayano YM, Debelee TG, Schwenker F. Weak Localization of Radiographic Manifestations in Pulmonary Tuberculosis from Chest X-ray: A Systematic Review. SENSORS (BASEL, SWITZERLAND) 2023; 23:6781. [PMID: 37571564 PMCID: PMC10422452 DOI: 10.3390/s23156781] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 07/03/2023] [Accepted: 07/14/2023] [Indexed: 08/13/2023]
Abstract
Pulmonary tuberculosis (PTB) is a bacterial infection that affects the lung. PTB remains one of the infectious diseases with the highest global mortalities. Chest radiography is a technique that is often employed in the diagnosis of PTB. Radiologists identify the severity and stage of PTB by inspecting radiographic features in the patient's chest X-ray (CXR). The most common radiographic features seen on CXRs include cavitation, consolidation, masses, pleural effusion, calcification, and nodules. Identifying these CXR features will help physicians in diagnosing a patient. However, identifying these radiographic features for intricate disorders is challenging, and the accuracy depends on the radiologist's experience and level of expertise. Consequently, researchers have proposed deep learning (DL) techniques to detect and mark areas of tuberculosis infection in CXRs. DL models have been proposed in the literature because of their inherent capacity to detect diseases and segment the manifestation regions from medical images. However, fully supervised semantic segmentation requires a large number of pixel-by-pixel labeled images. The annotation of such a large amount of data by trained physicians has some challenges. First, the annotation requires a significant amount of time. Second, hiring trained physicians is expensive. In addition, the subjectivity of medical data poses a difficulty in having standardized annotation. As a result, there is increasing interest in weak localization techniques. Therefore, in this review, we identify methods employed in the weakly supervised segmentation and localization of radiographic manifestations of pulmonary tuberculosis from chest X-rays. First, we identify the most commonly used public chest X-ray datasets for tuberculosis identification. Following that, we discuss the approaches for weakly localizing tuberculosis radiographic manifestations in chest X-rays. The weakly supervised localization of PTB can highlight the region of the chest X-ray image that contributed the most to the DL model's classification output and help pinpoint the diseased area. Finally, we discuss the limitations and challenges of weakly supervised techniques in localizing TB manifestation regions in chest X-ray images.
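Many of the weak-localization methods surveyed in such reviews build on class activation maps (CAMs), which highlight the image region that contributed most to a classifier's output. The following PyTorch sketch shows the basic CAM computation; the tiny network is a placeholder for illustration, not a model from the reviewed literature.

```python
# CAM sketch: weight the last conv feature maps by the classifier weights of
# the target class, ReLU, normalize, and upsample to the input resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCXRNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(32, n_classes)   # applied after global average pooling

    def forward(self, x):
        fmap = self.features(x)                       # (B, 32, H, W)
        pooled = fmap.mean(dim=(2, 3))                # global average pooling
        return self.fc(pooled), fmap

def class_activation_map(model, image, target_class):
    logits, fmap = model(image)                       # image: (1, 1, H, W)
    weights = model.fc.weight[target_class]           # (32,)
    cam = torch.einsum("c,bchw->bhw", weights, fmap)  # weighted sum of feature maps
    cam = F.relu(cam)
    cam = cam / (cam.max() + 1e-8)                    # normalize to [0, 1]
    return F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                         mode="bilinear", align_corners=False).squeeze(1)

# Usage with a random image (for illustration only):
model = TinyCXRNet()
cam = class_activation_map(model, torch.randn(1, 1, 256, 256), target_class=1)
```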
Collapse
Affiliation(s)
- Degaga Wolde Feyisa
- Ethiopian Artificial Intelligence Institute, Addis Ababa P.O. Box 40782, Ethiopia; (D.W.F.); (Y.M.A.); (T.G.D.)
| | - Yehualashet Megersa Ayano
- Ethiopian Artificial Intelligence Institute, Addis Ababa P.O. Box 40782, Ethiopia; (D.W.F.); (Y.M.A.); (T.G.D.)
| | - Taye Girma Debelee
- Ethiopian Artificial Intelligence Institute, Addis Ababa P.O. Box 40782, Ethiopia; (D.W.F.); (Y.M.A.); (T.G.D.)
- Department of Electrical and Computer Engineering, Addis Ababa Science and Technology University, Addis Ababa P.O. Box 120611, Ethiopia
| | - Friedhelm Schwenker
- Institute of Neural Information Processing, Ulm University, 89069 Ulm, Germany
| |
Collapse
|
38
|
Tang CHM, Seah JCY, Ahmad HK, Milne MR, Wardman JB, Buchlak QD, Esmaili N, Lambert JF, Jones CM. Analysis of Line and Tube Detection Performance of a Chest X-ray Deep Learning Model to Evaluate Hidden Stratification. Diagnostics (Basel) 2023; 13:2317. [PMID: 37510062 PMCID: PMC10378683 DOI: 10.3390/diagnostics13142317] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2023] [Revised: 07/05/2023] [Accepted: 07/07/2023] [Indexed: 07/30/2023] Open
Abstract
This retrospective case-control study evaluated the diagnostic performance of a commercially available chest radiography deep convolutional neural network (DCNN) in identifying the presence and position of central venous catheters, enteric tubes, and endotracheal tubes, in addition to a subgroup analysis of different types of lines/tubes. A held-out test dataset of 2568 studies was sourced from community radiology clinics and hospitals in Australia and the USA, and was then ground-truth labelled for the presence, position, and type of line or tube from the consensus of a thoracic specialist radiologist and an intensive care clinician. DCNN model performance for identifying and assessing the positioning of central venous catheters, enteric tubes, and endotracheal tubes over the entire dataset, as well as within each subgroup, was evaluated. The area under the receiver operating characteristic curve (AUC) was assessed. The DCNN algorithm displayed high performance in detecting the presence of lines and tubes in the test dataset with AUCs > 0.99, and good position classification performance over a subpopulation of ground truth positive cases with AUCs of 0.86-0.91. The subgroup analysis showed that model performance was robust across the various subtypes of lines or tubes, although position classification performance of peripherally inserted central catheters was relatively lower. Our findings indicated that the DCNN algorithm performed well in the detection and position classification of lines and tubes, supporting its use as an assistant for clinicians. Further work is required to evaluate performance in rarer scenarios, as well as in less common subgroups.
Collapse
Affiliation(s)
- Cyril H M Tang
- Annalise.ai, Sydney, NSW 2000, Australia
- Intensive Care Unit, Gosford Hospital, Sydney, NSW 2250, Australia
| | - Jarrel C Y Seah
- Annalise.ai, Sydney, NSW 2000, Australia
- Department of Radiology, Alfred Health, Melbourne, VIC 3004, Australia
| | | | | | | | - Quinlan D Buchlak
- Annalise.ai, Sydney, NSW 2000, Australia
- School of Medicine, The University of Notre Dame Australia, Sydney, NSW 2007, Australia
- Department of Neurosurgery, Monash Health, Melbourne, VIC 3168, Australia
| | - Nazanin Esmaili
- School of Medicine, The University of Notre Dame Australia, Sydney, NSW 2007, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia
| | | | - Catherine M Jones
- Annalise.ai, Sydney, NSW 2000, Australia
- I-MED Radiology Network, Brisbane, QLD 4006, Australia
- School of Public and Preventive Health, Monash University, Clayton, VIC 3800, Australia
- Department of Clinical Imaging Science, University of Sydney, Sydney, NSW 2006, Australia
| |
Collapse
|
39
|
Shamji MH, Ollert M, Adcock IM, Bennett O, Favaro A, Sarama R, Riggioni C, Annesi-Maesano I, Custovic A, Fontanella S, Traidl-Hoffmann C, Nadeau K, Cecchi L, Zemelka-Wiacek M, Akdis CA, Jutel M, Agache I. EAACI guidelines on environmental science in allergic diseases and asthma - Leveraging artificial intelligence and machine learning to develop a causality model in exposomics. Allergy 2023; 78:1742-1757. [PMID: 36740916 DOI: 10.1111/all.15667] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Revised: 01/17/2023] [Accepted: 02/01/2023] [Indexed: 02/07/2023]
Abstract
Allergic diseases and asthma are intrinsically linked to the environment we live in and to patterns of exposure. The integrated approach to understanding the effects of exposures on the immune system includes the ongoing collection of large-scale and complex data. This requires sophisticated methods to take full advantage of what these data can offer. Here we discuss the progress and further promise of applying artificial intelligence and machine-learning approaches to help unlock the power of complex environmental data sets toward providing causality models of exposure and intervention. We discuss a range of relevant machine-learning paradigms and models, including the way such models are trained and validated, together with examples of machine learning applied to allergic disease in the context of specific environmental exposures as well as attempts to tie these environmental data streams to the full representative exposome. We also discuss the promise of artificial intelligence in personalized medicine and the methodological approaches to healthcare, with the final aim of improving public health.
Collapse
Affiliation(s)
- Mohamed H Shamji
- National Heart and Lung Institute, Imperial College London, London, UK
- NIHR Imperial Biomedical Research Centre, London, UK
| | - Markus Ollert
- Department of Infection and Immunity, Luxembourg Institute of Health (LIH), Esch-sur-Alzette, Luxembourg
- Department of Dermatology and Allergy Center, Odense Research Center for Anaphylaxis (ORCA), University of Southern Denmark, Odense, Denmark
| | - Ian M Adcock
- National Heart and Lung Institute, Imperial College London, London, UK
- NIHR Imperial Biomedical Research Centre, London, UK
| | | | | | - Roudin Sarama
- National Heart and Lung Institute, Imperial College London, London, UK
- NIHR Imperial Biomedical Research Centre, London, UK
| | - Carmen Riggioni
- Pediatric Allergy and Clinical Immunology Service, Institut de Reserca Sant Joan de Deú, Barcelona, Spain
| | - Isabella Annesi-Maesano
- Research Director and Deputy Director of Institut Desbrest of Epidemiology and Public Health (IDESP), French NIH (INSERM) and University of Montpellier, Montpellier, France
| | - Adnan Custovic
- National Heart and Lung Institute, Imperial College London, London, UK
- NIHR Imperial Biomedical Research Centre, London, UK
| | - Sara Fontanella
- National Heart and Lung Institute, Imperial College London, London, UK
- NIHR Imperial Biomedical Research Centre, London, UK
| | - Claudia Traidl-Hoffmann
- Environmental Medicine Faculty of Medicine University of Augsburg, Augsburg, Germany
- CK-CARE, Christine Kühne Center for Allergy Research and Education, Davos, Switzerland
| | - Kari Nadeau
- Sean N. Parker Center for Allergy and Asthma Research, Stanford University School of Medicine, Stanford, California, USA
| | - Lorenzo Cecchi
- SOS Allergology and Clinical Immunology, USL Toscana Centro, Prato, Italy
| | | | - Cezmi A Akdis
- Swiss Institute of Allergy and Asthma Research (SIAF), University Zurich, Davos, Switzerland
| | - Marek Jutel
- Department of Clinical Immunology, Wroclaw Medical University, Wroclaw, Poland
- ALL-MED Medical Research Institute, Wroclaw, Poland
| | - Ioana Agache
- Faculty of Medicine, Transylvania University, Brasov, Romania
| |
Collapse
|
40
|
Behrendt F, Bengs M, Bhattacharya D, Krüger J, Opfer R, Schlaefer A. A systematic approach to deep learning-based nodule detection in chest radiographs. Sci Rep 2023; 13:10120. [PMID: 37344565 DOI: 10.1038/s41598-023-37270-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 06/19/2023] [Indexed: 06/23/2023] Open
Abstract
Lung cancer is a serious disease responsible for millions of deaths every year. Early stages of lung cancer can manifest as pulmonary nodules. To assist radiologists in reducing the number of overlooked nodules and to increase the detection accuracy in general, automatic detection algorithms have been proposed. Particularly, deep learning methods are promising. However, obtaining clinically relevant results remains challenging. While a variety of approaches have been proposed for general-purpose object detection, these are typically evaluated on benchmark data sets. Achieving competitive performance for specific real-world problems like lung nodule detection typically requires careful analysis of the problem at hand and the selection and tuning of suitable deep learning models. We present a systematic comparison of state-of-the-art object detection algorithms for the task of lung nodule detection. In this regard, we address the critical aspect of class imbalance and demonstrate a data augmentation approach as well as transfer learning to boost performance. We illustrate how this analysis and a combination of multiple architectures result in state-of-the-art performance for lung nodule detection, which is demonstrated by the proposed model winning the detection track of the Node21 competition. The code for our approach is available at https://github.com/FinnBehrendt/node21-submit.
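As a hedged illustration of two ingredients named in the abstract, class re-balancing and data augmentation, the snippet below uses generic PyTorch/torchvision tools. It is not the authors' Node21 pipeline (which is available at the linked repository); the sampling and augmentation choices are assumptions made for illustration.

```python
# Sketch: oversample the rarer class with a weighted sampler and apply mild
# geometric augmentations that keep nodules plausible on a radiograph.
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import transforms

def balanced_loader(dataset, labels, batch_size=16):
    counts = torch.bincount(torch.tensor(labels))
    weights = 1.0 / counts[torch.tensor(labels)]        # rarer class drawn more often
    sampler = WeightedRandomSampler(weights.double(), num_samples=len(labels))
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomAffine(degrees=5, translate=(0.02, 0.02), scale=(0.95, 1.05)),
])
```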
Collapse
Affiliation(s)
- Finn Behrendt
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany.
| | - Marcel Bengs
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany
| | - Debayan Bhattacharya
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany
| | | | | | - Alexander Schlaefer
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany
| |
Collapse
|
41
|
Ghali R, Akhloufi MA. Vision Transformers for Lung Segmentation on CXR Images. SN COMPUTER SCIENCE 2023; 4:414. [PMID: 37252339 PMCID: PMC10206550 DOI: 10.1007/s42979-023-01848-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 04/17/2023] [Indexed: 05/31/2023]
Abstract
Accurate segmentation of the lungs in CXR images is the basis for an automated CXR image analysis system. It helps radiologists in detecting lung areas, subtle signs of disease and improving the diagnosis process for patients. However, precise semantic segmentation of lungs is considered a challenging case due to the presence of the rib cage edges, wide variation of lung shape, and lungs affected by diseases. In this paper, we address the problem of lung segmentation in healthy and unhealthy CXR images. Five models were developed and used in detecting and segmenting lung regions. Two loss functions and three benchmark datasets were employed to evaluate these models. Experimental results showed that the proposed models were able to extract salient global and local features from the input CXR images. The best-performing model achieved an F1 score of 97.47%, outperforming recently published models. They proved their ability to separate lung regions from the rib cage and clavicle edges and segment varying lung shapes depending on age and gender, as well as challenging cases of lungs affected by anomalies such as tuberculosis and the presence of nodules.
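The reported 97.47% figure is an F1 (equivalently Dice) overlap between predicted and ground-truth lung masks. As a small illustration, the function below computes that metric for binary masks; it is a generic sketch, not the authors' evaluation code.

```python
# F1/Dice overlap between two binary masks of the same shape.
import numpy as np

def f1_score_masks(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    return float(2 * tp / (pred.sum() + target.sum() + eps))
```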
Collapse
Affiliation(s)
- Rafik Ghali
- Perception, Robotics, and Intelligent Machines (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1A 3E9 Canada
| | - Moulay A. Akhloufi
- Perception, Robotics, and Intelligent Machines (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1A 3E9 Canada
| |
Collapse
|
42
|
Tang JSN, Lai JKC, Bui J, Wang W, Simkin P, Gai D, Chan J, Pascoe DM, Heinze SB, Gaillard F, Lui E. Impact of Different Artificial Intelligence User Interfaces on Lung Nodule and Mass Detection on Chest Radiographs. Radiol Artif Intell 2023; 5:e220079. [PMID: 37293345 PMCID: PMC10245182 DOI: 10.1148/ryai.220079] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Revised: 02/07/2023] [Accepted: 03/02/2023] [Indexed: 06/10/2023]
Abstract
Purpose To explore the impact of different user interfaces (UIs) for artificial intelligence (AI) outputs on radiologist performance and user preference in detecting lung nodules and masses on chest radiographs. Materials and Methods A retrospective paired-reader study with a 4-week washout period was used to evaluate three different AI UIs compared with no AI output. Ten radiologists (eight radiology attending physicians and two trainees) evaluated 140 chest radiographs (81 with histologically confirmed nodules and 59 confirmed as normal with CT), with either no AI or one of three UI outputs: (a) text-only, (b) combined AI confidence score and text, or (c) combined text, AI confidence score, and image overlay. Areas under the receiver operating characteristic curve were calculated to compare radiologist diagnostic performance with each UI against their diagnostic performance without AI. Radiologists reported their UI preference. Results The area under the receiver operating characteristic curve improved when radiologists used the text-only output compared with no AI (0.87 vs 0.82; P < .001). There was no difference in performance for the combined text and AI confidence score output compared with no AI (0.77 vs 0.82; P = .46) and for the combined text, AI confidence score, and image overlay output compared with no AI (0.80 vs 0.82; P = .66). Eight of the 10 radiologists (80%) preferred the combined text, AI confidence score, and image overlay output over the other two interfaces. Conclusion Text-only UI output significantly improved radiologist performance compared with no AI in the detection of lung nodules and masses on chest radiographs, but user preference did not correspond with user performance. Keywords: Artificial Intelligence, Chest Radiograph, Conventional Radiography, Lung Nodule, Mass Detection. © RSNA, 2023.
Collapse
|
43
|
Cai Y, Chen H, Yang X, Zhou Y, Cheng KT. Dual-distribution discrepancy with self-supervised refinement for anomaly detection in medical images. Med Image Anal 2023; 86:102794. [PMID: 36934507 DOI: 10.1016/j.media.2023.102794] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Revised: 02/13/2023] [Accepted: 03/06/2023] [Indexed: 03/14/2023]
Abstract
Medical anomaly detection is a crucial yet challenging task aimed at recognizing abnormal images to assist in diagnosis. Due to the high-cost annotations of abnormal images, most methods utilize only known normal images during training and identify samples deviating from the normal profile as anomalies in the testing phase. Many readily available unlabeled images containing anomalies are thus ignored in the training phase, restricting the performance. To solve this problem, we introduce one-class semi-supervised learning (OC-SSL) to utilize known normal and unlabeled images for training, and propose Dual-distribution Discrepancy for Anomaly Detection (DDAD) based on this setting. Ensembles of reconstruction networks are designed to model the distribution of normal images and the distribution of both normal and unlabeled images, deriving the normative distribution module (NDM) and unknown distribution module (UDM). Subsequently, the intra-discrepancy of NDM and inter-discrepancy between the two modules are designed as anomaly scores. Furthermore, we propose a new perspective on self-supervised learning, which is designed to refine the anomaly scores rather than directly detect anomalies. Five medical datasets, including chest X-rays, brain MRIs and retinal fundus images, are organized as benchmarks for evaluation. Experiments on these benchmarks comprehensively compare a wide range of anomaly detection methods and demonstrate that our method achieves significant gains and outperforms the state-of-the-art. Code and organized benchmarks are available at https://github.com/caiyu6666/DDAD-ASR.
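To make the two scores concrete, the following is a hedged numerical sketch of an intra-discrepancy and an inter-discrepancy map computed from ensemble reconstructions of one test image. The reconstruction networks, the OC-SSL training, and the self-supervised refinement are omitted, and the score definitions used here are simplified assumptions rather than the paper's exact formulas.

```python
# Sketch: per-pixel anomaly maps from two ensembles of reconstructions of the
# same test image (NDM trained on normal images only, UDM on normal + unlabeled).
import torch

def anomaly_scores(ndm_recons: torch.Tensor, udm_recons: torch.Tensor):
    # ndm_recons, udm_recons: (K, H, W) reconstructions from K ensemble members.
    intra = ndm_recons.std(dim=0)                                   # disagreement within NDM
    inter = (ndm_recons.mean(dim=0) - udm_recons.mean(dim=0)).abs() # NDM vs. UDM discrepancy
    return intra, inter                                             # per-pixel score maps

def image_score(score_map: torch.Tensor) -> float:
    # Image-level score as the mean of a pixel map (one common convention).
    return score_map.mean().item()
```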
Collapse
Affiliation(s)
- Yu Cai
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
| | - Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China; Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China.
| | - Xin Yang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
| | - Yu Zhou
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
| | - Kwang-Ting Cheng
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China; Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
| |
Collapse
|
44
|
Hong S, Hwang EJ, Kim S, Song J, Lee T, Jo GD, Choi Y, Park CM, Goo JM. Methods of Visualizing the Results of an Artificial-Intelligence-Based Computer-Aided Detection System for Chest Radiographs: Effect on the Diagnostic Performance of Radiologists. Diagnostics (Basel) 2023; 13:diagnostics13061089. [PMID: 36980397 PMCID: PMC10046978 DOI: 10.3390/diagnostics13061089] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Revised: 03/02/2023] [Accepted: 03/12/2023] [Indexed: 03/16/2023] Open
Abstract
It is unclear whether the visualization methods for artificial-intelligence-based computer-aided detection (AI-CAD) of chest radiographs influence the accuracy of readers’ interpretation. We aimed to evaluate the accuracy of radiologists’ interpretations of chest radiographs using different visualization methods for the same AI-CAD. Initial chest radiographs of patients with acute respiratory symptoms were retrospectively collected. A commercialized AI-CAD was applied using three different visualization methods: (a) closed-line method, (b) heat map method, and (c) combined method. A reader test was conducted with five trainee radiologists over three interpretation sessions. In each session, the chest radiographs were interpreted using AI-CAD with one of the three visualization methods in random order. Examination-level sensitivity and accuracy, and lesion-level detection rates for clinically significant abnormalities were evaluated for the three visualization methods. The sensitivity (p = 0.007) and accuracy (p = 0.037) of the combined method were significantly higher than those of the closed-line method. Detection rates using the heat map method (p = 0.043) and the combined method (p = 0.004) were significantly higher than those using the closed-line method. The methods for visualizing AI-CAD results for chest radiographs influenced the performance of radiologists’ interpretations. Combining the closed-line and heat map methods for visualizing AI-CAD results led to the highest sensitivity and accuracy of radiologists.
Collapse
Affiliation(s)
- Sungho Hong
- Department of Radiology, Seoul National University Hospital, Seoul 03082, Republic of Korea
| | - Eui Jin Hwang
- Department of Radiology, Seoul National University Hospital, Seoul 03082, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul 03082, Republic of Korea
- Correspondence: ; Tel.: +82-2-2072-2057
| | - Soojin Kim
- Department of Radiology, Seoul National University Hospital, Seoul 03082, Republic of Korea
| | - Jiyoung Song
- Department of Radiology, Seoul National University Hospital, Seoul 03082, Republic of Korea
| | - Taehee Lee
- Department of Radiology, Seoul National University Hospital, Seoul 03082, Republic of Korea
| | - Gyeong Deok Jo
- Department of Radiology, Seoul National University Hospital, Seoul 03082, Republic of Korea
| | - Yelim Choi
- Department of Radiology, Seoul National University Hospital, Seoul 03082, Republic of Korea
| | - Chang Min Park
- Department of Radiology, Seoul National University Hospital, Seoul 03082, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul 03082, Republic of Korea
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03082, Republic of Korea
| | - Jin Mo Goo
- Department of Radiology, Seoul National University Hospital, Seoul 03082, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul 03082, Republic of Korea
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03082, Republic of Korea
| |
Collapse
|
45
|
Xue Z, Yang F, Rajaraman S, Zamzmi G, Antani S. Cross Dataset Analysis of Domain Shift in CXR Lung Region Detection. Diagnostics (Basel) 2023; 13:diagnostics13061068. [PMID: 36980375 PMCID: PMC10047562 DOI: 10.3390/diagnostics13061068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 03/03/2023] [Accepted: 03/07/2023] [Indexed: 03/18/2023] Open
Abstract
Domain shift is one of the key challenges affecting reliability in medical imaging-based machine learning predictions. It is of significant importance to investigate this issue to gain insights into its characteristics toward determining controllable parameters to minimize its impact. In this paper, we report our efforts on studying and analyzing domain shift in lung region detection in chest radiographs. We used five chest X-ray datasets, collected from different sources, which have manual markings of lung boundaries in order to conduct extensive experiments toward this goal. We compared the characteristics of these datasets from three aspects: information obtained from metadata or an image header, image appearance, and features extracted from a pretrained model. We carried out experiments to evaluate and compare model performances within each dataset and across datasets in four scenarios using different combinations of datasets. We proposed a new feature visualization method to provide explanations for the applied object detection network on the obtained quantitative results. We also examined chest X-ray modality-specific initialization, catastrophic forgetting, and model repeatability. We believe the observations and discussions presented in this work could help to shed some light on the importance of the analysis of training data for medical imaging machine learning research, and could provide valuable guidance for domain shift analysis.
Collapse
|
46
|
Anomaly Detection in Chest X-rays Based on Dual-Attention Mechanism and Multi-Scale Feature Fusion. Symmetry (Basel) 2023. [DOI: 10.3390/sym15030668] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/09/2023] Open
Abstract
The efficient and automatic detection of chest abnormalities is vital for the auxiliary diagnosis of medical images. Many studies utilize computer vision and deep learning approaches involving symmetry and asymmetry concepts to detect chest abnormalities, and achieve promising findings. However, an accurate instance-level and multi-label detection of abnormalities in chest X-rays remains a significant challenge. Here, a novel anomaly detection method for symmetric chest X-rays using dual-attention and multi-scale feature fusion is proposed. Three aspects of our method should be noted in comparison with the previous approaches. We improved the deep neural network with channel-dimensional and spatial-dimensional attention to capture the abundant contextual features. We then used an optimized multi-scale learning framework for feature fusion to adapt to the scale variation in the abnormalities. Considering the influence of the data imbalance and other factors, we introduced a seesaw loss function to flexibly adjust the sample weights and enhance the model learning efficiency. The rigorous experimental evaluation of a public chest X-ray dataset with fourteen different types of abnormalities demonstrates that our model has a mean average precision of 0.362 and outperforms existing methods.
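For orientation, the block below is a hedged PyTorch sketch of a dual (channel-dimensional plus spatial-dimensional) attention module of the kind the abstract describes. The actual detector backbone, the multi-scale fusion, and the seesaw loss are not reproduced, and the layer sizes are illustrative assumptions.

```python
# Dual-attention sketch: channel attention from pooled descriptors, followed by
# spatial attention from channel-pooled maps, both applied multiplicatively.
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                                  # x: (B, C, H, W)
        avg = x.mean(dim=(2, 3))
        mx = x.amax(dim=(2, 3))
        ca = torch.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx))
        x = x * ca.unsqueeze(-1).unsqueeze(-1)             # channel attention
        sa = torch.sigmoid(self.spatial_conv(
            torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)))
        return x * sa                                      # spatial attention

# Usage: attn = DualAttention(256); y = attn(torch.randn(2, 256, 32, 32))
```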
Collapse
|
47
|
Kim H, Jin KN, Yoo SJ, Lee CH, Lee SM, Hong H, Witanto JN, Yoon SH. Deep Learning for Estimating Lung Capacity on Chest Radiographs Predicts Survival in Idiopathic Pulmonary Fibrosis. Radiology 2023; 306:e220292. [PMID: 36283113 DOI: 10.1148/radiol.220292] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Background Total lung capacity (TLC) has been estimated with use of chest radiographs based on time-consuming methods, such as planimetric techniques and manual measurements. Purpose To develop a deep learning-based, multidimensional model capable of estimating TLC from chest radiographs and demographic variables and validate its technical performance and clinical utility with use of multicenter retrospective data sets. Materials and Methods A deep learning model was pretrained with use of 50 000 consecutive chest CT scans performed between January 2015 and June 2017. The model was fine-tuned on 3523 pairs of posteroanterior chest radiographs and plethysmographic TLC measurements from consecutive patients who underwent pulmonary function testing on the same day. The model was tested with multicenter retrospective data sets from two tertiary care centers and one community hospital, including (a) an external test set 1 (n = 207) and external test set 2 (n = 216) for technical performance and (b) patients with idiopathic pulmonary fibrosis (n = 217) for clinical utility. Technical performance was evaluated with use of various agreement measures, and clinical utility was assessed in terms of the prognostic value for overall survival with use of multivariable Cox regression. Results The mean absolute difference and within-subject SD between observed and estimated TLC were 0.69 L and 0.73 L, respectively, in the external test set 1 (161 men; median age, 70 years [IQR: 61-76 years]) and 0.52 L and 0.53 L in the external test set 2 (113 men; median age, 63 years [IQR: 51-70 years]). In patients with idiopathic pulmonary fibrosis (145 men; median age, 67 years [IQR: 61-73 years]), greater estimated TLC percentage was associated with lower mortality risk (adjusted hazard ratio, 0.97 per percent; 95% CI: 0.95, 0.98; P < .001). Conclusion A fully automatic, deep learning-based model estimated total lung capacity from chest radiographs, and the model predicted survival in idiopathic pulmonary fibrosis. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Sorkness in this issue.
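The agreement measures quoted above can be reproduced on paired data as in the short sketch below. The within-subject SD is computed here with the common paired-measurement formula sqrt(sum d² / 2n), which is an assumption about the exact definition the authors used, and the TLC values are simulated.

```python
# Agreement between observed (plethysmographic) and model-estimated TLC.
import numpy as np

def agreement(observed: np.ndarray, estimated: np.ndarray):
    d = estimated - observed
    mean_abs_diff = np.mean(np.abs(d))                     # cf. 0.69 L in test set 1
    within_subject_sd = np.sqrt(np.sum(d ** 2) / (2 * len(d)))
    return mean_abs_diff, within_subject_sd

# Example with simulated TLC values in litres:
rng = np.random.default_rng(0)
tlc_true = rng.normal(6.0, 1.2, size=200)
tlc_pred = tlc_true + rng.normal(0.0, 0.7, size=200)
print(agreement(tlc_true, tlc_pred))
```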
Collapse
Affiliation(s)
- Hyungjin Kim
- From the Department of Radiology (H.K., S.H.Y.), Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine (C.H.L., S.M.L.), and Medical Research Collaborating Center (H.H.), Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 03080, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea (H.K., K.N.J., S.H.Y.); Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Korea (K.N.J.); Department of Radiology, Hanyang University Medical Center, Seoul, Korea (S.J.Y.); and MEDICAL IP, Seoul, Korea (J.N.W., S.H.Y.)
| | - Kwang Nam Jin
- From the Department of Radiology (H.K., S.H.Y.), Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine (C.H.L., S.M.L.), and Medical Research Collaborating Center (H.H.), Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 03080, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea (H.K., K.N.J., S.H.Y.); Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Korea (K.N.J.); Department of Radiology, Hanyang University Medical Center, Seoul, Korea (S.J.Y.); and MEDICAL IP, Seoul, Korea (J.N.W., S.H.Y.)
| | - Seung-Jin Yoo
- From the Department of Radiology (H.K., S.H.Y.), Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine (C.H.L., S.M.L.), and Medical Research Collaborating Center (H.H.), Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 03080, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea (H.K., K.N.J., S.H.Y.); Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Korea (K.N.J.); Department of Radiology, Hanyang University Medical Center, Seoul, Korea (S.J.Y.); and MEDICAL IP, Seoul, Korea (J.N.W., S.H.Y.)
| | - Chang Hoon Lee
- From the Department of Radiology (H.K., S.H.Y.), Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine (C.H.L., S.M.L.), and Medical Research Collaborating Center (H.H.), Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 03080, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea (H.K., K.N.J., S.H.Y.); Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Korea (K.N.J.); Department of Radiology, Hanyang University Medical Center, Seoul, Korea (S.J.Y.); and MEDICAL IP, Seoul, Korea (J.N.W., S.H.Y.)
| | - Sang-Min Lee
- From the Department of Radiology (H.K., S.H.Y.), Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine (C.H.L., S.M.L.), and Medical Research Collaborating Center (H.H.), Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 03080, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea (H.K., K.N.J., S.H.Y.); Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Korea (K.N.J.); Department of Radiology, Hanyang University Medical Center, Seoul, Korea (S.J.Y.); and MEDICAL IP, Seoul, Korea (J.N.W., S.H.Y.)
| | - Hyunsook Hong
- From the Department of Radiology (H.K., S.H.Y.), Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine (C.H.L., S.M.L.), and Medical Research Collaborating Center (H.H.), Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 03080, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea (H.K., K.N.J., S.H.Y.); Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Korea (K.N.J.); Department of Radiology, Hanyang University Medical Center, Seoul, Korea (S.J.Y.); and MEDICAL IP, Seoul, Korea (J.N.W., S.H.Y.)
| | - Joseph Nathanael Witanto
- From the Department of Radiology (H.K., S.H.Y.), Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine (C.H.L., S.M.L.), and Medical Research Collaborating Center (H.H.), Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 03080, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea (H.K., K.N.J., S.H.Y.); Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Korea (K.N.J.); Department of Radiology, Hanyang University Medical Center, Seoul, Korea (S.J.Y.); and MEDICAL IP, Seoul, Korea (J.N.W., S.H.Y.)
| | - Soon Ho Yoon
- From the Department of Radiology (H.K., S.H.Y.), Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine (C.H.L., S.M.L.), and Medical Research Collaborating Center (H.H.), Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 03080, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea (H.K., K.N.J., S.H.Y.); Department of Radiology, SMG-SNU Boramae Medical Center, Seoul, Korea (K.N.J.); Department of Radiology, Hanyang University Medical Center, Seoul, Korea (S.J.Y.); and MEDICAL IP, Seoul, Korea (J.N.W., S.H.Y.)
| |
Collapse
|
48
|
Ohshima H, Mishima K. Oral biosciences: The annual review 2022. J Oral Biosci 2023; 65:1-12. [PMID: 36740188 DOI: 10.1016/j.job.2023.01.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2023] [Accepted: 01/25/2023] [Indexed: 02/05/2023]
Abstract
BACKGROUND The Journal of Oral Biosciences is devoted to advancing and disseminating fundamental knowledge concerning every aspect of oral biosciences. HIGHLIGHT This review features review articles in the fields of "Bone Cell Biology," "Tooth Development & Regeneration," "Tooth Bleaching," "Adipokines," "Milk Thistle," "Epithelial-Mesenchymal Transition," "Periodontitis," "Diagnosis," "Salivary Glands," "Tooth Root," "Exosome," "New Perspectives of Tooth Identification," "Dental Pulp," and "Saliva" in addition to the review articles by the winner of the "Lion Dental Research Award" ("Plastic changes in nociceptive pathways contributing to persistent orofacial pain") presented by the Japanese Association for Oral Biology. CONCLUSION The review articles in the Journal of Oral Biosciences have inspired its readers to broaden their knowledge about various aspects of oral biosciences. The current editorial review introduces these exciting review articles.
Collapse
Affiliation(s)
- Hayato Ohshima
- Division of Anatomy and Cell Biology of the Hard Tissue, Department of Tissue Regeneration and Reconstruction, Niigata University Graduate School of Medical and Dental Sciences, 2-5274 Gakkocho-dori, Chuo-ku, Niigata 951-8514, Japan.
| | - Kenji Mishima
- Division of Pathology, Department of Oral Diagnostic Sciences, Showa University School of Dentistry, 1-5-8, Hatanodai, Shinagawa-ku, Tokyo 142-8555, Japan
| |
Collapse
|
49
|
Berg A, Vandersmissen E, Wimmer M, Major D, Neubauer T, Lenis D, Cant J, Snoeckx A, Bühler K. Employing similarity to highlight differences: On the impact of anatomical assumptions in chest X-ray registration methods. Comput Biol Med 2023; 154:106543. [PMID: 36682179 DOI: 10.1016/j.compbiomed.2023.106543] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Revised: 12/15/2022] [Accepted: 01/10/2023] [Indexed: 01/18/2023]
Abstract
To facilitate both the detection and the interpretation of findings in chest X-rays, comparison with a previous image of the same patient is very valuable to radiologists. Today, the most common approach for deep learning methods to automatically inspect chest X-rays disregards the patient history and classifies only single images as normal or abnormal. Nevertheless, several methods for assisting in the task of comparison through image registration have been proposed in the past. However, as we illustrate, they tend to miss specific types of pathological changes like cardiomegaly and effusion. Due to assumptions about fixed anatomical structures or to the way they measure registration quality, they produce unnaturally deformed warp fields that impair visualization of differences between moving and fixed images. We aim to overcome these limitations through a new paradigm based on individual rib-pair segmentation for anatomy-penalized registration. Our method proves to be a natural way to limit the folding percentage of the warp field to 1/6 of the state of the art while increasing the overlap of ribs by more than 25%, yielding difference images that show pathological changes overlooked by other methods. We develop an anatomically penalized convolutional multi-stage solution on the National Institutes of Health (NIH) data set, starting from fewer than 25 fully labeled and 50 partly labeled training images, employing sequential instance memory segmentation with hole dropout, weak labeling, coarse-to-fine refinement and Gaussian mixture model histogram matching. We statistically evaluate the benefits of our method and highlight the limits of currently used metrics for registration of chest X-rays.
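The "folding percentage" used to compare warp fields is conventionally the fraction of pixels where the Jacobian determinant of the deformation is non-positive, i.e. where the warp folds onto itself. The NumPy sketch below computes it for a 2-D displacement field; it is a generic illustration, not the authors' implementation.

```python
# Folding percentage of a 2-D displacement field via the Jacobian determinant
# of the mapping phi(y, x) = (y + dy, x + dx).
import numpy as np

def folding_percentage(disp: np.ndarray) -> float:
    # disp: (2, H, W) displacement field in pixels (dy, dx).
    dy, dx = disp
    dphiy_dy = 1.0 + np.gradient(dy, axis=0)
    dphiy_dx = np.gradient(dy, axis=1)
    dphix_dy = np.gradient(dx, axis=0)
    dphix_dx = 1.0 + np.gradient(dx, axis=1)
    jac_det = dphiy_dy * dphix_dx - dphiy_dx * dphix_dy
    return float(np.mean(jac_det <= 0) * 100.0)

# A smooth, small-amplitude field folds rarely; a noisy one folds often.
rng = np.random.default_rng(0)
print(folding_percentage(rng.normal(0, 0.2, size=(2, 128, 128))))
```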
Collapse
Affiliation(s)
- Astrid Berg
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
| | - Eva Vandersmissen
- Agfa NV, Radiology Solutions R&D, Septestraat 27, 2640 Mortsel, Belgium.
| | - Maria Wimmer
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
| | - David Major
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
| | - Theresa Neubauer
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
| | - Dimitrios Lenis
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
| | - Jeroen Cant
- Agfa NV, Radiology Solutions R&D, Septestraat 27, 2640 Mortsel, Belgium.
| | - Annemiek Snoeckx
- Department of Radiology, Antwerp University Hospital, Drie Eikenstraat 655, 2650 Edegem, Belgium; Faculty of Medicine and Health Sciences, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk, Belgium.
| | - Katja Bühler
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
| |
Collapse
|
50
|
Gefter WB, Post BA, Hatabu H. Commonly Missed Findings on Chest Radiographs: Causes and Consequences. Chest 2023; 163:650-661. [PMID: 36521560 PMCID: PMC10154905 DOI: 10.1016/j.chest.2022.10.039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 09/14/2022] [Accepted: 10/09/2022] [Indexed: 12/14/2022] Open
Abstract
Chest radiography (CXR) continues to be the most frequently performed imaging examination worldwide, yet it remains prone to frequent errors in interpretation. These pose potential adverse consequences to patients and are a leading motivation for medical malpractice lawsuits. Commonly missed CXR findings and the principal causes of these errors are reviewed and illustrated. Perceptual errors are the predominant source of these missed findings. The medicolegal implications of such errors are explained. Awareness of commonly missed CXR findings, their causes, and their consequences are important in developing approaches to reduce and mitigate these errors.
Collapse
Affiliation(s)
- Warren B Gefter
- Department of Radiology, Penn Medicine, University of Pennsylvania, Philadelphia, PA
| | | | - Hiroto Hatabu
- Center for Pulmonary Functional Imaging, Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA.
| |
Collapse
|