1. Stueckle CA, Haage P. The radiologist as a physician - artificial intelligence as a way to overcome tension between the patient, technology, and referring physicians - a narrative review. Fortschr Röntgenstr 2024. [PMID: 38569517 DOI: 10.1055/a-2271-0799]
Abstract
BACKGROUND Steadily growing volumes of imaging data lead to a shortage of radiologists' time. Systems based on artificial intelligence (AI) offer opportunities to relieve this burden. AI systems are usually optimized for one radiological area. Radiologists must understand the basic features of how such a system works technically in order to assess its weaknesses and possible errors and to exploit its strengths. This "explainability" creates trust in an AI system and shows its limits. METHOD Based on an expanded Medline search for the key words "radiology, artificial intelligence, referring physician interaction, patient interaction, job satisfaction, communication of findings, expectations", subjectively selected additional relevant articles were considered for this narrative review. RESULTS The use of AI is well advanced, especially in radiology. The developer should provide the radiologist with clear explanations of how the system works. All systems on the market have strengths and weaknesses. Some optimizations are unintentionally too specific, as they are often adapted too precisely to a certain environment that does not exist in practice - this is known as "overfitting". There are also specific weak points in the systems, so-called "adversarial examples", which lead to fatal misdiagnoses by the AI even though the radiologist cannot visually distinguish them from an unremarkable finding. Users must know which diseases the system is trained for, which organ systems the AI recognizes and takes into account, and, accordingly, which it does not properly assess. This means that users can and must critically review the results and adjust the findings if necessary. Correctly applied, AI can save the radiologist time: a radiologist who knows how the system works only needs a short amount of time to check the results.
The time saved can be used for communication with patients and referring physicians and thus contribute to higher job satisfaction. CONCLUSION Radiology is a constantly evolving specialty with enormous responsibility, as radiologists often make the diagnosis that determines treatment. AI-supported systems should be used consistently to provide relief and support. Radiologists need to know the strengths, weaknesses, and areas of application of these AI systems in order to save time. The time gained can be used for communication with patients and referring physicians. KEY POINTS · Explainable AI systems help to improve workflow and to save time. · The physician must critically review AI results, taking the limitations of the AI into account. · An AI system will only provide useful results if it has been adapted to the data type and data origin. · A communicating radiologist who takes an interest in the patient is important for the visibility of the discipline.
Affiliation(s)
- Patrick Haage
- Diagnostic and Interventional Radiology, HELIOS Universitätsklinikum Wuppertal, Germany
2. Malik H, Anees T. Multi-modal deep learning methods for classification of chest diseases using different medical imaging and cough sounds. PLoS One 2024; 19:e0296352. [PMID: 38470893 DOI: 10.1371/journal.pone.0296352]
Abstract
Chest disease refers to a wide range of conditions affecting the lungs, such as COVID-19, lung cancer (LC), consolidation lung (COL), and many more. When diagnosing chest disorders, medical professionals may be thrown off by overlapping symptoms such as fever, cough, and sore throat. Researchers and medical professionals use chest X-rays (CXR), cough sounds, and computed tomography (CT) scans to diagnose chest disorders. The present study aims to classify nine different chest disorders, including COVID-19, LC, COL, atelectasis (ATE), tuberculosis (TB), pneumothorax (PNEUTH), edema (EDE), and pneumonia (PNEU). To this end, we propose four novel convolutional neural network (CNN) models that train distinct image-level representations for the nine chest disease classes by extracting features from images. The proposed CNNs employ several approaches such as max-pooling layers, batch normalization layers (BANL), dropout, rank-based average pooling (RBAP), and multiple-way data generation (MWDG). The scalogram method is used to transform coughing sounds into a visual representation. Before training the developed model, the SMOTE approach is used to balance the CXR and CT scans as well as the cough sound images (CSI) across the nine chest disorders. The CXR, CT scan, and CSI data used for training and evaluating the proposed model come from 24 publicly available benchmark chest illness datasets. The classification performance of the proposed model is compared with that of seven baseline models, namely VGG-19, ResNet-101, ResNet-50, DenseNet-121, EfficientNetB0, DenseNet-201, and Inception-V3, in addition to state-of-the-art (SOTA) classifiers. The effectiveness of the proposed model is further demonstrated by ablation experiments.
The proposed model was successful in achieving an accuracy of 99.01%, making it superior to both the baseline models and the SOTA classifiers. As a result, the proposed approach is capable of offering significant support to radiologists and other medical professionals.
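The SMOTE balancing step mentioned in this abstract interpolates synthetic minority-class samples between nearest neighbours. The following is an illustrative, hand-rolled sketch of that idea (a simplified stand-in with a hypothetical function name, not the authors' pipeline):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """SMOTE-style oversampling: each synthetic sample is a random
    interpolation between a minority point and one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    n = len(X_min)
    k = min(k, n - 1)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]     # k nearest neighbours per point
    out = np.empty((n_new, X_min.shape[1]))
    for t in range(n_new):
        i = rng.integers(n)               # pick a random minority sample
        j = nn[i, rng.integers(k)]        # and a random neighbour of it
        lam = rng.random()                # interpolation factor in [0, 1)
        out[t] = X_min[i] + lam * (X_min[j] - X_min[i])
    return out
```

In practice one would apply this (or the `imbalanced-learn` implementation) to the flattened image-feature vectors of the under-represented disease classes before training.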
Affiliation(s)
- Hassaan Malik
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan
- Tayyaba Anees
- Department of Software Engineering, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan
3. Highly accurate multiclass classification of respiratory system diseases from chest radiography images using deep transfer learning technique. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104745]
4. Gardiyanoğlu E, Ünsal G, Akkaya N, Aksoy S, Orhan K. Automatic Segmentation of Teeth, Crown-Bridge Restorations, Dental Implants, Restorative Fillings, Dental Caries, Residual Roots, and Root Canal Fillings on Orthopantomographs: Convenience and Pitfalls. Diagnostics (Basel) 2023; 13:1487. [PMID: 37189586 DOI: 10.3390/diagnostics13081487]
Abstract
BACKGROUND The aim of our study is to provide successful automatic segmentation of various objects on orthopantomographs (OPGs). METHODS 8138 OPGs obtained from the archives of the Department of Dentomaxillofacial Radiology were included. OPGs were converted into PNGs and transferred to the segmentation tool's database. All teeth, crown-bridge restorations, dental implants, composite-amalgam fillings, dental caries, residual roots, and root canal fillings were manually segmented by two experts with the manual drawing semantic segmentation technique. RESULTS The intra-class correlation coefficient (ICC) for both inter- and intra-observers for manual segmentation was excellent (ICC > 0.75). The intra-observer ICC was found to be 0.994, while the inter-observer reliability was 0.989. No significant difference was detected amongst observers (p = 0.947). The calculated DSC and accuracy values across all OPGs were 0.85 and 0.95 for the tooth segmentation, 0.88 and 0.99 for dental caries, 0.87 and 0.99 for dental restorations, 0.93 and 0.99 for crown-bridge restorations, 0.94 and 0.99 for dental implants, 0.78 and 0.99 for root canal fillings, and 0.78 and 0.99 for residual roots, respectively. CONCLUSIONS Thanks to faster and automated diagnoses on 2D as well as 3D dental images, dentists will have higher diagnosis rates in a shorter time even without excluding cases.
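The DSC (Dice similarity coefficient) and accuracy values reported above are standard overlap measures between a predicted and a reference binary mask. A minimal sketch of how they are computed (illustrative only, not the study's evaluation code):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0  # both empty -> perfect match

def pixel_accuracy(pred, gt):
    """Fraction of pixels on which the two masks agree."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    return (pred == gt).mean()
```

Note that accuracy can stay near 0.99 even when DSC is much lower (as for the root canal fillings above), because small structures leave most pixels as easy background.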
Affiliation(s)
- Emel Gardiyanoğlu
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
- Gürkan Ünsal
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
- DESAM Institute, Near East University, 99138 Nicosia, Cyprus
- Nurullah Akkaya
- Department of Computer Engineering, Applied Artificial Intelligence Research Centre, Near East University, 99138 Nicosia, Cyprus
- Seçil Aksoy
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
- Kaan Orhan
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, 06560 Ankara, Turkey
5. Okimoto N, Yasaka K, Kaiume M, Kanemaru N, Suzuki Y, Abe O. Improving detection performance of hepatocellular carcinoma and interobserver agreement for liver imaging reporting and data system on CT using deep learning reconstruction. Abdom Radiol (NY) 2023; 48:1280-1289. [PMID: 36757454 PMCID: PMC10115733 DOI: 10.1007/s00261-023-03834-z]
Abstract
PURPOSE This study aimed to compare the hepatocellular carcinoma (HCC) detection performance, interobserver agreement for Liver Imaging Reporting and Data System (LI-RADS) categories, and image quality between deep learning reconstruction (DLR) and conventional hybrid iterative reconstruction (Hybrid IR) in CT. METHODS This retrospective study included patients who underwent abdominal dynamic contrast-enhanced CT between October 2021 and March 2022. Arterial, portal, and delayed phase images were reconstructed using DLR and Hybrid IR. Two blinded readers independently read the image sets, detecting HCCs, assigning LI-RADS categories, and evaluating image quality. RESULTS A total of 26 patients with HCC (mean age, 73 years ± 12.3) and 23 patients without HCC (mean age, 66 years ± 14.7) were included. The figures of merit (FOM) for the jackknife alternative free-response receiver operating characteristic analysis in detecting HCC, averaged over the readers, were 0.925 (reader 1, 0.937; reader 2, 0.913) for DLR and 0.878 (reader 1, 0.904; reader 2, 0.851) for Hybrid IR, and the FOM for DLR was significantly higher than that for Hybrid IR (p = 0.038). The interobserver agreement (Cohen's weighted kappa) for LI-RADS categories was moderate for DLR (0.595; 95% CI, 0.585-0.605) and significantly superior to Hybrid IR (0.568; 95% CI, 0.553-0.582). According to both readers, DLR was significantly superior to Hybrid IR in terms of image quality (p ≤ 0.021). CONCLUSION DLR improved HCC detection, interobserver agreement for LI-RADS categories, and image quality in evaluations of HCC compared to Hybrid IR in abdominal dynamic contrast-enhanced CT.
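Cohen's weighted kappa, used above for the LI-RADS agreement analysis, can be computed directly from the two readers' ordinal category assignments. A sketch, assuming linear disagreement weights (the study's exact weighting scheme may differ):

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weights="linear"):
    """Cohen's weighted kappa for two raters' ordinal scores in 0..n_cat-1."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    # observed joint frequency matrix
    O = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()
    # expected matrix under independence, from the marginals
    E = np.outer(O.sum(axis=1), O.sum(axis=0))
    # disagreement weight matrix: |i-j| (linear) or (i-j)^2 (quadratic)
    i, j = np.indices((n_cat, n_cat))
    W = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    return 1 - (W * O).sum() / (W * E).sum()
```

Values around 0.41-0.60 are conventionally read as "moderate" agreement, matching the interpretation given above.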
Affiliation(s)
- Naomasa Okimoto
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Masafumi Kaiume
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Noriko Kanemaru
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Yuichi Suzuki
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
6. Validation of a Deep Learning Model for Detecting Chest Pathologies from Digital Chest Radiographs. Diagnostics (Basel) 2023; 13:557. [PMID: 36766661 PMCID: PMC9914339 DOI: 10.3390/diagnostics13030557]
Abstract
Purpose: Manual interpretation of chest radiographs is a challenging task and is prone to errors. An automated system capable of categorizing chest radiographs based on the pathologies identified could aid in the timely and efficient diagnosis of chest pathologies. Method: For this retrospective study, 4476 chest radiographs were collected between January and April 2021 from two tertiary care hospitals. Three expert radiologists established the ground truth, and all radiographs were analyzed using a deep-learning AI model to detect suspicious ROIs in the lungs, pleura, and cardiac regions. Three test readers (different from the radiologists who established the ground truth) independently reviewed all radiographs in two sessions (unaided and AI-aided mode) with a washout period of one month. Results: The model demonstrated an aggregate AUROC of 91.2% and a sensitivity of 88.4% in detecting suspicious ROIs in the lungs, pleura, and cardiac regions. These results outperform unaided human readers, who achieved an aggregate AUROC of 84.2% and sensitivity of 74.5% for the same task. When using AI, the aided readers obtained an aggregate AUROC of 87.9% and a sensitivity of 85.1%. The average time taken by the test readers to read a chest radiograph decreased by 21% (p < 0.01) when using AI. Conclusion: The model outperformed all three human readers and demonstrated high AUROC and sensitivity across two independent datasets. When compared to unaided interpretations, AI-aided interpretations were associated with significant improvements in reader performance and chest radiograph interpretation time.
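The AUROC figures reported above reduce to a rank statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney U identity). A self-contained sketch of that computation (illustrative only, not the study's analysis code):

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via tie-averaged ranks and the Mann-Whitney U identity."""
    y = np.asarray(y_true).astype(bool)
    s = np.asarray(scores, dtype=float)
    order = np.argsort(s)
    sorted_s = s[order]
    ranks = np.empty(len(s), dtype=float)
    i = 0
    while i < len(s):                      # assign average ranks to ties
        j = i
        while j + 1 < len(s) and sorted_s[j + 1] == sorted_s[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # 1-based average rank
        i = j + 1
    n_pos, n_neg = y.sum(), (~y).sum()
    # U statistic of the positives, normalized to [0, 1]
    return (ranks[y].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUROC of 0.912 thus means the model ranks a random abnormal radiograph above a random normal one about 91% of the time.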
7. Irmici G, Cè M, Caloro E, Khenkina N, Della Pepa G, Ascenti V, Martinenghi C, Papa S, Oliva G, Cellina M. Chest X-ray in Emergency Radiology: What Artificial Intelligence Applications Are Available? Diagnostics (Basel) 2023; 13:216. [PMID: 36673027 PMCID: PMC9858224 DOI: 10.3390/diagnostics13020216]
Abstract
Due to its widespread availability, low cost, feasibility at the patient's bedside and accessibility even in low-resource settings, chest X-ray is one of the most requested examinations in radiology departments. Whilst it provides essential information on thoracic pathology, it can be difficult to interpret and is prone to diagnostic errors, particularly in the emergency setting. The increasing availability of large chest X-ray datasets has allowed the development of reliable Artificial Intelligence (AI) tools to help radiologists in everyday clinical practice. AI integration into the diagnostic workflow would benefit patients, radiologists, and healthcare systems in terms of improved and standardized reporting accuracy, quicker diagnosis, more efficient management, and appropriateness of the therapy. This review article aims to provide an overview of the applications of AI for chest X-rays in the emergency setting, emphasizing the detection and evaluation of pneumothorax, pneumonia, heart failure, and pleural effusion.
Affiliation(s)
- Giovanni Irmici
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Maurizio Cè
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Elena Caloro
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Natallia Khenkina
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Gianmarco Della Pepa
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Velio Ascenti
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Carlo Martinenghi
- Radiology Department, San Raffaele Hospital, Via Olgettina 60, 20132 Milan, Italy
- Sergio Papa
- Unit of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Giancarlo Oliva
- Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Milano, Piazza Principessa Clotilde 3, 20121 Milan, Italy
- Michaela Cellina
- Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Milano, Piazza Principessa Clotilde 3, 20121 Milan, Italy
8. de Margerie-Mellon C, Chassagnon G. Artificial intelligence: A critical review of applications for lung nodule and lung cancer. Diagn Interv Imaging 2023; 104:11-17. [PMID: 36513593 DOI: 10.1016/j.diii.2022.11.007]
Abstract
Artificial intelligence (AI) is a broad concept that usually refers to computer programs that can learn from data and perform certain specific tasks. In recent years, the growth of deep learning, a successful technique for computer vision tasks that does not require explicit programming, coupled with the availability of large imaging databases, has fostered the development of multiple applications in the medical imaging field, especially for lung nodules and lung cancer, mostly through convolutional neural networks (CNN). Some of the first applications of AI in this field were dedicated to automated detection of lung nodules on X-ray and computed tomography (CT) examinations, with performances now reaching or exceeding those of radiologists. For lung nodule segmentation, CNN-based algorithms applied to CT images show excellent spatial overlap with manual segmentation, even for irregular and ground glass nodules. A third application of AI is the classification of lung nodules as malignant or benign, which could limit the number of follow-up CT examinations for less suspicious lesions. Several algorithms have demonstrated excellent capabilities for predicting the malignancy risk when a nodule is discovered. These applications of AI for lung nodules are particularly appealing in the context of lung cancer screening. In the field of lung cancer, AI tools applied to lung imaging have been investigated for distinct aims. First, they could play a role in the non-invasive characterization of tumors, especially for histological subtype and somatic mutation prediction, with a potential therapeutic impact. Additionally, they could help predict patient prognosis, in combination with clinical data.
Despite these encouraging perspectives, clinical implementation of AI tools is only beginning, owing to the limited generalizability of published studies, the opaque inner workings of the models, and the scarcity of data on the impact of such tools on radiologists' decisions and on patient outcomes. Radiologists must be active participants in the process of evaluating AI tools, as such tools could support their daily work and offer them more time for high-added-value tasks.
Affiliation(s)
- Constance de Margerie-Mellon
- Université Paris Cité, Laboratory of Imaging Biomarkers, Center for Research on Inflammation, UMR 1149, INSERM, 75018 Paris, France; Department of Radiology, Hôpital Saint-Louis APHP, 75010 Paris, France
- Guillaume Chassagnon
- Université Paris Cité, Faculté de Médecine, 75006 Paris, France; Department of Radiology, Hôpital Cochin APHP, 75014 Paris, France
9. Zhang S, Tang T, Peng X, Zhang Y, Yang W, Li W, Xin X, Zhang J, Wang W, Zhang B. Automatic Localization and Identification of Thoracic Diseases from Chest X-rays with Deep Learning. Curr Med Imaging 2022; 18:1416-1425. [PMID: 35593336 DOI: 10.2174/1573405618666220518110113]
Abstract
BACKGROUND There are numerous difficulties in using deep learning to automatically locate and identify diseases in chest X-rays (CXR). The two most prevalent are the lack of labeled data on disease locations and poor model transferability between datasets. This study aims to tackle these problems. METHODS We built a new form of bounding box dataset and developed a two-stage deep learning model for disease localization and identification in CXRs. Unlike all previous datasets, ours marks anomalous regions in CXRs but not the corresponding diseases. The advantages of this design are reduced annotation labor and fewer possible image-labeling errors. The two-stage model combines the robustness of the region proposal network, the feature pyramid network, and multi-instance learning techniques. We trained and validated our model with the new bounding box dataset and the CheXpert dataset, then tested its classification and localization performance on an external dataset, the official test split of ChestX-ray14. RESULTS For classification, the mean area under the receiver operating characteristic curve (AUC) of our model on the CheXpert validation dataset was 0.912, which was 0.021 higher than the baseline model. The mean AUC of our model on the external test set was 0.784, whereas the state-of-the-art model achieved 0.773. The localization results showed performance comparable to state-of-the-art models. CONCLUSION Our model exhibits good transferability between datasets. The new bounding box dataset proved useful and offers an alternative technique for compiling disease localization datasets.
Affiliation(s)
- Shuai Zhang
- School of Physics, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, China
- Tianyi Tang
- School of Physics, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, China
- Xin Peng
- Department of Radiology, Drum Tower Hospital, Nanjing University, Nanjing, China
- Yanqiu Zhang
- Department of Radiology, Drum Tower Hospital, Nanjing University, Nanjing, China
- Wen Yang
- Department of Radiology, Drum Tower Hospital, Nanjing University, Nanjing, China
- Wenfei Li
- School of Physics, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, China
- Institute for Brain Sciences, Nanjing University
- Xiaoyan Xin
- Department of Radiology, Drum Tower Hospital, Nanjing University, Nanjing, China
- Jian Zhang
- School of Physics, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, China
- Institute for Brain Sciences, Nanjing University, Nanjing, China
- Wei Wang
- School of Physics, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, China
- Institute for Brain Sciences, Nanjing University, Nanjing, China
- Bing Zhang
- Department of Radiology, Drum Tower Hospital, Nanjing University, Nanjing, China
- Institute for Brain Sciences, Nanjing University, Nanjing, China
10. Huang X, Li B, Huang T, Yuan S, Wu W, Yin H, Lyu J. External validation based on transfer learning for diagnosing atelectasis using portable chest X-rays. Front Med (Lausanne) 2022; 9:920040. [PMID: 35935769 PMCID: PMC9353169 DOI: 10.3389/fmed.2022.920040]
Abstract
Background Although there has been a large amount of research on medical image classification, few studies have focused specifically on portable chest X-rays. This study assessed the feasibility of a transfer learning method for detecting atelectasis on portable chest X-rays, and its external validation, based on the analysis of a large dataset. Methods From the Medical Information Mart for Intensive Care Chest X-ray (MIMIC-CXR) database, 14 categories were obtained using natural language processing tags, among which 45,808 frontal chest radiographs were labeled "atelectasis" and 75,455 were labeled "no finding." A total of 60,000 images were extracted, comprising images labeled "atelectasis" and images labeled "no finding." The data were categorized into "normal" and "atelectasis," evenly distributed and randomly divided into three cohorts (training, validation, and testing) at a ratio of about 8:1:1. This retrospective study also extracted 300 X-ray images labeled "atelectasis" and "normal" from patients in ICUs of The First Affiliated Hospital of Jinan University, which served as an external dataset for verification. Performance was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and positive predictive value derived from transfer learning training. Results Training on the internal training set took 105 min and 6 s. The AUC, sensitivity, specificity, and accuracy were 88.57, 75.10, 88.30, and 81.70%. On the external validation set, the obtained AUC, sensitivity, specificity, and accuracy were 98.39, 70.70, 100, and 86.90%. Conclusion This study found that, when detecting atelectasis, a model obtained by transfer training with sufficiently large datasets achieves excellent external validation performance and accurate localization of lesions.
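The 8:1:1 cohort split described above is a routine shuffled partition of the image indices. A sketch with a hypothetical helper name (not the authors' code):

```python
import numpy as np

def split_811(n, rng=0):
    """Shuffle n sample indices and split them ~8:1:1 into
    train / validation / test cohorts."""
    idx = np.random.default_rng(rng).permutation(n)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])
```

For the 60,000 images above this yields cohorts of 48,000, 6,000, and 6,000; fixing the seed keeps the split reproducible.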
Affiliation(s)
- Xiaxuan Huang
- Department of Neurology, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Department of Clinical Research, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Baige Li
- Department of Radiology, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Tao Huang
- Department of Clinical Research, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Shiqi Yuan
- Department of Neurology, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Department of Clinical Research, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Wentao Wu
- School of Public Health, Xi’an Jiaotong University Health Science Center, Xi’an, China
- Haiyan Yin
- Department of Intensive Care Unit, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Jun Lyu
- Department of Clinical Research, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Guangzhou, China
11. Ravi V, Acharya V, Alazab M. A multichannel EfficientNet deep learning-based stacking ensemble approach for lung disease detection using chest X-ray images. Cluster Comput 2022; 26:1181-1203. [PMID: 35874187 PMCID: PMC9295885 DOI: 10.1007/s10586-022-03664-6]
Abstract
This paper proposes a multichannel deep learning approach for lung disease detection using chest X-rays. The multichannel models used in this work are the pretrained EfficientNetB0, EfficientNetB1, and EfficientNetB2 models. The features from the EfficientNet models are fused together and passed through more than one non-linear fully connected layer. Finally, the features are passed into a stacked ensemble learning classifier for lung disease detection, which contains random forest and SVM in the first stage and logistic regression in the second stage. The performance of the proposed method is studied in detail for more than one lung disease, namely pneumonia, tuberculosis (TB), and COVID-19, and is compared with similar methods to show that it is robust and can achieve better results. In all the experiments, the proposed method outperformed similar existing lung disease methods, indicating that it is robust and generalizes to unseen chest X-ray samples. To ensure that the features learned by the proposed method are optimal, t-SNE feature visualization is shown for all three lung disease models. Overall, the proposed method achieved 98% detection accuracy for pediatric pneumonia, 99% for TB, and 98% for COVID-19. The proposed method can be used as a tool for point-of-care diagnosis by healthcare radiologists.
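The two-stage stacked ensemble described above (random forest and SVM feeding a logistic-regression meta-learner) can be expressed with scikit-learn's StackingClassifier. A minimal sketch operating on toy stand-ins for the fused EfficientNet feature vectors (illustrative, not the paper's implementation; hyperparameters are assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def build_stacker():
    """Stage 1: random forest + SVM; stage 2: logistic regression."""
    return StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
            ("svm", SVC(probability=True, random_state=0)),
        ],
        final_estimator=LogisticRegression(),
        cv=3,  # out-of-fold stage-1 predictions train the meta-learner
    )

# toy 8-dimensional "fused features" for two well-separated classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 8)), rng.normal(3, 1, (40, 8))])
y = np.array([0] * 40 + [1] * 40)
clf = build_stacker().fit(X, y)
```

Training the meta-learner on out-of-fold predictions (the `cv` argument) is what keeps the second stage from simply memorizing the first stage's training-set outputs.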
Affiliation(s)
- Vinayakumar Ravi
- Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar, Saudi Arabia
- Vasundhara Acharya
- Manipal Institute of Technology (MIT), Manipal Academy of Higher Education (MAHE), Manipal, India
- Mamoun Alazab
- College of Engineering, IT and Environment, Charles Darwin University, Casuarina, NT, Australia
12. Zhang R, Yang F, Luo Y, Liu J, Wang C. Learning Invariant Representation for Unsupervised Domain Adaptive Thorax Disease Classification. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.06.015]
13
Deep Learning in Multi-Class Lung Diseases’ Classification on Chest X-ray Images. Diagnostics (Basel) 2022; 12:diagnostics12040915. [PMID: 35453963 PMCID: PMC9025806 DOI: 10.3390/diagnostics12040915] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Revised: 03/28/2022] [Accepted: 04/04/2022] [Indexed: 12/04/2022] Open
Abstract
Chest X-ray (CXR) imagery enables earlier and easier lung disease diagnosis. Therefore, in this paper, we propose a deep learning method using a transfer learning technique to classify lung diseases on CXR images and improve the diagnostic performance of computer-aided diagnosis (CAD) systems. Our proposed method is one-step, end-to-end learning, which means that raw CXR images are directly inputted into a deep learning model (EfficientNetV2-M) to extract features that identify disease categories. We experimented with our proposed method on three classes (normal, pneumonia, and pneumothorax) of the U.S. National Institutes of Health (NIH) data set and achieved validation performances of loss = 0.6933, accuracy = 82.15%, sensitivity = 81.40%, and specificity = 91.65%. We also experimented on the Cheonan Soonchunhyang University Hospital (SCH) data set with four classes (normal, pneumonia, pneumothorax, and tuberculosis) and achieved validation performances of loss = 0.7658, accuracy = 82.20%, sensitivity = 81.40%, and specificity = 94.48%; testing accuracy for the normal, pneumonia, pneumothorax, and tuberculosis classes was 63.60%, 82.30%, 82.80%, and 89.90%, respectively.
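The validation figures quoted in this abstract (accuracy, sensitivity, specificity) all derive from confusion-matrix counts; a minimal sketch with made-up counts shows how the three metrics relate. The counts here are illustrative, not the paper's data.

```python
def binary_metrics(tp: int, fn: int, fp: int, tn: int):
    """Accuracy, sensitivity (recall), and specificity from confusion counts."""
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts: 100 positives (81 caught), 100 negatives (92 cleared).
acc, sens, spec = binary_metrics(tp=81, fn=19, fp=8, tn=92)
# acc = 0.865, sens = 0.81, spec = 0.92
```

Note that accuracy can sit between sensitivity and specificity or be dominated by the majority class, which is why the abstract reports all three.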
14
Li Y, Zaheri S, Nguyen K, Liu L, Hassanipour F, Pace BS, Bleris L. Machine learning-based approaches for identifying human blood cells harboring CRISPR-mediated fetal chromatin domain ablations. Sci Rep 2022; 12:1481. [PMID: 35087158 PMCID: PMC8795181 DOI: 10.1038/s41598-022-05575-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Accepted: 12/17/2021] [Indexed: 11/08/2022] Open
Abstract
Two common hemoglobinopathies, sickle cell disease (SCD) and β-thalassemia, arise from genetic mutations within the β-globin gene. In this work, we identified a 500-bp motif (Fetal Chromatin Domain, FCD) upstream of the human γ-globin locus and showed that the removal of this motif using CRISPR technology reactivates the expression of γ-globin. Next, we present two different cell morphology-based machine learning approaches that can be used to identify human blood cells (KU-812) that harbor CRISPR-mediated FCD genetic modifications. Three candidate models from the first approach, which uses the multilayer perceptron algorithm (MLP 20-26, MLP 26-18, and MLP 30-26) and flow cytometry-derived cellular data, yielded 0.83 precision, 0.80 recall, 0.82 accuracy, and 0.90 area under the ROC (receiver operating characteristic) curve when predicting the edited cells. In comparison, the candidate model from the second approach, which uses deep learning (T2D5) and DIC microscopy-derived imaging data, performed with lower accuracy (0.80) and ROC AUC (0.87). We envision that equivalent machine learning-based models can complement currently available genotyping protocols for specific genetic modifications that result in morphological changes in human cells.
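A minimal sketch of one candidate model from the first approach: the name "MLP 20-26" is read here as two hidden layers of 20 and 26 units, which is an assumption, and synthetic vectors stand in for the flow cytometry-derived cellular features.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))    # stand-in cellular feature vectors
y = rng.integers(0, 2, size=100)  # hypothetical labels: 1 = CRISPR-edited, 0 = unedited

# Two hidden layers sized per the assumed reading of "MLP 20-26".
mlp = MLPClassifier(hidden_layer_sizes=(20, 26), max_iter=300, random_state=1)
mlp.fit(X, y)
proba = mlp.predict_proba(X)      # per-cell probability of being edited
```

Thresholding the edited-class probability then yields the precision/recall trade-off that the abstract summarizes with the ROC AUC.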
Affiliation(s)
- Yi Li, Bioengineering Department and Center for Systems Biology, The University of Texas at Dallas, Richardson, TX, USA
- Shadi Zaheri, Department of Mechanical Engineering, The University of Texas at Dallas, Richardson, TX, USA
- Khai Nguyen, Bioengineering Department and Center for Systems Biology, The University of Texas at Dallas, Richardson, TX, USA
- Li Liu, Department of Biological Sciences, The University of Texas at Dallas, Richardson, TX, USA
- Fatemeh Hassanipour, Department of Mechanical Engineering, The University of Texas at Dallas, Richardson, TX, USA
- Betty S Pace, Department of Pediatrics, Augusta University, Augusta, GA, USA
- Leonidas Bleris, Bioengineering Department, Center for Systems Biology, and Department of Biological Sciences, The University of Texas at Dallas, Richardson, TX, USA