1
Farhat MR, Jacobson KR. For Tuberculosis, Not "To Screen or Not to Screen?" but "Who?" and "How?". Clin Infect Dis 2024; 78:1677-1679. PMID: 38636953; PMCID: PMC11175681; DOI: 10.1093/cid/ciae058.
Abstract
Active case finding leveraging new molecular diagnostics and chest X-rays with automated interpretation algorithms is increasingly being developed for high-risk populations to drive down tuberculosis incidence. We consider why such an approach did not deliver a decline in tuberculosis prevalence in Brazilian prison populations and what to consider next.
Affiliation(s)
- Maha Reda Farhat
- Department of Biomedical Informatics, Harvard Medical School
- Pulmonary and Critical Care Medicine, Massachusetts General Hospital
- Karen Rita Jacobson
- Section of Infectious Diseases, Department of Medicine, Boston Medical Center and Boston University Chobanian and Avedisian School of Medicine, Boston, Massachusetts
2
Guo L, Zhou C, Xu J, Huang C, Yu Y, Lu G. Deep Learning for Chest X-ray Diagnosis: Competition Between Radiologists with or Without Artificial Intelligence Assistance. J Imaging Inform Med 2024; 37:922-934. PMID: 38332402; PMCID: PMC11169143; DOI: 10.1007/s10278-024-00990-6.
Abstract
This study aimed to assess the performance of a deep learning algorithm in helping radiologists achieve improved efficiency and accuracy in chest radiograph diagnosis. We adopted a deep learning algorithm to concurrently detect the presence of normal findings and 13 different abnormalities in chest radiographs and evaluated its performance in assisting radiologists. In the AI-assisted reads, each competing radiologist determined the presence or absence of these signs with reference to the label provided by the AI. The 100 radiographs were randomly divided into two sets for evaluation: one read without AI assistance (control group) and one with AI assistance (test group). The accuracy, false-positive rate, false-negative rate, and analysis time of 111 radiologists (29 senior, 32 intermediate, and 50 junior) were evaluated. A radiologist started with 14 points for each image read, with 1 point deducted for each incorrect answer and none for a correct one; each reader's final score was calculated automatically by the backend. We compared each radiologist's mean score in the two groups to evaluate performance with and without AI assistance. The average score of the 111 radiologists was 597 (587-605) in the control group and 619 (612-626) in the test group (P < 0.001). The time spent by the 111 radiologists on the control and test groups was 3279 (2972-3941) and 1926 (1710-2432) s, respectively (P < 0.001). Performance in the two groups was also evaluated by the area under the receiver operating characteristic curve (AUC). The radiologists performed better with AI assistance on recognition of normal findings, pulmonary fibrosis, heart shadow enlargement, mass, pleural effusion, and pulmonary consolidation, with AUCs of 1.0, 0.950, 0.991, 1.0, 0.993, and 0.982, respectively. The radiologists alone performed better on recognition of aortic calcification (0.993), calcification (0.933), cavity (0.963), nodule (0.923), pleural thickening (0.957), and rib fracture (0.987). This competition verified the positive effect of deep learning methods in assisting radiologists in interpreting chest X-rays: AI assistance can improve both the efficacy and efficiency of radiologists.
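The per-image scoring rule described in this abstract (14 points per radiograph, one for each binary label, minus 1 per wrong answer) can be sketched directly; the function names and example labels below are illustrative, not from the study.

```python
# Sketch of the competition scoring rule: each radiograph carries 14 binary
# labels (normal + 13 abnormalities); a reader starts at 14 points per image
# and loses 1 point for every label answered incorrectly.

def score_image(truth: list[bool], answers: list[bool]) -> int:
    """Return the reader's score for one radiograph (0..14)."""
    assert len(truth) == len(answers) == 14
    wrong = sum(t != a for t, a in zip(truth, answers))
    return 14 - wrong

def score_reader(images: list[tuple[list[bool], list[bool]]]) -> int:
    """Total score over a set of radiographs (max 14 per image)."""
    return sum(score_image(truth, answers) for truth, answers in images)
```

Under this rule a perfect reader scores 14 × 50 = 700 on a 50-image set, which is consistent with the reported group means of roughly 600 out of a possible 700.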
Affiliation(s)
- Lili Guo
- Department of Radiology, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huai'an, 223300, China.
- Changsheng Zhou
- Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China
- Jingxu Xu
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, 100080, China
- Chencui Huang
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, 100080, China
- Yizhou Yu
- Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, 100080, China
- Guangming Lu
- Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China.
3
Liang S, Xu X, Yang Z, Du Q, Zhou L, Shao J, Guo J, Ying B, Li W, Wang C. Deep learning for precise diagnosis and subtype triage of drug-resistant tuberculosis on chest computed tomography. MedComm (Beijing) 2024; 5:e487. PMID: 38469547; PMCID: PMC10925488; DOI: 10.1002/mco2.487.
Abstract
Deep learning, which transforms input data into target predictions through intricate network structures, has inspired novel exploration of automated diagnosis based on medical images. The distinct morphological characteristics of chest abnormalities between drug-resistant tuberculosis (DR-TB) and drug-sensitive tuberculosis (DS-TB) on chest computed tomography (CT) are of potential value in differential diagnosis, which is challenging in the clinic. Hence, based on 1176 chest CT volumes from an equal number of patients with tuberculosis (TB), we presented a deep learning-based system for TB drug resistance identification and subtype classification (DeepTB), which could automatically diagnose DR-TB and classify crucial subtypes, including rifampicin-resistant, multidrug-resistant, and extensively drug-resistant tuberculosis. Moreover, chest lesions were manually annotated to strengthen the model's ability to assist radiologists in image interpretation, and a Circos plot revealed the relationship between chest abnormalities and specific types of DR-TB. Finally, DeepTB achieved an area under the curve (AUC) of up to 0.930 for thoracic abnormality detection and 0.943 for DR-TB diagnosis. Notably, the system demonstrated instructive value in DR-TB subtype classification, with AUCs ranging from 0.880 to 0.928. Meanwhile, class activation maps were generated to express a human-understandable visual concept. Together, with this prominent performance, DeepTB could be impactful in clinical decision-making for DR-TB.
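The AUC figures quoted in this abstract can be computed from predicted scores and binary labels via the rank (Mann-Whitney) formulation; the sketch below uses toy data, not DeepTB's outputs.

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank formulation:
    the probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case, counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` evaluates 4 positive-negative pairs, of which 3 are correctly ordered, giving 0.75.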
Affiliation(s)
- Shufan Liang
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Targeted Tracer Research and Development Laboratory, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Xiuyuan Xu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Zhe Yang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Qiuyu Du
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Lingyu Zhou
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Jun Shao
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Targeted Tracer Research and Development Laboratory, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Jixiang Guo
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Binwu Ying
- Department of Laboratory Medicine, West China Hospital, Sichuan University, Chengdu, China
- Weimin Li
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Targeted Tracer Research and Development Laboratory, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Chengdi Wang
- Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Targeted Tracer Research and Development Laboratory, Med-X Center for Manufacturing, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
4
Hanneman K, Playford D, Dey D, van Assen M, Mastrodicasa D, Cook TS, Gichoya JW, Williamson EE, Rubin GD. Value Creation Through Artificial Intelligence and Cardiovascular Imaging: A Scientific Statement From the American Heart Association. Circulation 2024; 149:e296-e311. PMID: 38193315; DOI: 10.1161/cir.0000000000001202.
Abstract
Multiple applications for machine learning and artificial intelligence (AI) in cardiovascular imaging are being proposed and developed. However, the processes involved in implementing AI in cardiovascular imaging are highly diverse, varying by imaging modality, patient subtype, features to be extracted and analyzed, and clinical application. This article establishes a framework that defines value from an organizational perspective, followed by value chain analysis to identify the activities in which AI might produce the greatest incremental value creation. The various perspectives that should be considered are highlighted, including clinicians, imagers, hospitals, patients, and payers. Integrating the perspectives of all health care stakeholders is critical for creating value and ensuring the successful deployment of AI tools in a real-world setting. Different AI tools are summarized, along with the unique aspects of AI applications to various cardiac imaging modalities, including cardiac computed tomography, magnetic resonance imaging, and positron emission tomography. AI is applicable and has the potential to add value to cardiovascular imaging at every step along the patient journey, from selecting the more appropriate test to optimizing image acquisition and analysis, interpreting the results for classification and diagnosis, and predicting the risk for major adverse cardiac events.
5
Weng WH, Sellergen A, Kiraly AP, D'Amour A, Park J, Pilgrim R, Pfohl S, Lau C, Natarajan V, Azizi S, Karthikesalingam A, Cole-Lewis H, Matias Y, Corrado GS, Webster DR, Shetty S, Prabhakara S, Eswaran K, Celi LAG, Liu Y. An intentional approach to managing bias in general purpose embedding models. Lancet Digit Health 2024; 6:e126-e130. PMID: 38278614; DOI: 10.1016/s2589-7500(23)00227-3.
Abstract
Advances in machine learning for health care have brought concerns about bias from the research community; specifically, the introduction, perpetuation, or exacerbation of care disparities. Reinforcing these concerns is the finding that medical images often reveal signals about sensitive attributes in ways that are hard to pinpoint by both algorithms and people. This finding raises a question about how to best design general purpose pretrained embeddings (GPPEs, defined as embeddings meant to support a broad array of use cases) for building downstream models that are free from particular types of bias. The downstream model should be carefully evaluated for bias, and audited and improved as appropriate. However, in our view, well intentioned attempts to prevent the upstream components-GPPEs-from learning sensitive attributes can have unintended consequences on the downstream models. Despite producing a veneer of technical neutrality, the resultant end-to-end system might still be biased or poorly performing. We present reasons, by building on previously published data, to support the reasoning that GPPEs should ideally contain as much information as the original data contain, and highlight the perils of trying to remove sensitive attributes from a GPPE. We also emphasise that downstream prediction models trained for specific tasks and settings, whether developed using GPPEs or not, should be carefully designed and evaluated to avoid bias that makes models vulnerable to issues such as distributional shift. These evaluations should be done by a diverse team, including social scientists, on a diverse cohort representing the full breadth of the patient population for which the final model is intended.
Affiliation(s)
- Leo A G Celi
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Yun Liu
- Google, Mountain View, CA, USA.
6
Hoffer O, Brzezinski RY, Ganim A, Shalom P, Ovadia-Blechman Z, Ben-Baruch L, Lewis N, Peled R, Shimon C, Naftali-Shani N, Katz E, Zimmer Y, Rabin N. Smartphone-based detection of COVID-19 and associated pneumonia using thermal imaging and a transfer learning algorithm. J Biophotonics 2024:e202300486. PMID: 38253344; DOI: 10.1002/jbio.202300486.
Abstract
COVID-19-related pneumonia is typically diagnosed using chest x-ray or computed tomography images. However, these techniques can only be used in hospitals. In contrast, thermal cameras are portable, inexpensive devices that can be connected to smartphones. Thus, they can be used to detect and monitor medical conditions outside hospitals. Herein, a smartphone-based application using thermal images of a human back was developed for COVID-19 detection. Image analysis using a deep learning algorithm revealed a sensitivity and specificity of 88.7% and 92.3%, respectively. The findings support the future use of noninvasive thermal imaging in primary screening for COVID-19 and associated pneumonia.
Affiliation(s)
- Oshrit Hoffer
- School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Rafael Y Brzezinski
- Neufeld Cardiac Research Institute, Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Tamman Cardiovascular Research Institute, Leviev Heart Center, Sheba Medical Center Tel Hashomer, Ramat Gan, Israel
- Internal Medicine "C" and "E", Tel Aviv Medical Center, Tel Aviv, Israel
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Adam Ganim
- School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Perry Shalom
- School of Software Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Zehava Ovadia-Blechman
- School of Medical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Lital Ben-Baruch
- School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Nir Lewis
- Neufeld Cardiac Research Institute, Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Tamman Cardiovascular Research Institute, Leviev Heart Center, Sheba Medical Center Tel Hashomer, Ramat Gan, Israel
- Racheli Peled
- Neufeld Cardiac Research Institute, Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Tamman Cardiovascular Research Institute, Leviev Heart Center, Sheba Medical Center Tel Hashomer, Ramat Gan, Israel
- Carmi Shimon
- School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Nili Naftali-Shani
- Neufeld Cardiac Research Institute, Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Tamman Cardiovascular Research Institute, Leviev Heart Center, Sheba Medical Center Tel Hashomer, Ramat Gan, Israel
- Eyal Katz
- School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Yair Zimmer
- School of Medical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Neta Rabin
- Department of Industrial Engineering, Tel-Aviv University, Tel Aviv, Israel
7
Dohál M, Porvazník I, Solovič I, Mokrý J. Advancing tuberculosis management: the role of predictive, preventive, and personalized medicine. Front Microbiol 2023; 14:1225438. PMID: 37860132; PMCID: PMC10582268; DOI: 10.3389/fmicb.2023.1225438.
Abstract
Tuberculosis is a major global health issue, with approximately 10 million people falling ill and 1.4 million dying yearly. One of the most significant challenges to public health is the emergence of drug-resistant tuberculosis. For the last half-century, treating tuberculosis has adhered to a uniform management strategy in most patients. However, treatment ineffectiveness in some individuals with pulmonary tuberculosis presents a major challenge to the global tuberculosis control initiative. Unfavorable outcomes of tuberculosis treatment (including mortality, treatment failure, loss of follow-up, and unevaluated cases) may result in increased transmission of tuberculosis and the emergence of drug-resistant strains. Treatment failure may occur due to drug-resistant strains, non-adherence to medication, inadequate absorption of drugs, or low-quality healthcare. Identifying the underlying cause and adjusting the treatment accordingly to address treatment failure is important. This is where approaches such as artificial intelligence, genetic screening, and whole genome sequencing can play a critical role. In this review, we suggest a set of particular clinical applications of these approaches, which might have the potential to influence decisions regarding the clinical management of tuberculosis patients.
Affiliation(s)
- Matúš Dohál
- Biomedical Centre Martin, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava, Martin, Slovakia
- Igor Porvazník
- National Institute of Tuberculosis, Lung Diseases and Thoracic Surgery, Vyšné Hágy, Slovakia
- Faculty of Health, Catholic University in Ružomberok, Ružomberok, Slovakia
- Ivan Solovič
- National Institute of Tuberculosis, Lung Diseases and Thoracic Surgery, Vyšné Hágy, Slovakia
- Faculty of Health, Catholic University in Ružomberok, Ružomberok, Slovakia
- Juraj Mokrý
- Department of Pharmacology, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava, Martin, Slovakia
8
Park HJ, Kim SH, Choi JY, Cha D. Human-machine cooperation meta-model for clinical diagnosis by adaptation to human expert's diagnostic characteristics. Sci Rep 2023; 13:16204. PMID: 37758800; PMCID: PMC10533492; DOI: 10.1038/s41598-023-43291-8.
Abstract
Artificial intelligence (AI) using deep learning approaches the capabilities of human experts in medical image diagnosis. However, due to liability issues in medical decisions, AI is often relegated to an assistant role. Based on this responsibility constraint, the effective use of AI to assist human intelligence in real-world clinics remains a challenge. Given the significant inter-individual variations in clinical decisions among physicians based on their expertise, AI needs to adapt to individual experts, complementing weaknesses and enhancing strengths. For this adaptation, AI should not only acquire domain knowledge but also understand the specific human experts it assists. This study introduces a meta-model for human-machine cooperation that first evaluates each expert's class-specific diagnostic tendencies using conditional probability, based on which the meta-model adjusts the AI's predictions. This meta-model was applied to ear disease diagnosis using otoendoscopy, highlighting improved performance when incorporating individual diagnostic characteristics, even with limited evaluation data. The highest accuracy was achieved by combining each expert's conditional probabilities with machine classification probability, using optimal weights specific to each individual's overall classification accuracy. This tailored model aims to mitigate potential misjudgments due to psychological effects caused by machine suggestions and to capitalize on the unique expertise of individual clinicians.
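As a rough sketch of the kind of combination this abstract describes, blending an expert's class-conditional reliability with the machine's class probabilities under a per-expert weight, the following is an illustrative assumption, not the paper's exact formulation; the function name, weighting scheme, and numbers are ours.

```python
def combine(expert_cond_probs, machine_probs, w):
    """Blend an expert's class-conditional reliability with the machine's
    predicted class distribution.

    expert_cond_probs: estimated probabilities of each true class given the
        expert's stated answer (from held-out evaluation data).
    machine_probs: the model's predicted class probabilities for the case.
    w: weight on the expert (e.g., derived from overall accuracy).
    Returns a renormalized, blended class distribution."""
    assert len(expert_cond_probs) == len(machine_probs)
    mixed = [w * e + (1 - w) * m
             for e, m in zip(expert_cond_probs, machine_probs)]
    total = sum(mixed)
    return [x / total for x in mixed]
```

With equal weight, an expert who is usually right when calling class 0 can pull the blended prediction toward class 0 even when the machine mildly favors another class.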
Affiliation(s)
- Hae-Jeong Park
- Department of Nuclear Medicine, Department of Psychiatry, Graduate School of Medical Science, Brain Korea 21 Project, Yonsei University College of Medicine, Seoul, South Korea.
- Department of Cognitive Science, Yonsei University, Seoul, Republic of Korea.
- Center for Systems and Translational Brain Sciences, Institute of Human Complexity and Systems Science, Yonsei University, 50-1, Yonsei-ro, Sinchon-dong, Seodaemun-gu, Seoul, 03722, Republic of Korea.
- Sung Huhn Kim
- Department of Otorhinolaryngology, Yonsei University College of Medicine, Seoul, South Korea
- Jae Young Choi
- Department of Otorhinolaryngology, Yonsei University College of Medicine, Seoul, South Korea
- Dongchul Cha
- Department of Otorhinolaryngology, Yonsei University College of Medicine, Seoul, South Korea.
- Center for Innovative Medicine, Healthcare Lab, NAVER Corporation, 95, Jeongjail-ro, Bundang-gu, Seongnam-si, Gyeonggi-do, 13561, Republic of Korea.
- Healthcare Lab, Naver Cloud Corporation, Seongnam-si, Republic of Korea.
9
Devasia J, Goswami H, Lakshminarayanan S, Rajaram M, Adithan S. Observer Performance Evaluation of a Deep Learning Model for Multilabel Classification of Active Tuberculosis Lung Zone-Wise Manifestations. Cureus 2023; 15:e44954. PMID: 37818499; PMCID: PMC10561790; DOI: 10.7759/cureus.44954.
Abstract
Background Chest X-rays (CXRs) are widely used for cost-effective screening of active pulmonary tuberculosis despite their limitations in sensitivity and specificity when interpreted by clinicians or radiologists. To address this issue, computer-aided detection (CAD) algorithms, particularly convolution-based deep learning architectures, have been developed to automate the analysis of radiography imaging. Deep learning algorithms have shown promise in accurately classifying lung abnormalities using chest X-ray images. In this study, we utilized the EfficientNet B4 model, pre-trained on ImageNet with 380x380 input dimensions, using its weights for transfer learning, and modified it with a series of components including global average pooling, batch normalization, dropout, and a sigmoid-activated classifier covering 12 image-wise and 44 segment-wise lung zone evaluation classes. Objectives To assess the clinical usefulness of our previously created EfficientNet B4 model in identifying lung zone-specific abnormalities related to active tuberculosis through an observer performance test involving a skilled clinician operating in tuberculosis-specific environments. Methods The ground truth was established by a radiologist who examined all sample CXRs to identify lung zone-wise abnormalities. An expert clinician working in tuberculosis-specific settings independently reviewed the same CXRs, blinded to the ground truth. Simultaneously, the CXRs were classified using the EfficientNet B4 model. The clinician's assessments were then compared with the model's predictions, and the agreement between the two was measured using the kappa coefficient, evaluating the model's performance in classifying active tuberculosis manifestations across lung zones. Results Strong agreement (kappa ≥0.81) was seen for the lung zone-wise abnormalities of pneumothorax, mediastinal shift, emphysema, fibrosis, calcifications, pleural effusion, and cavity, and substantial agreement (kappa 0.61-0.80) for cavity, mediastinal shift, volume loss, and collapsed lungs; the kappa score for lung zone-wise abnormalities was moderate (0.41-0.60) in 39% of cases. In image-wise agreement, the EfficientNet B4 model's performance ranged from moderate to almost perfect across categories, while in lung zone-wise agreement it varied from fair to almost perfect. Overall, the results show strong agreement between the EfficientNet B4 model and the human reader in detecting lung zone-wise and image-wise manifestations. Conclusion The EfficientNet B4 model's ability to detect these abnormalities can aid clinicians in primary care settings in screening and triaging tuberculosis where resources are constrained or overburdened.
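The kappa coefficient behind these agreement levels measures agreement beyond chance; for two binary raters it can be computed as follows.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters: (po - pe) / (1 - pe), where
    po is the observed agreement rate and pe is the agreement rate
    expected by chance from each rater's marginal positive rate."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    pa1, pb1 = sum(a) / n, sum(b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (po - pe) / (1 - pe)
```

Identical ratings give kappa = 1.0, while agreement no better than chance gives kappa = 0, which is why bands such as 0.41-0.60 (moderate) and ≥0.81 (strong/almost perfect) are used above.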
Affiliation(s)
- James Devasia
- Preventive Medicine, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, IND
- Subitha Lakshminarayanan
- Preventive Medicine, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, IND
- Manju Rajaram
- Pulmonary Medicine, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, IND
- Subathra Adithan
- Radiodiagnosis, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, IND
10
Huang C, Wang W, Zhang X, Wang SH, Zhang YD. Tuberculosis Diagnosis Using Deep Transferred EfficientNet. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:2639-2646. PMID: 35976826; DOI: 10.1109/tcbb.2022.3199572.
Abstract
Tuberculosis is a deadly disease, and more than half of tuberculosis deaths occur in countries and regions with relatively poor health care resources. Fortunately, the disease is curable, and early diagnosis and medication can go a long way toward curing TB patients. Unfortunately, traditional methods of TB diagnosis rely on specialist doctors, who are lacking in areas with high TB mortality rates. Diagnostic methods based on artificial intelligence technology are one solution to this problem. We propose a Deep Transferred EfficientNet with SVM (DTE-SVM), which replaces the pre-trained EfficientNet classification layer with an SVM classifier and achieves auspicious performance on a small dataset. After ten runs of 10-fold cross-validation, the DTE-SVM achieved a sensitivity of 93.89 ± 1.96%, a specificity of 95.35 ± 1.31%, a precision of 95.30 ± 1.24%, an accuracy of 94.62 ± 1.00%, and an F1-score of 94.62 ± 1.00%. In addition, we conducted ablation studies on the effect of the SVM classifier on model performance and briefly discuss the results.
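The reported metrics follow directly from each fold's confusion counts, with the ± figures arising from aggregation over the repeated folds; a minimal sketch, in which the aggregation style (mean ± sample standard deviation) is our assumption about how such figures are typically produced:

```python
from statistics import mean, stdev

def fold_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from one fold's confusion
    counts: true/false positives and true/false negatives."""
    sens = tp / (tp + fn)                  # sensitivity (recall)
    spec = tn / (tn + fp)                  # specificity
    prec = tp / (tp + fp)                  # precision
    acc = (tp + tn) / (tp + fp + tn + fn)  # accuracy
    f1 = 2 * prec * sens / (prec + sens)   # F1-score
    return sens, spec, prec, acc, f1

def mean_sd(values):
    """Aggregate per-fold values as (mean, sample SD), the form behind
    figures such as 93.89 ± 1.96."""
    return mean(values), stdev(values)
```

For instance, a fold with 90 true positives, 10 false positives, 90 true negatives, and 10 false negatives yields 0.9 for every metric.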
11
Rajpurkar P, Lungren MP. The Current and Future State of AI Interpretation of Medical Images. N Engl J Med 2023; 388:1981-1990. PMID: 37224199; DOI: 10.1056/nejmra2301725.
Affiliation(s)
- Pranav Rajpurkar
- From the Department of Biomedical Informatics, Harvard Medical School, Boston (P.R.); the Center for Artificial Intelligence in Medicine and Imaging, Stanford University, Stanford, and the Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco - both in California (M.P.L.); and Microsoft, Redmond, Washington (M.P.L.)
- Matthew P Lungren
- From the Department of Biomedical Informatics, Harvard Medical School, Boston (P.R.); the Center for Artificial Intelligence in Medicine and Imaging, Stanford University, Stanford, and the Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco - both in California (M.P.L.); and Microsoft, Redmond, Washington (M.P.L.)
12
Farzaneh N, Ansari S, Lee E, Ward KR, Sjoding MW. Collaborative strategies for deploying artificial intelligence to complement physician diagnoses of acute respiratory distress syndrome. NPJ Digit Med 2023; 6:62. PMID: 37031252; PMCID: PMC10082784; DOI: 10.1038/s41746-023-00797-9.
Abstract
There is a growing gap between studies describing the capabilities of artificial intelligence (AI) diagnostic systems using deep learning versus efforts to investigate how or when to integrate AI systems into a real-world clinical practice to support physicians and improve diagnosis. To address this gap, we investigate four potential strategies for AI model deployment and physician collaboration to determine their potential impact on diagnostic accuracy. As a case study, we examine an AI model trained to identify findings of the acute respiratory distress syndrome (ARDS) on chest X-ray images. While this model outperforms physicians at identifying findings of ARDS, there are several reasons why fully automated ARDS detection may not be optimal nor feasible in practice. Among several collaboration strategies tested, we find that if the AI model first reviews the chest X-ray and defers to a physician if it is uncertain, this strategy achieves a higher diagnostic accuracy (0.869, 95% CI 0.835-0.903) compared to a strategy where a physician reviews a chest X-ray first and defers to an AI model if uncertain (0.824, 95% CI 0.781-0.862), or strategies where the physician reviews the chest X-ray alone (0.808, 95% CI 0.767-0.85) or the AI model reviews the chest X-ray alone (0.847, 95% CI 0.806-0.887). If the AI model reviews a chest X-ray first, this allows the AI system to make decisions for up to 79% of cases, letting physicians focus on the most challenging subsets of chest X-rays.
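The best-performing strategy above, in which the AI reads first and defers to the physician only when uncertain, amounts to a confidence-threshold gate; the sketch below is illustrative, and the threshold value and confidence definition are our assumptions rather than the study's.

```python
def ai_first_deferral(ai_prob, physician_label, threshold=0.8):
    """AI reviews the chest X-ray first. If its confidence in either
    class clears the threshold, its decision stands; otherwise the case
    is deferred to the physician. `ai_prob` is the model's P(ARDS)."""
    confidence = max(ai_prob, 1 - ai_prob)
    if confidence >= threshold:
        return int(ai_prob >= 0.5), "ai"   # AI decides
    return physician_label, "physician"     # deferred case
```

With a well-chosen threshold this kind of gate lets the model settle the easy majority of cases (up to 79% in the study) while routing the ambiguous remainder to the physician.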
Affiliation(s)
- Negar Farzaneh
- The Max Harry Weil Institute for Critical Care Research & Innovation, University of Michigan, Ann Arbor, MI, USA.
- Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI, USA.
| | - Sardar Ansari
- The Max Harry Weil Institute for Critical Care Research & Innovation, University of Michigan, Ann Arbor, MI, USA
- Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
| | - Elizabeth Lee
- Department of Radiology, University of Michigan, Ann Arbor, MI, USA
| | - Kevin R Ward
- The Max Harry Weil Institute for Critical Care Research & Innovation, University of Michigan, Ann Arbor, MI, USA
- Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
| | - Michael W Sjoding
- The Max Harry Weil Institute for Critical Care Research & Innovation, University of Michigan, Ann Arbor, MI, USA
- Department of Internal Medicine, Division of Pulmonary and Critical Care, University of Michigan Medical School, Ann Arbor, MI, USA
| |
Collapse
|
13
|
Plesner LL, Müller FC, Nybing JD, Laustrup LC, Rasmussen F, Nielsen OW, Boesen M, Andersen MB. Autonomous Chest Radiograph Reporting Using AI: Estimation of Clinical Impact. Radiology 2023; 307:e222268. [PMID: 36880947 DOI: 10.1148/radiol.222268] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/08/2023]
Abstract
Background Automated interpretation of normal chest radiographs could alleviate the workload of radiologists. However, the performance of such an artificial intelligence (AI) tool compared with clinical radiology reports has not been established. Purpose To perform an external evaluation of a commercially available AI tool for (a) the number of chest radiographs autonomously reported, (b) the sensitivity for AI detection of abnormal chest radiographs, and (c) the performance of AI compared with that of the clinical radiology reports. Materials and Methods In this retrospective study, consecutive posteroanterior chest radiographs from adult patients in four hospitals in the capital region of Denmark were obtained in January 2020, including images from emergency department patients, in-hospital patients, and outpatients. Three thoracic radiologists labeled chest radiographs in a reference standard based on chest radiograph findings into the following categories: critical, other remarkable, unremarkable, or normal (no abnormalities). AI classified chest radiographs as high confidence normal (normal) or not high confidence normal (abnormal). Results A total of 1529 patients were included for analysis (median age, 69 years [IQR, 55-69 years]; 776 women), with 1100 (72%) classified by the reference standard as having abnormal radiographs, 617 (40%) as having critical abnormal radiographs, and 429 (28%) as having normal radiographs. For comparison, clinical radiology reports were classified based on the text and insufficient reports excluded (n = 22). The sensitivity of AI was 99.1% (95% CI: 98.3, 99.6; 1090 of 1100 patients) for abnormal radiographs and 99.8% (95% CI: 99.1, 99.9; 616 of 617 patients) for critical radiographs. Corresponding sensitivities for radiologist reports were 72.3% (95% CI: 69.5, 74.9; 779 of 1078 patients) and 93.5% (95% CI: 91.2, 95.3; 558 of 597 patients), respectively. 
Specificity of AI, and hence the potential autonomous reporting rate, was 28.0% of all normal posteroanterior chest radiographs (95% CI: 23.8, 32.5; 120 of 429 patients), or 7.8% (120 of 1529 patients) of all posteroanterior chest radiographs. Conclusion Of all normal posteroanterior chest radiographs, 28% were autonomously reported by AI with a sensitivity for any abnormalities higher than 99%. This corresponded to 7.8% of the entire posteroanterior chest radiograph production. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Park in this issue.
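The headline sensitivity figures above are easy to recompute from the reported counts. A quick sketch using Wilson score intervals follows; the paper does not state which confidence-interval method it used, so the exact bounds may differ slightly from those reported.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# AI sensitivity for abnormal radiographs: 1090 of 1100 detected
sens = 1090 / 1100
lo, hi = wilson_ci(1090, 1100)
print(f"sensitivity {sens:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```

The point estimate reproduces the reported 99.1%, and the interval lands close to the published 98.3-99.6%.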
Collapse
Affiliation(s)
- Louis L Plesner
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls vej 1, 2730 Herlev, Copenhagen, Denmark (L.L.P., F.C.M., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital region of Denmark (L.L.P., F.C.M., J.D.N., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (J.D.N., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
| | - Felix C Müller
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls vej 1, 2730 Herlev, Copenhagen, Denmark (L.L.P., F.C.M., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital region of Denmark (L.L.P., F.C.M., J.D.N., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (J.D.N., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
| | - Janus D Nybing
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls vej 1, 2730 Herlev, Copenhagen, Denmark (L.L.P., F.C.M., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital region of Denmark (L.L.P., F.C.M., J.D.N., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (J.D.N., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
| | - Lene C Laustrup
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls vej 1, 2730 Herlev, Copenhagen, Denmark (L.L.P., F.C.M., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital region of Denmark (L.L.P., F.C.M., J.D.N., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (J.D.N., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
| | - Finn Rasmussen
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls vej 1, 2730 Herlev, Copenhagen, Denmark (L.L.P., F.C.M., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital region of Denmark (L.L.P., F.C.M., J.D.N., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (J.D.N., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
| | - Olav W Nielsen
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls vej 1, 2730 Herlev, Copenhagen, Denmark (L.L.P., F.C.M., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital region of Denmark (L.L.P., F.C.M., J.D.N., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (J.D.N., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
| | - Mikael Boesen
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls vej 1, 2730 Herlev, Copenhagen, Denmark (L.L.P., F.C.M., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital region of Denmark (L.L.P., F.C.M., J.D.N., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (J.D.N., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
| | - Michael B Andersen
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls vej 1, 2730 Herlev, Copenhagen, Denmark (L.L.P., F.C.M., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., O.W.N., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Capital region of Denmark (L.L.P., F.C.M., J.D.N., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (J.D.N., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
| |
Collapse
|
14
|
Distinguishing nontuberculous mycobacterial lung disease and Mycobacterium tuberculosis lung disease on X-ray images using deep transfer learning. BMC Infect Dis 2023; 23:32. [PMID: 36658559 PMCID: PMC9854086 DOI: 10.1186/s12879-023-07996-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Accepted: 01/09/2023] [Indexed: 01/20/2023] Open
Abstract
BACKGROUND Nontuberculous mycobacterial lung disease (NTM-LD) and Mycobacterium tuberculosis lung disease (MTB-LD) have similar clinical characteristics; as a result, NTM-LD is sometimes misdiagnosed as MTB-LD and treated incorrectly. To address this problem, we aimed to distinguish the two diseases on chest X-ray images using deep learning, which has recently found use in many fields. METHODS We retrospectively collected chest X-ray images from 3314 patients infected with Mycobacterium tuberculosis (MTB) or nontuberculous mycobacteria (NTM). After selecting data according to the diagnostic criteria, we conducted a series of experiments to identify the optimal deep learning model and compared its performance with that of a radiologist. The model was additionally validated on newly collected MTB-LD and NTM-LD patient data. RESULTS Among the implemented deep learning models, an ensemble combining EfficientNet B4 and ResNet 50 performed best on the test data and outperformed the radiologist on all evaluation metrics. On an additional validation dataset of newly collected patients, the ensemble model's accuracy was 0.85 for MTB-LD and 0.78 for NTM-LD. CONCLUSIONS Although previous studies have found it difficult to distinguish MTB-LD from NTM-LD on chest X-ray images, we successfully separated the two diseases using deep learning methods. This approach has the potential to aid clinical decisions when the two diseases need to be differentiated.
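The winning model here is an ensemble of two networks. The simplest form of such an ensemble averages the two models' predicted probabilities (soft voting), as sketched below; this is a generic illustration under assumed equal weights and a 0.5 threshold, not the authors' exact fusion scheme.

```python
def ensemble_predict(prob_effnet, prob_resnet, threshold=0.5):
    """Soft-voting ensemble: average the MTB-LD probabilities from two
    models. Equal weights and the 0.5 threshold are assumptions."""
    p = (prob_effnet + prob_resnet) / 2
    return ("MTB-LD" if p >= threshold else "NTM-LD"), p

# Two models disagree mildly; the ensemble smooths the decision
label, p = ensemble_predict(0.62, 0.48)
print(label, round(p, 2))  # MTB-LD 0.55
```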
Collapse
|
15
|
Deep learning classification of active tuberculosis lung zones wise manifestations using chest X-rays: a multi label approach. Sci Rep 2023; 13:887. [PMID: 36650270 PMCID: PMC9845381 DOI: 10.1038/s41598-023-28079-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Accepted: 01/12/2023] [Indexed: 01/19/2023] Open
Abstract
Chest X-rays are the most economically viable diagnostic imaging test for active pulmonary tuberculosis screening, despite their high sensitivity but low specificity when interpreted by clinicians or radiologists. Computer-aided detection (CAD) algorithms, especially convolution-based deep learning architectures, have been proposed to automate radiographic interpretation, and deep learning has successfully classified various lung abnormalities on chest X-rays. We fine-tuned, validated and tested an EfficientNetB4 architecture, using transfer learning in a multilabel approach, to detect lung-zone-wise and image-wise manifestations of active pulmonary tuberculosis on chest X-rays. We used the area under the receiver operating characteristic curve (AUC), sensitivity and specificity, each with a 95% confidence interval, as model evaluation metrics. We also used Gradient-weighted Class Activation Mapping (Grad-CAM), a post hoc attention method for visualising convolutional neural networks (CNNs), to investigate the model, visualise tuberculosis abnormalities and discuss them from a radiological perspective. The trained EfficientNetB4 network achieved high AUC, sensitivity and specificity for various pulmonary tuberculosis manifestations in both an internal test set and an external test set from a different geographical region. The Grad-CAM visualisations and their ability to localise abnormalities can aid clinicians in primary care settings in screening and triaging tuberculosis where resources are constrained or overburdened.
Collapse
|
16
|
Akhter Y, Singh R, Vatsa M. AI-based radiodiagnosis using chest X-rays: A review. Front Big Data 2023; 6:1120989. [PMID: 37091458 PMCID: PMC10116151 DOI: 10.3389/fdata.2023.1120989] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Accepted: 01/06/2023] [Indexed: 04/25/2023] Open
Abstract
The chest radiograph, or chest X-ray (CXR), is a common, fast, non-invasive and relatively cheap radiological examination in medicine. CXRs can aid in diagnosing many lung ailments, such as pneumonia, tuberculosis, pneumoconiosis, COVID-19 and lung cancer. Apart from other radiological examinations, 2 billion CXRs are performed worldwide every year. However, the workforce available to handle this workload in hospitals is limited, particularly in developing and low-income nations. Recent advances in AI, particularly in computer vision, have drawn attention to solving challenging medical image analysis problems. Healthcare is one of the areas where AI/ML-based assistive screening and diagnostic aids can play a crucial part in social welfare. However, the field faces multiple challenges, such as small sample sizes, data privacy, poor-quality samples, adversarial attacks and, most importantly, the model interpretability needed for reliable machine intelligence. This paper provides a structured review of CXR-based analysis for different tasks and lung diseases and, in particular, the challenges faced by AI/ML-based systems for diagnosis. Further, we provide an overview of existing datasets, evaluation metrics for different tasks and patents issued. We also present key challenges and open problems in this research domain.
Collapse
|
17
|
Han D, Chen Y, Li X, Li W, Zhang X, He T, Yu Y, Dou Y, Duan H, Yu N. Development and validation of a 3D-convolutional neural network model based on chest CT for differentiating active pulmonary tuberculosis from community-acquired pneumonia. LA RADIOLOGIA MEDICA 2023; 128:68-80. [PMID: 36574111 PMCID: PMC9793822 DOI: 10.1007/s11547-022-01580-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Accepted: 12/13/2022] [Indexed: 12/29/2022]
Abstract
PURPOSE To develop and validate a 3D convolutional neural network (3D-CNN) model based on chest CT for differentiating active pulmonary tuberculosis (APTB) from community-acquired pneumonia (CAP). MATERIALS AND METHODS Chest CT images of APTB and CAP patients diagnosed at two imaging centers (n = 432 in center A and n = 61 in center B) were collected retrospectively. The data from center A were divided into training, validation and internal test sets, and the data from center B were used as an external test set. A 3D-CNN was built using the Keras deep learning framework. After training, the model with the highest accuracy on the validation set was selected as the optimal model and applied to the two test sets in centers A and B. The two test sets were also diagnosed independently by two radiologists, and the optimal 3D-CNN model was compared with the two radiologists in terms of discrimination, calibration and net benefit in differentiating APTB from CAP on chest CT images. RESULTS The accuracy of the optimal 3D-CNN model was 0.989 on the internal test set and 0.934 on the external test set. The area-under-the-curve values of the 3D-CNN model in the two test sets were significantly higher than those of the two radiologists (all P < 0.05), with good calibration. Decision curve analysis showed that the optimal 3D-CNN model yielded significantly higher net benefit for patients than the two radiologists. CONCLUSIONS The 3D-CNN achieved high classification performance in differentiating APTB from CAP on chest CT images, providing a new automatic and rapid diagnostic method for distinguishing the two diseases.
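Decision curve analysis, used in this study to compare net benefit, rests on a single formula: net benefit = TP/N − (FP/N) · pt/(1 − pt) at a chosen threshold probability pt. A minimal sketch with made-up confusion counts (not the study's data) follows.

```python
def net_benefit(tp, fp, n, pt):
    """Net benefit at threshold probability pt (decision curve analysis):
    true positives per patient minus false positives per patient,
    weighted by the odds of the threshold probability."""
    return tp / n - (fp / n) * pt / (1 - pt)

# Hypothetical counts for a model at a 20% threshold probability,
# compared with the "treat everyone" strategy (prevalence 90/200)
nb_model = net_benefit(tp=80, fp=10, n=200, pt=0.20)
nb_treat_all = net_benefit(tp=90, fp=110, n=200, pt=0.20)
print(nb_model, nb_treat_all)  # the model's net benefit is higher
```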
Collapse
Affiliation(s)
- Dong Han
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine, Weiyang West Rd, Xianyang, 712000 China
| | - Yibing Chen
- School of Information Science & Technology, Northwest University, Xi’an, 710127 Shaanxi China
| | - Xuechao Li
- Clinical Research Center, Affiliated Hospital of Shaanxi University of Chinese Medicine, Xianyang, 712000 China
| | - Wen Li
- Department of Radiology, Baoji Central Hospital, Baoji, 721008 China
| | - Xirong Zhang
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine, Weiyang West Rd, Xianyang, 712000 China ,College of Medical Technology, Shaanxi University of Chinese Medicine, Xianyang, 712000 China
| | - Taiping He
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine, Weiyang West Rd, Xianyang, 712000 China ,College of Medical Technology, Shaanxi University of Chinese Medicine, Xianyang, 712000 China
| | - Yong Yu
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine, Weiyang West Rd, Xianyang, 712000 China ,College of Medical Technology, Shaanxi University of Chinese Medicine, Xianyang, 712000 China
| | - Yuequn Dou
- Respiratory Department, Affiliated Hospital of Shaanxi University of Chinese Medicine, Xianyang, 712000 China
| | - Haifeng Duan
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine, Weiyang West Rd, Xianyang, 712000 China
| | - Nan Yu
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine, Weiyang West Rd, Xianyang, 712000, China.
| |
Collapse
|
18
|
Diagnostic Accuracy of the Artificial Intelligence Methods in Medical Imaging for Pulmonary Tuberculosis: A Systematic Review and Meta-Analysis. J Clin Med 2022; 12:jcm12010303. [PMID: 36615102 PMCID: PMC9820940 DOI: 10.3390/jcm12010303] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Revised: 12/21/2022] [Accepted: 12/24/2022] [Indexed: 01/03/2023] Open
Abstract
Tuberculosis (TB) remains one of the leading causes of death among infectious diseases worldwide. Early screening and diagnosis of pulmonary tuberculosis (PTB) are crucial for TB control and stand to benefit from artificial intelligence. Here, we aimed to evaluate the diagnostic efficacy of a variety of artificial intelligence methods in medical imaging for PTB. We searched MEDLINE and Embase via the OVID platform to identify trials published up to November 2022 that evaluated the effectiveness of artificial-intelligence-based software in medical imaging of patients with PTB. After data extraction, study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. Pooled sensitivity and specificity were estimated using a bivariate random-effects model. In total, 3987 references were initially identified and 61 studies were finally included, covering 124,959 individuals. The pooled sensitivity and specificity were 91% (95% confidence interval (CI), 89-93%) and 65% (54-75%), respectively, in clinical trials, and 94% (89-96%) and 95% (91-97%), respectively, in model-development studies. These findings demonstrate that artificial-intelligence-based software can serve as an accurate tool for diagnosing PTB in medical imaging. However, standardized reporting guidance for AI-specific trials and multicenter clinical trials is urgently needed to truly translate this cutting-edge technology into clinical practice.
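The meta-analysis pools accuracy with a bivariate random-effects model. A much simpler fixed-effect pooling on the logit scale illustrates the basic idea; this toy sketch deliberately ignores between-study heterogeneity and the sensitivity-specificity correlation that the bivariate model captures, and the data are invented.

```python
import math

def pool_logit(props, ns):
    """Inverse-variance fixed-effect pooling of proportions on the
    logit scale (a simplification of bivariate random-effects pooling)."""
    logits, weights = [], []
    for p, n in zip(props, ns):
        var = 1 / (n * p * (1 - p))  # approximate variance of logit(p)
        logits.append(math.log(p / (1 - p)))
        weights.append(1 / var)
    pooled_logit = sum(l * w for l, w in zip(logits, weights)) / sum(weights)
    return 1 / (1 + math.exp(-pooled_logit))  # back-transform to a proportion

# Toy per-study sensitivities and sample sizes
pooled = pool_logit([0.90, 0.93, 0.88], [120, 300, 80])
print(round(pooled, 3))
```

Larger studies get more weight, so the pooled value sits between the study estimates but closest to the biggest study's.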
Collapse
|
19
|
Margineanu I, Louka C, Akkerman O, Stienstra Y, Alffenaar JW. eHealth in TB clinical management. Int J Tuberc Lung Dis 2022; 26:1151-1161. [PMID: 36447317 PMCID: PMC9728950 DOI: 10.5588/ijtld.21.0602] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/05/2022] Open
Abstract
BACKGROUND: The constant expansion of internet and mobile technologies has created new opportunities in the field of eHealth, or the digital delivery of healthcare services. This review aims to examine eHealth and its impact on TB clinical management in order to formulate recommendations for further development. METHODS: A systematic search of articles published up to April 2021 was performed in PubMed and Embase using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses framework. Screening, extraction and quality assessment were performed by two independent researchers. Studies evaluating an internet- and/or mobile-based eHealth intervention with an impact on TB clinical management were included. Outcomes were organised following the five domains described in the WHO "Recommendations on Digital Interventions for Health System Strengthening" guideline. RESULTS: The search strategy yielded 3,873 studies, of which 89 full texts were included. eHealth tended to enhance screening, diagnosis and treatment indicators while being cost-effective and acceptable to users. The main challenges concern hardware malfunction and software misuse. CONCLUSION: This study offers a broad overview of the innovative field of eHealth applications in TB. Studies implementing eHealth solutions consistently reported benefits, but also specific challenges. eHealth is a promising field of research and could enhance the clinical management of TB.
Collapse
Affiliation(s)
- I Margineanu
- Department of Clinical Pharmacy and Pharmacology, University Medical Centrum Groningen, University of Groningen, Groningen, the Netherlands, Iasi Pulmonary Diseases University Hospital, Iasi, Romania
| | - C Louka
- Department of Internal Medicine/Infectious Diseases, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
| | - O Akkerman
- Tuberculosis Center Beatrixoord, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands, Department of Pulmonary Diseases and Tuberculosis, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
| | - Y Stienstra
- Department of Internal Medicine/Infectious Diseases, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands, Department of Clinical Sciences, Liverpool School of Tropical Medicine, Liverpool, UK
| | - J-W Alffenaar
- Department of Clinical Pharmacy and Pharmacology, University Medical Centrum Groningen, University of Groningen, Groningen, the Netherlands, Faculty of Medicine and Health, School of Pharmacy, University of Sydney, Camperdown, NSW, Australia, Westmead Hospital, Sydney, NSW, Australia, Marie Bashir Institute for Infectious Diseases and Biosecurity, University of Sydney, Sydney, NSW, Australia
| |
Collapse
|
20
|
Muto R, Fukuta S, Watanabe T, Shindo Y, Kanemitsu Y, Kajikawa S, Yonezawa T, Inoue T, Ichihashi T, Shiratori Y, Maruyama S. Predicting oxygen requirements in patients with coronavirus disease 2019 using an artificial intelligence-clinician model based on local non-image data. Front Med (Lausanne) 2022; 9:1042067. [PMID: 36530899 PMCID: PMC9748157 DOI: 10.3389/fmed.2022.1042067] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2022] [Accepted: 11/14/2022] [Indexed: 06/12/2024] Open
Abstract
BACKGROUND When facing unprecedented emergencies such as the coronavirus disease 2019 (COVID-19) pandemic, a predictive artificial intelligence (AI) model with a real-time customized design can support clinical decision-making in constantly changing environments. We created models and compared the performance of AI in collaboration with a clinician against AI alone in predicting the need for supplemental oxygen from local, non-image data of patients with COVID-19. MATERIALS AND METHODS In this 50-bed, single-center retrospective cohort study, we enrolled 30 patients with COVID-19 who were aged >60 years on admission and not treated with oxygen therapy between December 1, 2020 and January 4, 2021. The outcome was the requirement for oxygen after admission. RESULTS The model built by AI in collaboration with a clinician predicted the need for oxygen better than the model built by AI alone. A sodium chloride difference >33.5 emerged as a novel indicator of the need for oxygen in patients with COVID-19. To prevent severe COVID-19 in older patients, dehydration compensation may be considered in pre-hospitalization care. CONCLUSION In clinical practice, our approach enables building a better predictive model with prompt clinician feedback, even in new scenarios. It can be applied not only to current and future pandemics but also to other diseases within the healthcare system.
Collapse
Affiliation(s)
- Reiko Muto
- Department of Nephrology, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Department of Internal Medicine, Aichi Prefectural Aichi Hospital, Okazaki, Japan
- Department of Molecular Medicine and Metabolism, Research Institute of Environmental Medicine, Nagoya University, Nagoya, Japan
| | - Shigeki Fukuta
- Artificial Intelligence Laboratory, Fujitsu Limited, Kawasaki, Japan
| | | | - Yuichiro Shindo
- Department of Internal Medicine, Aichi Prefectural Aichi Hospital, Okazaki, Japan
- Department of Respiratory Medicine, Nagoya University Graduate School of Medicine, Nagoya, Japan
| | - Yoshihiro Kanemitsu
- Department of Internal Medicine, Aichi Prefectural Aichi Hospital, Okazaki, Japan
- Department of Respiratory Medicine, Allergy and Clinical Immunology, Nagoya City University Graduate School of Medical Sciences, Nagoya, Japan
| | - Shigehisa Kajikawa
- Department of Internal Medicine, Aichi Prefectural Aichi Hospital, Okazaki, Japan
- Department of Respiratory Medicine and Allergology, Aichi Medical University Hospital, Nagakute, Japan
| | - Toshiyuki Yonezawa
- Department of Internal Medicine, Aichi Prefectural Aichi Hospital, Okazaki, Japan
- Department of Respiratory Medicine and Allergology, Aichi Medical University Hospital, Nagakute, Japan
| | - Takahiro Inoue
- Department of Internal Medicine, Aichi Prefectural Aichi Hospital, Okazaki, Japan
- Department of Respiratory Medicine, Fujita Health University School of Medicine, Toyoake, Japan
| | - Takuji Ichihashi
- Department of Internal Medicine, Aichi Prefectural Aichi Hospital, Okazaki, Japan
| | - Yoshimune Shiratori
- Center for Healthcare Information Technology (C-HiT), Nagoya University, Nagoya, Japan
- Medical IT Center, Nagoya University Hospital, Nagoya, Japan
| | - Shoichi Maruyama
- Department of Nephrology, Nagoya University Graduate School of Medicine, Nagoya, Japan
| |
Collapse
|
21
|
Li J, Zhou L, Zhan Y, Xu H, Zhang C, Shan F, Liu L. How does the artificial intelligence-based image-assisted technique help physicians in diagnosis of pulmonary adenocarcinoma? A randomized controlled experiment of multicenter physicians in China. J Am Med Inform Assoc 2022; 29:2041-2049. [PMID: 36228127 PMCID: PMC9667181 DOI: 10.1093/jamia/ocac179] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2022] [Revised: 08/24/2022] [Accepted: 09/24/2022] [Indexed: 11/14/2022] Open
Abstract
OBJECTIVE Although artificial intelligence (AI) has achieved high accuracy in diagnosing various diseases, its impact on physicians' decision-making performance in clinical practice is uncertain. This study assesses the impact of AI on the diagnostic performance of physicians with differing levels of self-efficacy working under different levels of time pressure. MATERIALS AND METHODS A 2 (independent diagnosis vs AI-assisted diagnosis) × 2 (no time pressure vs 2-minute time limit) randomized controlled experiment of multicenter physicians was conducted. Participants diagnosed 10 pulmonary adenocarcinoma cases, and their diagnostic accuracy, sensitivity, and specificity were evaluated. Data were analyzed using multilevel logistic regression. RESULTS One hundred and four radiologists from 102 hospitals completed the experiment. The results reveal that (1) AI greatly increases physicians' diagnostic accuracy, with or without time pressure; (2) without time pressure, AI significantly improves physicians' diagnostic sensitivity with no significant change in specificity, whereas under time pressure both sensitivity and specificity improve with AI assistance; and (3) without time pressure, physicians with low self-efficacy benefit from AI assistance, improving diagnostic accuracy, while those with high self-efficacy do not, whereas under time pressure physicians with both low and high self-efficacy benefit from AI. DISCUSSION This study is among the first to provide real-world evidence of AI's impact on physicians' decision-making performance, taking into account two boundary factors: clinical time pressure and physicians' self-efficacy. CONCLUSION AI-assisted diagnosis should be prioritized for physicians working under time pressure or with low self-efficacy.
Collapse
Affiliation(s)
- Jiaoyang Li
- School of Business Administration, Faculty of Business Administration, Southwestern University of Finance and Economics, Chengdu 611130, China
| | - Lingxiao Zhou
- Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518060, China
| | - Yi Zhan
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai 201508, China
| | - Haifeng Xu
- Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai 200030, China
| | - Cheng Zhang
- School of Management, Fudan University, Shanghai 200433, China
| | - Fei Shan
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai 201508, China
| | - Lei Liu
- Intelligent Medicine Institute, Fudan University, Shanghai 200030, China
| |
|
22
|
Benchmarking saliency methods for chest X-ray interpretation. Nat Mach Intell 2022. [DOI: 10.1038/s42256-022-00536-x]
Abstract
Saliency methods, which produce heat maps that highlight the areas of the medical image that influence model prediction, are often presented to clinicians as an aid in diagnostic decision-making. However, rigorous investigation of the accuracy and reliability of these strategies is necessary before they are integrated into the clinical setting. In this work, we quantitatively evaluate seven saliency methods, including Grad-CAM, across multiple neural network architectures using two evaluation metrics. We establish the first human benchmark for chest X-ray segmentation in a multilabel classification set-up, and examine under what clinical conditions saliency maps might be more prone to failure in localizing important pathologies compared with a human expert benchmark. We find that (1) while Grad-CAM generally localized pathologies better than the other evaluated saliency methods, all seven performed significantly worse compared with the human benchmark, (2) the gap in localization performance between Grad-CAM and the human benchmark was largest for pathologies that were smaller in size and had shapes that were more complex, and (3) model confidence was positively correlated with Grad-CAM localization performance. Our work demonstrates that several important limitations of saliency methods must be addressed before we can rely on them for deep learning explainability in medical imaging.
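One common localization metric of the kind benchmarked in such evaluations is the pointing game, which checks whether the saliency map's peak falls inside the expert-annotated pathology mask; a minimal sketch with invented 3×3 maps (not the benchmark's own metric or data):

```python
def pointing_game_hit(saliency, mask):
    """Pointing game: a saliency map scores a localization hit when its
    maximum value falls inside the ground-truth pathology mask."""
    _, r, c = max((v, r, c)
                  for r, row in enumerate(saliency)
                  for c, v in enumerate(row))
    return bool(mask[r][c])

# Toy maps (invented for illustration)
mask = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
hit = pointing_game_hit([[0.1, 0.2, 0.1],
                         [0.2, 0.9, 0.1],
                         [0.0, 0.1, 0.1]], mask)   # peak at (1, 1), inside mask
miss = pointing_game_hit([[0.9, 0.1, 0.0],
                          [0.1, 0.2, 0.1],
                          [0.0, 0.1, 0.1]], mask)  # peak at (0, 0), outside mask
```

Small, complexly shaped pathologies shrink the mask, which is one intuition for why the Grad-CAM gap the authors report is largest there.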
|
23
|
Han D, He T, Yu Y, Guo Y, Chen Y, Duan H, Yu N. Diagnosis of Active Pulmonary Tuberculosis and Community Acquired Pneumonia Using Convolution Neural Network Based on Transfer Learning. Acad Radiol 2022; 29:1486-1492. [PMID: 35063352 DOI: 10.1016/j.acra.2021.12.025]
Abstract
RATIONALE AND OBJECTIVES A convolutional neural network (CNN) model for the diagnosis of active pulmonary tuberculosis (APTB) and community-acquired pneumonia (CAP) using chest radiographs (CRs) was constructed and verified based on transfer learning. MATERIALS AND METHODS CRs of 1247 APTB cases, 1488 CAP cases and 1247 normal cases were collected. All CRs were randomly divided into a training set (1992 cases), a validation set (1194 cases) and a test set (796 cases) by stratified sampling in a 5:3:2 ratio. After normalization of the CRs, the convolution base of a CNN (VGG16) pre-trained on the ImageNet dataset was used to extract features, and grid search was used to determine the optimal classifier module, which was added to the convolution base for transfer learning. After training, the model with the highest validation-set accuracy was selected as the optimal model, verified on the test set, and its accuracy calculated. RESULTS Validation-set accuracy was highest at the 63rd epoch, reaching 0.9430 with a corresponding categorical cross-entropy of 0.1742; training-set accuracy was 0.9428 with a categorical cross-entropy of 0.1545. When the optimal model was applied to the test set, the accuracy was 0.9447 and the categorical cross-entropy was 0.1929. CONCLUSION The transfer learning-based CNN model has good classification performance in distinguishing APTB, CAP and normal patients using CRs.
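The transfer-learning recipe described (a frozen pre-trained convolution base feeding a newly trained classifier module) can be caricatured without any deep-learning library: below, a fixed feature function stands in for the VGG16 base and only a small logistic-regression head is trained. All data and names are illustrative, not the paper's pipeline:

```python
import math

def frozen_base(x):
    """Stand-in for a pre-trained convolution base: a fixed feature map
    whose 'weights' are never updated during head training."""
    return (x, x * x)

def train_head(xs, ys, lr=0.1, epochs=500):
    """Train only the classifier head (logistic regression) on frozen features."""
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            f0, f1 = frozen_base(x)
            z = w0 * f0 + w1 * f1 + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                 # gradient of cross-entropy w.r.t. z
            w0 -= lr * g * f0
            w1 -= lr * g * f1
            b -= lr * g
    return w0, w1, b

def predict(params, x):
    w0, w1, b = params
    f0, f1 = frozen_base(x)
    return 1 if w0 * f0 + w1 * f1 + b > 0 else 0

# Toy separable 1-D "images": negatives near 0, positives near 2
xs = [0.0, 0.2, 0.1, 1.8, 2.0, 2.2]
ys = [0, 0, 0, 1, 1, 1]
params = train_head(xs, ys)
```

The design choice mirrors the paper's: freezing the base reuses ImageNet-learned features, so only the small head needs to be fit to the limited radiograph dataset.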
Affiliation(s)
- Dong Han, Taiping He, Yong Yu, Youmin Guo, Yibing Chen, Haifeng Duan, Nan Yu
- Department of Radiology (D.H., T.H., Y.Y., H.D., N.Y.), Affiliated Hospital of Shaanxi University of Chinese Medicine, Weiyang West Rd, Xianyang, Shaanxi 712000, China; College of Medical Technology (T.H., Y.Y.), Shaanxi University of Chinese Medicine, Xianyang, Shaanxi, China; Department of Medical Image (Y.G.), The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China; School of Information Science & Technology (Y.C.), Northwest University, Xi'an, Shaanxi, China
|
24
|
|
25
|
Lee SY, Ha S, Jeon MG, Li H, Choi H, Kim HP, Choi YR, I H, Jeong YJ, Park YH, Ahn H, Hong SH, Koo HJ, Lee CW, Kim MJ, Kim YJ, Kim KW, Choi JM. Localization-adjusted diagnostic performance and assistance effect of a computer-aided detection system for pneumothorax and consolidation. NPJ Digit Med 2022; 5:107. [PMID: 35908091 PMCID: PMC9339006 DOI: 10.1038/s41746-022-00658-x]
Abstract
While many deep-learning-based computer-aided detection systems (CAD) have been developed and commercialized for abnormality detection in chest radiographs (CXR), their ability to localize a target abnormality is rarely reported. Localization accuracy is important in terms of model interpretability, which is crucial in clinical settings. Moreover, diagnostic performance is likely to vary depending on the thresholds that define an accurate localization. In a multi-center, stand-alone clinical trial using temporal and external validation datasets of 1,050 CXRs, we evaluated localization accuracy, localization-adjusted discrimination, and calibration of a commercially available deep-learning-based CAD for detecting consolidation and pneumothorax. The CAD achieved an image-level AUROC (95% CI) of 0.960 (0.945, 0.975), sensitivity of 0.933 (0.899, 0.959), specificity of 0.948 (0.930, 0.963), Dice score of 0.691 (0.664, 0.718), and moderate calibration for consolidation, and an image-level AUROC of 0.978 (0.965, 0.991), sensitivity of 0.956 (0.923, 0.978), specificity of 0.996 (0.989, 0.999), Dice score of 0.798 (0.770, 0.826), and moderate calibration for pneumothorax. Diagnostic performance varied substantially when localization accuracy was accounted for but remained high at the minimum threshold of clinical relevance. In a separate trial for diagnostic impact using 461 CXRs, the causal effect of CAD assistance on clinicians' diagnostic performance was estimated. After adjusting for age, sex, dataset, and abnormality type, the CAD improved clinicians' diagnostic performance on average (OR [95% CI] = 1.73 [1.30, 2.32]; p < 0.001), although the effect varied substantially by clinical background. The CAD was found to have high stand-alone diagnostic performance and may beneficially impact clinicians' diagnostic performance when used in clinical settings.
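The Dice scores above quantify overlap between the CAD's predicted region and the reference annotation; a minimal sketch over toy binary masks (not trial data):

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks, given as flattened
    0/1 sequences of equal length: 2*|A ∩ B| / (|A| + |B|)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 1.0 if size == 0 else 2.0 * inter / size

# Toy masks: 2 overlapping pixels, 3 predicted, 3 annotated
pred = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
score = dice(pred, truth)   # 2*2 / (3+3)
```

Localization-adjusted evaluation of the kind the trial performs counts a detection as correct only when such an overlap score clears a chosen threshold, which is why performance varies with that threshold.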
Affiliation(s)
- Sun Yeop Lee
- Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
| | - Sangwoo Ha
- Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
| | - Min Gyeong Jeon
- Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
| | - Hao Li
- Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
| | - Hyunju Choi
- Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
| | - Hwa Pyung Kim
- Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
| | - Ye Ra Choi
- Department of Radiology, Seoul Metropolitan Government-Seoul National University Boramae Medical Center, Seoul, Republic of Korea.,Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - Hoseok I
- Department of Thoracic and Cardiovascular Surgery, Pusan National University School of Medicine, Busan, Republic of Korea.,Convergence Medical Institute of Technology, Biomedical Research Institute, Pusan National University Hospital, Busan, Republic of Korea
| | - Yeon Joo Jeong
- Department of Radiology and Biomedical Research Institute, Pusan National University Hospital, Busan, Republic of Korea
| | - Yoon Ha Park
- Department of Internal Medicine, Jawol Health Center, Incheon, Republic of Korea
| | - Hyemin Ahn
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Sang Hyup Hong
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Hyun Jung Koo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Choong Wook Lee
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Min Jae Kim
- Department of Infectious Disease, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Yeon Joo Kim
- Department of Respiratory Allergy Medicine, Nowon Eulji Medical Center, Seoul, Republic of Korea
| | - Kyung Won Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Jong Mun Choi
- Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea.
| |
|
26
|
Liang S, Ma J, Wang G, Shao J, Li J, Deng H, Wang C, Li W. The Application of Artificial Intelligence in the Diagnosis and Drug Resistance Prediction of Pulmonary Tuberculosis. Front Med (Lausanne) 2022; 9:935080. [PMID: 35966878 PMCID: PMC9366014 DOI: 10.3389/fmed.2022.935080]
Abstract
With the rising incidence and mortality of pulmonary tuberculosis, and disease management that remains difficult and controversial, conventional approaches to the diagnosis and differential diagnosis of tuberculosis are still time-consuming and resource-limited, especially in high-burden countries with weak health infrastructure. Meanwhile, the climbing proportion of drug-resistant tuberculosis poses a significant hazard to public health. Thus, auxiliary diagnostic tools with higher efficiency and accuracy are urgently required. Artificial intelligence (AI), which is not new but has recently grown in popularity, provides researchers with opportunities and technical underpinnings to develop novel, precise, rapid, and automated tools for pulmonary tuberculosis care, including but not limited to tuberculosis detection. In this review, we introduce representative AI methods, focusing on deep learning and radiomics, followed by descriptions of state-of-the-art AI models developed using medical images and genetic data to detect pulmonary tuberculosis, distinguish the infection from other pulmonary diseases, and identify drug resistance, with the purpose of assisting physicians in choosing the appropriate therapeutic schedule early in the disease. We also enumerate challenges to maximizing the impact of AI in this field, such as the generalization and clinical utility of deep learning models.
Affiliation(s)
- Shufan Liang
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Precision Medicine Key Laboratory of Sichuan Province, Precision Medicine Research Center, West China Hospital, Sichuan University, Chengdu, China
| | - Jiechao Ma
- AI Lab, Deepwise Healthcare, Beijing, China
| | - Gang Wang
- Precision Medicine Key Laboratory of Sichuan Province, Precision Medicine Research Center, West China Hospital, Sichuan University, Chengdu, China
| | - Jun Shao
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
| | - Jingwei Li
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
| | - Hui Deng
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Precision Medicine Key Laboratory of Sichuan Province, Precision Medicine Research Center, West China Hospital, Sichuan University, Chengdu, China
- *Correspondence: Hui Deng
| | - Chengdi Wang
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- *Correspondence: Chengdi Wang
| | - Weimin Li
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- *Correspondence: Weimin Li
| |
|
27
|
van der Velden BH, Kuijf HJ, Gilhuijs KG, Viergever MA. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med Image Anal 2022; 79:102470. [DOI: 10.1016/j.media.2022.102470]
|
28
|
Tang D, Ni M, Zheng C, Ding X, Zhang N, Yang T, Zhan Q, Fu Y, Liu W, Zhuang D, Lv Y, Xu G, Wang L, Zou X. A deep learning-based model improves diagnosis of early gastric cancer under narrow band imaging endoscopy. Surg Endosc 2022; 36:7800-7810. [PMID: 35641698 DOI: 10.1007/s00464-022-09319-2]
Abstract
BACKGROUND Diagnosis of early gastric cancer (EGC) under narrow band imaging (NBI) endoscopy is dependent on expertise and skill. We aimed to elucidate whether artificial intelligence (AI) could diagnose EGC under NBI and to evaluate the diagnostic assistance provided by the AI system. METHODS In this retrospective diagnostic study, 21,785 NBI images and 20 videos from five centers were divided into a training dataset (13,151 images, 810 patients), an internal validation dataset (7057 images, 283 patients), four external validation datasets (1577 images, 147 patients), and a video validation dataset (20 videos, 20 patients). All images were labeled manually and used to train an AI system based on You Only Look Once v3 (YOLOv3). Next, the diagnostic performance of the AI system and endoscopists was compared, and the diagnostic assistance of the AI system was assessed. Accuracy, sensitivity, specificity, and AUC were the primary outcomes. RESULTS The AI system diagnosed EGCs on the validation datasets with AUCs of 0.888-0.951 and detected all EGCs (100.0%) in the video dataset. The AI system achieved better diagnostic performance (accuracy, 93.2%; 95% CI, 90.0-94.9%) than senior (85.9%; 95% CI, 84.2-87.4%) and junior (79.5%; 95% CI, 77.8-81.0%) endoscopists. The AI system significantly enhanced the performance of both senior (89.4%; 95% CI, 87.9-90.7%) and junior (84.9%; 95% CI, 83.4-86.3%) endoscopists. CONCLUSION The NBI AI system outperformed the endoscopists and showed potential to assist in EGC identification. Prospective validation is needed to evaluate the clinical benefit of the system in real clinical practice.
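YOLO-style detectors such as the one trained here output bounding boxes, and a detection is conventionally scored correct when its intersection-over-union (IoU) with the annotated lesion exceeds a threshold; a minimal sketch with toy coordinates (illustrative, not the study's evaluation code):

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Overlap of a 2x2 prediction with a shifted 2x2 annotation:
# intersection 1x1 = 1, union 4 + 4 - 1 = 7
iou = box_iou((0, 0, 2, 2), (1, 1, 3, 3))
```

The manually drawn lesion labels described in METHODS serve as the reference boxes in this kind of matching.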
Affiliation(s)
- Dehua Tang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
| | - Muhan Ni
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
| | - Chang Zheng
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
| | - Xiwei Ding
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
| | - Nina Zhang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
| | - Tian Yang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
| | - Qiang Zhan
- Department of Gastroenterology, Wuxi People's Hospital, Affiliated Wuxi People's Hospital With Nanjing Medical University, Wuxi, 214023, Jiangsu, China
| | - Yiwei Fu
- Department of Gastroenterology, Taizhou People's Hospital, The Fifth Affiliated Hospital With Nantong University, Taizhou, 225300, Jiangsu, China
| | - Wenjia Liu
- Department of Gastroenterology, Changzhou Second People's Hospital, Changzhou, 213003, Jiangsu, China
| | - Duanming Zhuang
- Department of Gastroenterology, Nanjing Gaochun People's Hospital, Nanjing, 211300, Jiangsu, China
| | - Ying Lv
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
| | - Guifang Xu
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China.
| | - Lei Wang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China.
| | - Xiaoping Zou
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China.
| |
|
29
|
Nijiati M, Ma J, Hu C, Tuersun A, Abulizi A, Kelimu A, Zhang D, Li G, Zou X. Artificial Intelligence Assisting the Early Detection of Active Pulmonary Tuberculosis From Chest X-Rays: A Population-Based Study. Front Mol Biosci 2022; 9:874475. [PMID: 35463963 PMCID: PMC9023793 DOI: 10.3389/fmolb.2022.874475]
Abstract
As a major infectious disease, tuberculosis (TB) still poses a threat to people's health in China. As a triage test for TB, reading chest radiographs with the traditional approach suffers from high inter-radiologist and intra-radiologist variability, moderate specificity, and a waste of time and medical resources. Thus, this study established a deep convolutional neural network (DCNN) based artificial intelligence (AI) algorithm, aiming at diagnosing TB on posteroanterior chest X-rays in an effective and accurate way. Altogether, 5,000 patients with TB and 4,628 patients without TB were included in the study, for a total of 9,628 chest X-rays analyzed. Splitting the radiographs into a training set (80.4%) and a testing set (19.6%), three different DCNN algorithms, including ResNet, VGG, and AlexNet, were trained to classify the chest radiographs as images of pulmonary TB or without TB. Both diagnostic accuracy and the area under the receiver operating characteristic curve were used to evaluate the performance of the three AI diagnosis models. Reaching an accuracy of 96.73% and marking the precise TB regions on the radiographs, the ResNet algorithm-based AI outperformed the other models and showed excellent diagnostic ability in different clinical subgroups in the stratification analysis. In summary, the ResNet algorithm-based AI diagnosis system provided accurate TB diagnosis and could have broad prospects in clinical application, especially in poor regions with high TB incidence.
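The area under the ROC curve used to rank the three networks has a convenient rank interpretation: the probability that a randomly chosen TB radiograph scores higher than a randomly chosen non-TB one. A minimal sketch with invented model scores (the Mann-Whitney formulation, not the study's data):

```python
def auc(scores_pos, scores_neg):
    """AUROC as the probability that a positive case outscores a negative
    case, counting ties as one half (Mann-Whitney U formulation)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores for TB and non-TB radiographs:
# 8 of the 9 positive/negative pairs are correctly ordered
a = auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])
```

Unlike plain accuracy, this metric is insensitive to the chosen decision threshold, which is why both were reported when comparing ResNet, VGG, and AlexNet.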
Affiliation(s)
- Mayidili Nijiati
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
- *Correspondence: Mayidili Nijiati; Guanbin Li; Xiaoguang Zou
| | - Jie Ma
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
| | - Chuling Hu
- Department of Colorectal Surgery, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Abudouresuli Tuersun
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
| | | | - Abudoureyimu Kelimu
- Department of Radiology, Kashi Area Tuberculosis Control Center, Kashi, China
| | - Dongyu Zhang
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
| | - Guanbin Li
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- *Correspondence: Mayidili Nijiati; Guanbin Li; Xiaoguang Zou
| | - Xiaoguang Zou
- Clinical Medical Research Center, The First People’s Hospital of Kashi Prefecture, Kashi, China
- *Correspondence: Mayidili Nijiati; Guanbin Li; Xiaoguang Zou
| |
|
30
|
Accurate auto-labeling of chest X-ray images based on quantitative similarity to an explainable AI model. Nat Commun 2022; 13:1867. [PMID: 35388010 PMCID: PMC8986787 DOI: 10.1038/s41467-022-29437-8]
Abstract
The inability to accurately and efficiently label large, open-access medical imaging datasets limits the widespread implementation of artificial intelligence models in healthcare. There have been few attempts, however, to automate the annotation of such public databases; one approach, for example, focused on labor-intensive, manual labeling of subsets of these datasets to be used to train new models. In this study, we describe a method for standardized, automated labeling based on similarity to a previously validated, explainable AI (xAI) model-derived atlas, for which the user can specify a quantitative threshold for a desired level of accuracy (the probability-of-similarity, pSim metric). We show that our xAI model, by calculating the pSim values for each clinical output label based on comparison to its training-set-derived reference atlas, can automatically label the external datasets to a user-selected, high level of accuracy, equaling or exceeding that of human experts. We additionally show that, by fine-tuning the original model using the automatically labelled exams for retraining, performance can be preserved or improved, resulting in a highly accurate, more generalized model. Here the authors develop a method for accurate auto-labelling of CXR images from large public datasets based on quantitative probability-of-similarity to an explainable AI model. The labels can be used to fine-tune the original model through iterative re-training.
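The pSim mechanism can be caricatured as thresholded similarity to a per-label reference vector, with abstention when no label clears the user-chosen threshold; the cosine measure, atlas vectors, and threshold below are invented for illustration and are not the paper's actual metric:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def auto_label(features, atlas, threshold):
    """Return the best-matching label, or None (abstain) when the
    similarity never clears the user-selected threshold."""
    best_label, best_sim = None, -1.0
    for label, ref in atlas.items():
        sim = cosine(features, ref)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label if best_sim >= threshold else None

# Made-up two-dimensional reference "atlas"
atlas = {"effusion": [1.0, 0.0], "normal": [0.0, 1.0]}
confident = auto_label([0.9, 0.1], atlas, threshold=0.95)  # close to "effusion"
uncertain = auto_label([0.7, 0.7], atlas, threshold=0.95)  # ambiguous: abstain
```

Raising the threshold trades labeling coverage for accuracy, which is the user-selectable accuracy knob the abstract describes.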
|
31
|
Oloko-Oba M, Viriri S. A Systematic Review of Deep Learning Techniques for Tuberculosis Detection From Chest Radiograph. Front Med (Lausanne) 2022; 9:830515. [PMID: 35355598 PMCID: PMC8960068 DOI: 10.3389/fmed.2022.830515]
Abstract
The high mortality rate in high tuberculosis (TB) burden regions has increased significantly in recent decades. Despite the availability of treatment for TB, high-burden regions still suffer from inadequate screening tools, which result in diagnostic delay and misdiagnosis. These challenges have led to the development of Computer-Aided Diagnostic (CAD) systems to detect TB automatically. There are several ways of screening for TB, but Chest X-Ray (CXR) is the most prominent and recommended owing to its high sensitivity in detecting lung abnormalities. This paper presents the results of a systematic review, based on PRISMA procedures, that investigates state-of-the-art deep learning techniques for screening pulmonary abnormalities related to TB. The systematic review was conducted using an extensive selection of scientific databases as reference sources that grant access to distinctive articles in the field. Four scientific databases were searched to retrieve related articles. Inclusion and exclusion criteria were defined and applied to each article to determine those included in the study. Out of the 489 articles retrieved, 62 were included. Based on the findings of this review, we conclude that CAD systems are promising in tackling the challenges of the TB epidemic, and we make recommendations for improvement in future studies.
|
32
|
Wang M, Wei Z, Jia M, Chen L, Ji H. Deep learning model for multi-classification of infectious diseases from unstructured electronic medical records. BMC Med Inform Decis Mak 2022; 22:41. [PMID: 35168624 PMCID: PMC8848865 DOI: 10.1186/s12911-022-01776-y]
Abstract
Purpose Predictively diagnosing infectious diseases helps in providing better treatment and enhances the prevention and control of such diseases. This study uses actual data from a hospital. A multiple infectious disease diagnostic model (MIDDM) is designed for conducting multi-classification of infectious diseases so as to assist in clinical infectious-disease decision-making. Methods Based on actual hospital medical records of infectious diseases from December 2012 to December 2020, a deep learning model for multi-classification research on infectious diseases was constructed. The data include 20,620 cases covering seven types of infectious diseases, including outpatients and inpatients; training data accounted for 80% (16,496 cases) and test data for 20% (4124 cases). Through an auto-encoder, data normalization and sparse-data densification were carried out to improve the model training effect. A residual network and an attention mechanism were introduced into the MIDDM model to improve its performance. Results MIDDM achieved improved prediction results in diagnosing seven kinds of infectious diseases. In the case of similar disease-diagnosis characteristics and similar interference factors, the prediction accuracy for disease classes with more sample data is significantly higher than for classes with fewer samples. For instance, the training data for viral hepatitis, influenza, and hand, foot and mouth disease comprised 2954, 3924, and 3015 cases respectively, with corresponding test accuracies of 99.86%, 98.47%, and 97.31%; there were fewer training cases for syphilis, infectious diarrhea, and measles (1208, 575, and 190 respectively) and the corresponding test accuracies were noticeably lower (83.03%, 87.30%, and 42.11%).
We also compared the MIDDM model with the models used in other studies. Using the same input data, taking viral hepatitis as an example, the accuracy of MIDDM is 99.44%, which is significantly higher than that of XGBoost (96.19%), decision tree (90.13%), the Bayesian method (85.19%), and logistic regression (91.26%). Other diseases were also significantly better predicted by MIDDM than by these baseline models. Conclusion The application of the MIDDM model to multi-class diagnosis and prediction of infectious diseases can improve the accuracy of infectious-disease diagnosis. However, these results need to be further confirmed via clinical randomized controlled trials.
Collapse
Affiliation(s)
- Mengying Wang
- Information Management and Big Data Center, Peking University Third Hospital, Beijing, China
| | - Zhenhao Wei
- Goodwill Hessian Health Technology Co. Ltd, Beijing, China
| | - Mo Jia
- Information Management and Big Data Center, Peking University Third Hospital, Beijing, China
| | - Lianzhong Chen
- Goodwill Hessian Health Technology Co. Ltd, Beijing, China
| | - Hong Ji
- Information Management and Big Data Center, Peking University Third Hospital, Beijing, China.
| |
Collapse
|
33
|
Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med 2022; 28:31-38. [PMID: 35058619 DOI: 10.1038/s41591-021-01614-0] [Citation(s) in RCA: 465] [Impact Index Per Article: 232.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Accepted: 11/05/2021] [Indexed: 02/06/2023]
Abstract
Artificial intelligence (AI) is poised to broadly reshape medicine, potentially improving the experiences of both clinicians and patients. We discuss key findings from a 2-year weekly effort to track and share key developments in medical AI. We cover prospective studies and advances in medical image analysis, which have reduced the gap between research and deployment. We also address several promising avenues for novel medical AI research, including non-image data sources, unconventional problem formulations and human-AI collaboration. Finally, we consider serious technical and ethical challenges in issues spanning from data scarcity to racial bias. As these challenges are addressed, AI's potential may be realized, making healthcare more accurate, efficient and accessible for patients worldwide.
Collapse
Affiliation(s)
- Pranav Rajpurkar
- Department of Biomedical Informatics, Harvard University, Cambridge, MA, USA
| | - Emma Chen
- Department of Computer Science, Stanford University, Stanford, CA, USA
| | - Oishi Banerjee
- Department of Computer Science, Stanford University, Stanford, CA, USA
| | - Eric J Topol
- Scripps Translational Science Institute, San Diego, CA, USA.
| |
Collapse
|
34
|
A Machine Learning Model for Predicting Unscheduled 72 h Return Visits to the Emergency Department by Patients with Abdominal Pain. Diagnostics (Basel) 2021; 12:diagnostics12010082. [PMID: 35054249 PMCID: PMC8775134 DOI: 10.3390/diagnostics12010082] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 12/28/2021] [Accepted: 12/29/2021] [Indexed: 12/12/2022] Open
Abstract
Seventy-two-hour unscheduled return visits (URVs) by emergency department (ED) patients are a key clinical index for evaluating the quality of care in EDs. This study aimed to develop a machine learning model to predict 72 h URVs for ED patients with abdominal pain. Electronic health record data were collected from the Chang Gung Research Database (CGRD) for 25,151 ED visits by patients with abdominal pain, and a total of 617 features were used for analysis. We used supervised machine learning models, namely logistic regression (LR), support vector machine (SVM), random forest (RF), extreme gradient boosting (XGB), and voting classifier (VC), to predict URVs. The VC model achieved more favorable overall performance than the other models (AUROC: 0.74; 95% confidence interval (CI), 0.69–0.76; sensitivity, 0.39; specificity, 0.89; F1 score, 0.25). The reduced VC model achieved comparable performance (AUROC: 0.72; 95% CI, 0.69–0.74) to the full models using all clinical features. The VC model exhibited the most favorable performance in predicting 72 h URVs for patients with abdominal pain, both for all-features and reduced-features models. Application of the VC model in the clinical setting after validation may help physicians to make accurate decisions and decrease URVs.
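The ensembling idea behind a voting classifier can be sketched in plain Python as soft voting, i.e., averaging each base model's predicted probability; the per-model probabilities below are invented, not taken from the study:

```python
def soft_vote(prob_lists):
    """Average each model's predicted probability of a 72 h return visit."""
    n = len(prob_lists)
    return [sum(p) / n for p in zip(*prob_lists)]

# Hypothetical probabilities from LR, SVM, RF, XGB for three ED visits.
lr  = [0.20, 0.70, 0.40]
svm = [0.30, 0.60, 0.50]
rf  = [0.10, 0.80, 0.45]
xgb = [0.25, 0.75, 0.35]

avg = soft_vote([lr, svm, rf, xgb])   # [0.2125, 0.7125, 0.425]
flags = [p >= 0.5 for p in avg]       # only the second visit is flagged
```

Averaging calibrated probabilities tends to smooth out the idiosyncratic errors of any single base model, which is the usual rationale for a VC outperforming its components.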
Collapse
|
35
|
Li D, Pehrson LM, Lauridsen CA, Tøttrup L, Fraccaro M, Elliott D, Zając HD, Darkner S, Carlsen JF, Nielsen MB. The Added Effect of Artificial Intelligence on Physicians' Performance in Detecting Thoracic Pathologies on CT and Chest X-ray: A Systematic Review. Diagnostics (Basel) 2021; 11:diagnostics11122206. [PMID: 34943442 PMCID: PMC8700414 DOI: 10.3390/diagnostics11122206] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2021] [Revised: 11/18/2021] [Accepted: 11/23/2021] [Indexed: 12/20/2022] Open
Abstract
Our systematic review investigated the additional effect of artificial intelligence-based devices on human observers when diagnosing and/or detecting thoracic pathologies using different diagnostic imaging modalities, such as chest X-ray and CT. Peer-reviewed, original research articles from EMBASE, PubMed, Cochrane library, SCOPUS, and Web of Science were retrieved. Included articles were published within the last 20 years and used a device based on artificial intelligence (AI) technology to detect or diagnose pulmonary findings. The AI-based device had to be used in an observer test where the performance of human observers with and without addition of the device was measured as sensitivity, specificity, accuracy, AUC, or time spent on image reading. A total of 38 studies were included for final assessment. The quality assessment tool for diagnostic accuracy studies (QUADAS-2) was used for bias assessment. The average sensitivity increased from 67.8% to 74.6%; specificity from 82.2% to 85.4%; accuracy from 75.4% to 81.7%; and Area Under the ROC Curve (AUC) from 0.75 to 0.80. Generally, a faster reading time was reported when radiologists were aided by AI-based devices. Our systematic review showed that performance generally improved for the physicians when assisted by AI-based devices compared to unaided interpretation.
Collapse
Affiliation(s)
- Dana Li
- Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; (L.M.P.); (C.A.L.); (J.F.C.); (M.B.N.)
- Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Correspondence:
| | - Lea Marie Pehrson
- Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; (L.M.P.); (C.A.L.); (J.F.C.); (M.B.N.)
| | - Carsten Ammitzbøl Lauridsen
- Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; (L.M.P.); (C.A.L.); (J.F.C.); (M.B.N.)
- Department of Technology, Faculty of Health and Technology, University College Copenhagen, 2200 Copenhagen, Denmark
| | - Lea Tøttrup
- Unumed Aps, 1055 Copenhagen, Denmark; (L.T.); (M.F.)
| | | | - Desmond Elliott
- Department of Computer Science, University of Copenhagen, 2100 Copenhagen, Denmark; (D.E.); (H.D.Z.); (S.D.)
| | - Hubert Dariusz Zając
- Department of Computer Science, University of Copenhagen, 2100 Copenhagen, Denmark; (D.E.); (H.D.Z.); (S.D.)
| | - Sune Darkner
- Department of Computer Science, University of Copenhagen, 2100 Copenhagen, Denmark; (D.E.); (H.D.Z.); (S.D.)
| | - Jonathan Frederik Carlsen
- Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; (L.M.P.); (C.A.L.); (J.F.C.); (M.B.N.)
| | - Michael Bachmann Nielsen
- Department of Diagnostic Radiology, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark; (L.M.P.); (C.A.L.); (J.F.C.); (M.B.N.)
- Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
| |
Collapse
|
36
|
A computed tomography vertebral segmentation dataset with anatomical variations and multi-vendor scanner data. Sci Data 2021; 8:284. [PMID: 34711848 PMCID: PMC8553749 DOI: 10.1038/s41597-021-01060-0] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Accepted: 08/27/2021] [Indexed: 01/17/2023] Open
Abstract
With the advent of deep learning algorithms, fully automated radiological image analysis is within reach. In spine imaging, several atlas- and shape-based as well as deep learning segmentation algorithms have been proposed, allowing for subsequent automated analysis of morphology and pathology. The first "Large Scale Vertebrae Segmentation Challenge" (VerSe 2019) showed that these perform well on normal anatomy, but fail in variants not frequently present in the training dataset. Building on that experience, we report on the largely increased VerSe 2020 dataset and results from the second iteration of the VerSe challenge (MICCAI 2020, Lima, Peru). VerSe 2020 comprises annotated spine computed tomography (CT) images from 300 subjects with 4142 fully visualized and annotated vertebrae, collected across multiple centres from four different scanner manufacturers, enriched with cases that exhibit anatomical variants such as enumeration abnormalities (n = 77) and transitional vertebrae (n = 161). Metadata includes vertebral labelling information, voxel-level segmentation masks obtained with a human-machine hybrid algorithm and anatomical ratings, to enable the development and benchmarking of robust and accurate segmentation algorithms. Measurement(s): vertebra. Technology Type(s): computed tomography. Factor Type(s): imaging centre, scanner manufacturer. Sample Characteristic (Organism): Homo sapiens.
Machine-accessible metadata file describing the reported data: 10.6084/m9.figshare.14716968
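As a hypothetical example of consuming voxel-level segmentation masks like those VerSe provides (the array, label values, and vertebra names here are invented, not the challenge's actual label map):

```python
import numpy as np

# Toy 3-D segmentation mask: 0 = background, other ints = vertebra labels.
mask = np.zeros((4, 4, 4), dtype=np.int32)
mask[0, :2, :2] = 20   # pretend label 20 is one vertebra (4 voxels)
mask[2, :3, :3] = 21   # pretend label 21 is the next one (9 voxels)

# Per-vertebra voxel counts, e.g. for volume estimates or sanity checks.
labels, voxels = np.unique(mask[mask > 0], return_counts=True)
counts = dict(zip(labels.tolist(), voxels.tolist()))   # {20: 4, 21: 9}
```

Multiplying each count by the scan's voxel volume (from the CT header) would turn these into physical volumes.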
Collapse
|
37
|
Hanif AM, Beqiri S, Keane PA, Campbell JP. Applications of interpretability in deep learning models for ophthalmology. Curr Opin Ophthalmol 2021; 32:452-458. [PMID: 34231530 PMCID: PMC8373813 DOI: 10.1097/icu.0000000000000780] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
PURPOSE OF REVIEW In this article, we introduce the concept of model interpretability, review its applications in deep learning models for clinical ophthalmology, and discuss its role in the integration of artificial intelligence in healthcare. RECENT FINDINGS The advent of deep learning in medicine has introduced models with remarkable accuracy. However, the inherent complexity of these models undermines their users' ability to understand, debug, and ultimately trust them in clinical practice. Novel methods are being increasingly explored to improve models' 'interpretability' and draw clearer associations between their outputs and features in the input dataset. In the field of ophthalmology, interpretability methods have enabled users to make informed adjustments, identify clinically relevant imaging patterns, and predict outcomes in deep learning models. SUMMARY Interpretability methods support the transparency necessary to implement, operate and modify complex deep learning models. These benefits are becoming increasingly demonstrated in models for clinical ophthalmology. As quality standards for deep learning models used in healthcare continue to evolve, interpretability methods may prove influential in their path to regulatory approval and acceptance in clinical practice.
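One widely used interpretability method of the kind surveyed here is occlusion sensitivity: mask each image patch in turn and record how much the model's score drops. A toy numpy sketch, with a stand-in "model" that only scores a fixed image region:

```python
import numpy as np

def score(img):
    """Stand-in model score: mean brightness of one fixed region."""
    return img[2:4, 2:4].mean()

def occlusion_map(img, patch=2):
    """Score drop when each patch is zeroed; big drop = influential region."""
    base = score(img)
    h, w = img.shape
    out = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = 0
            out[i // patch, j // patch] = base - score(occluded)
    return out

img = np.ones((6, 6))
m = occlusion_map(img)   # only the patch covering img[2:4, 2:4] matters
```

The resulting heatmap localizes exactly the region the stand-in model attends to; with a real classifier the same loop highlights, e.g., the retinal features driving a prediction.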
Collapse
Affiliation(s)
- Adam M. Hanif
- Ophthalmology, Oregon Health & Science University, Portland, Oregon
| | - Sara Beqiri
- University College London Division of Medicine, London, United Kingdom
| | - Pearse A. Keane
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- University College London Institute of Ophthalmology, United Kingdom
| | | |
Collapse
|
38
|
Chi EA, Chi G, Tsui CT, Jiang Y, Jarr K, Kulkarni CV, Zhang M, Long J, Ng AY, Rajpurkar P, Sinha SR. Development and Validation of an Artificial Intelligence System to Optimize Clinician Review of Patient Records. JAMA Netw Open 2021; 4:e2117391. [PMID: 34297075 PMCID: PMC8303101 DOI: 10.1001/jamanetworkopen.2021.17391] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/17/2023] Open
Abstract
IMPORTANCE Physicians are required to work with rapidly growing amounts of medical data. Approximately 62% of time per patient is devoted to reviewing electronic health records (EHRs), with clinical data review being the most time-consuming portion. OBJECTIVE To determine whether an artificial intelligence (AI) system developed to organize and display new patient referral records would improve a clinician's ability to extract patient information compared with the current standard of care. DESIGN, SETTING, AND PARTICIPANTS In this prognostic study, an AI system was created to organize patient records and improve data retrieval. To evaluate the system on time and accuracy, a nonblinded, prospective study was conducted at a single academic medical center. Recruitment emails were sent to all physicians in the gastroenterology division, and 12 clinicians agreed to participate. Each of the clinicians participating in the study received 2 referral records: 1 AI-optimized patient record and 1 standard (non-AI-optimized) patient record. For each record, clinicians were asked 22 questions requiring them to search the assigned record for clinically relevant information. Clinicians reviewed records from June 1 to August 30, 2020. MAIN OUTCOMES AND MEASURES The time required to answer each question, along with accuracy, was measured for both records, with and without AI optimization. Participants were asked to assess overall satisfaction with the AI system, their preferred review method (AI-optimized vs standard), and other topics to assess clinical utility. RESULTS Twelve gastroenterology physicians/fellows completed the study. Compared with standard (non-AI-optimized) patient record review, the AI system saved first-time physician users 18% of the time used to answer the clinical questions (10.5 [95% CI, 8.5-12.6] vs 12.8 [95% CI, 9.4-16.2] minutes; P = .02). 
There was no significant decrease in accuracy when physicians retrieved important patient information (83.7% [95% CI, 79.3%-88.2%] with the AI-optimized vs 86.0% [95% CI, 81.8%-90.2%] without the AI-optimized record; P = .81). Survey responses from physicians were generally positive across all questions. Eleven of 12 physicians (92%) preferred the AI-optimized record review to standard review. Despite a learning curve pointed out by respondents, 11 of 12 physicians believed that the technology would save them time to assess new patient records and were interested in using this technology in their clinic. CONCLUSIONS AND RELEVANCE In this prognostic study, an AI system helped physicians extract relevant patient information in a shorter time while maintaining high accuracy. This finding is particularly germane to the ever-increasing amounts of medical data and increased stressors on clinicians. Increased user familiarity with the AI system, along with further enhancements in the system itself, hold promise to further improve physician data extraction from large quantities of patient health records.
Collapse
Affiliation(s)
- Ethan Andrew Chi
- Department of Computer Science, Stanford University, Stanford, California
| | - Gordon Chi
- Department of Computer Science, Stanford University, Stanford, California
| | - Cheuk To Tsui
- Department of Computer Science, Stanford University, Stanford, California
| | - Yan Jiang
- Division of Gastroenterology and Hepatology, Department of Medicine, Stanford University, Stanford, California
| | - Karolin Jarr
- Division of Gastroenterology and Hepatology, Department of Medicine, Stanford University, Stanford, California
| | - Chiraag V. Kulkarni
- Division of Gastroenterology and Hepatology, Department of Medicine, Stanford University, Stanford, California
| | - Michael Zhang
- Department of Neurosurgery, Stanford University, Stanford, California
| | - Jin Long
- Center for Artificial Intelligence in Medicine and Imaging, Stanford University, Stanford, California
| | - Andrew Y. Ng
- Department of Computer Science, Stanford University, Stanford, California
| | - Pranav Rajpurkar
- Department of Computer Science, Stanford University, Stanford, California
| | - Sidhartha R. Sinha
- Division of Gastroenterology and Hepatology, Department of Medicine, Stanford University, Stanford, California
| |
Collapse
|
39
|
Çallı E, Sogancioglu E, van Ginneken B, van Leeuwen KG, Murphy K. Deep learning for chest X-ray analysis: A survey. Med Image Anal 2021; 72:102125. [PMID: 34171622 DOI: 10.1016/j.media.2021.102125] [Citation(s) in RCA: 98] [Impact Index Per Article: 32.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 05/17/2021] [Accepted: 05/27/2021] [Indexed: 12/14/2022]
Abstract
Recent advances in deep learning have led to a promising performance in many medical image analysis tasks. As the most commonly performed radiological exam, chest radiographs are a particularly important modality for which a variety of applications have been researched. The release of multiple, large, publicly available chest X-ray datasets in recent years has encouraged research interest and boosted the number of publications. In this paper, we review all studies using deep learning on chest radiographs published before March 2021, categorizing works by task: image-level prediction (classification and regression), segmentation, localization, image generation and domain adaptation. Detailed descriptions of all publicly available datasets are included and commercial systems in the field are described. A comprehensive discussion of the current state of the art is provided, including caveats on the use of public datasets, the requirements of clinically useful systems and gaps in the current literature.
Collapse
Affiliation(s)
- Erdi Çallı
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands.
| | - Ecem Sogancioglu
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
| | - Bram van Ginneken
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
| | - Kicky G van Leeuwen
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
| | - Keelin Murphy
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
| |
Collapse
|
40
|
Zandehshahvar M, van Assen M, Maleki H, Kiarashi Y, De Cecco CN, Adibi A. Toward understanding COVID-19 pneumonia: a deep-learning-based approach for severity analysis and monitoring the disease. Sci Rep 2021; 11:11112. [PMID: 34045510 PMCID: PMC8159925 DOI: 10.1038/s41598-021-90411-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 04/21/2021] [Indexed: 12/23/2022] Open
Abstract
We report a new approach using artificial intelligence (AI) to study and classify the severity of COVID-19 using 1208 chest X-rays (CXRs) of 396 COVID-19 patients obtained through the course of the disease at Emory Healthcare affiliated hospitals (Atlanta, GA, USA). Using a two-stage transfer learning technique to train a convolutional neural network (CNN), we show that the algorithm is able to classify four classes of disease severity (normal, mild, moderate, and severe) with an average Area Under the Curve (AUC) of 0.93. In addition, we show that the outputs of different layers of the CNN under dominant filters provide valuable insight about the subtle patterns in the CXRs, which can improve the accuracy in the reading of CXRs by a radiologist. Finally, we show that our approach can be used for studying the disease progression in a single patient and its influencing factors. The results suggest that our technique can form the foundation of a more concrete clinical model to predict the evolution of COVID-19 severity and the efficacy of different treatments for each patient through using CXRs and clinical data in the early stages of the disease. This use of AI to assess the severity and possibly predict the future stages of the disease early on will be essential in dealing with the upcoming waves of COVID-19 and optimizing resource allocation and treatment.
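The transfer-learning idea (a frozen pretrained feature extractor, then a small head trained on labelled severity data for the four classes) can be caricatured in numpy; everything here (the extractor, data, and labels) is synthetic, and a least-squares head stands in for the fine-tuned classification layer:

```python
import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["normal", "mild", "moderate", "severe"]

# Stage 1 stand-in: a fixed "pretrained" feature extractor (frozen weights).
W_frozen = rng.standard_normal((6, 10))

def features(x):
    return np.tanh(W_frozen @ x)

# Stage 2: fit only a small head on labelled severity data (toy, invented).
X = rng.standard_normal((40, 10))            # 40 fake "images" as vectors
y = rng.integers(0, 4, size=40)              # fake severity labels 0..3
F = np.array([features(x) for x in X])       # (40, 6) frozen features
Y = np.eye(4)[y]                             # one-hot targets
head, *_ = np.linalg.lstsq(F, Y, rcond=None) # only the head is trained

pred = F @ head                              # (40, 4) class scores
```

Freezing the backbone is what makes training feasible on a few hundred labelled CXRs: only the small head's parameters are estimated from the scarce labels.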
Collapse
Affiliation(s)
| | - Marly van Assen
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
| | - Hossein Maleki
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
| | - Yashar Kiarashi
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
| | - Carlo N De Cecco
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
| | - Ali Adibi
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA.
| |
Collapse
|
41
|
Luján-García JE, Villuendas-Rey Y, López-Yáñez I, Camacho-Nieto O, Yáñez-Márquez C. NanoChest-Net: A Simple Convolutional Network for Radiological Studies Classification. Diagnostics (Basel) 2021; 11:775. [PMID: 33925844 PMCID: PMC8145173 DOI: 10.3390/diagnostics11050775] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2021] [Revised: 04/13/2021] [Accepted: 04/22/2021] [Indexed: 11/16/2022] Open
Abstract
The new coronavirus disease (COVID-19), pneumonia, tuberculosis, and breast cancer have one thing in common: these diseases can be diagnosed using radiological studies such as X-ray images. With radiological studies and technology, computer-aided diagnosis (CAD) is a very useful technique for analyzing and detecting abnormalities in the images generated by X-ray machines. Some deep-learning techniques, such as convolutional neural networks (CNNs), can help physicians obtain an effective pre-diagnosis. However, popular CNNs are enormous models and need a huge amount of data to obtain good results. In this paper, we introduce NanoChest-net, a small but effective CNN model that can be used to classify different diseases using images from radiological studies. NanoChest-net proves to be effective in classifying diseases such as tuberculosis, pneumonia, and COVID-19. In two of the five datasets used in the experiments, NanoChest-net obtained the best results, while on the remaining datasets our model proved to be as good as state-of-the-art baseline models such as ResNet50, Xception, and DenseNet121. In addition, NanoChest-net classifies radiological studies on the same level as state-of-the-art algorithms with the advantage that it does not require a large number of operations.
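The abstract's point about model size can be made concrete by counting convolution parameters; the layer widths below are hypothetical, not NanoChest-net's actual configuration:

```python
def conv2d_params(c_in, c_out, k):
    """Weights plus biases for one k x k convolution layer."""
    return c_out * (c_in * k * k + 1)

# A narrow two-layer stack vs. a wider one: parameter counts grow roughly
# with the product of channel widths, which is why "nano" models stay small.
small = conv2d_params(3, 16, 3) + conv2d_params(16, 32, 3)    # 5,088
large = conv2d_params(3, 64, 3) + conv2d_params(64, 128, 3)   # 75,648
```

Quadrupling the channel widths multiplies the parameter count by roughly fifteen here, which translates directly into more operations per image at inference time.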
Collapse
Affiliation(s)
| | - Yenny Villuendas-Rey
- Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Mexico City 07738, Mexico
| | - Itzamá López-Yáñez
- Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Mexico City 07738, Mexico
| | - Oscar Camacho-Nieto
- Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Mexico City 07738, Mexico
| | - Cornelio Yáñez-Márquez
- Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico City 07700, Mexico
| |
Collapse
|