1. Van Diepen MR, Wijffels MME, Verhofstad MHJ, Van Lieshout EMM. Classification methods of pulmonary contusion based on chest CT and the association with in-hospital outcomes: a systematic review of literature. Eur J Trauma Emerg Surg 2024. [PMID: 39251438] [DOI: 10.1007/s00068-024-02666-w]
Abstract
INTRODUCTION: Patients sustaining pulmonary contusion (PC) have a higher risk of complications and long-term respiratory difficulty. Computed tomography (CT) has high sensitivity for PC; however, because PC develops over time, CT scans obtained immediately post-trauma may underestimate its full extent. This creates a need to better define which PC patients are more likely to develop complications. The aim of this systematic review was to identify classification systems for PC and to investigate the association between the amount of PC and in-hospital outcomes.
METHODS: A systematic review was conducted in accordance with PRISMA guidelines. Studies reporting a CT-based classification system for PC after blunt thoracic trauma were included. Outcomes were the classification method of PC and the relation between classification and pulmonary complications and in-hospital outcomes.
RESULTS: Twenty studies were included, with 49 to 148,140 patients per study. The most common classification system was the percentage of contused lung volume. Other methods were based on the Blunt Pulmonary Contusion 6 (BPC-6) and 18 (BPC-18) scores, the Abbreviated Injury Scale (AIS), and Thoracic Trauma Severity scores. Worse outcomes were generally associated with contusion volumes exceeding thresholds of 18% to 24%.
DISCUSSION: The heterogeneity of the available literature makes comparing classification methods challenging. The most common classification of PC was based on volumetric analysis. Calculating the percentage of contused lung volume allows a higher level of segmentation of lung parenchyma than BPC-6, BPC-18, or AIS. Contusion volume exceeding 18-24% was generally associated with worse outcomes.
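The volumetric classification most studies in this review rely on reduces to a simple ratio of segmented voxel counts; a minimal sketch (function names and voxel counts are hypothetical, for illustration only) applying the 18-24% outcome thresholds reported above:

```python
def contusion_percentage(contused_voxels: int, total_lung_voxels: int) -> float:
    """Contused lung volume as a percentage of total lung volume.

    Voxel counts would come from CT segmentations of the contusion and whole lung.
    """
    if total_lung_voxels <= 0:
        raise ValueError("empty lung segmentation")
    return 100.0 * contused_voxels / total_lung_voxels


def exceeds_outcome_threshold(percentage: float, threshold: float = 18.0) -> bool:
    """Flag contusion burdens above the 18-24% range associated with worse outcomes."""
    return percentage > threshold


pct = contusion_percentage(9_000, 45_000)   # 20% of the lung is contused
high_risk = exceeds_outcome_threshold(pct)  # True at the default 18% cutoff
```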
Affiliation(s)
- Max R Van Diepen, Trauma Research Unit, Department of Surgery, Erasmus MC, University Medical Center Rotterdam, P.O. Box 2040, 3000 CA, Rotterdam, The Netherlands
- Mathieu M E Wijffels, Trauma Research Unit, Department of Surgery, Erasmus MC, University Medical Center Rotterdam, P.O. Box 2040, 3000 CA, Rotterdam, The Netherlands
- Michael H J Verhofstad, Trauma Research Unit, Department of Surgery, Erasmus MC, University Medical Center Rotterdam, P.O. Box 2040, 3000 CA, Rotterdam, The Netherlands
- Esther M M Van Lieshout, Trauma Research Unit, Department of Surgery, Erasmus MC, University Medical Center Rotterdam, P.O. Box 2040, 3000 CA, Rotterdam, The Netherlands
2. Zhao T, Meng X, Wang Z, Hu Y, Fan H, Han J, Zhu N, Niu F. Diagnostic evaluation of blunt chest trauma by imaging-based application of artificial intelligence: a review. Am J Emerg Med 2024; 85:35-43. [PMID: 39213808] [DOI: 10.1016/j.ajem.2024.08.019]
Abstract
Artificial intelligence (AI) is becoming increasingly integral to clinical practice, including imaging tasks associated with the diagnosis and evaluation of blunt chest trauma (BCT). Owing to advances in imaging-based deep learning, recent studies have demonstrated the efficacy of AI in the diagnosis of BCT, with a focus on rib fractures, pulmonary contusion, hemopneumothorax, and other injuries, marking significant clinical progress. However, the complicated nature of BCT makes comprehensive diagnosis and prognostic evaluation challenging, and current deep learning research concentrates on specific clinical contexts, limiting its utility in addressing the intricacies of BCT. Here, we review the available evidence on the potential utility of AI in BCT and identify the challenges impeding its development. This review offers insights on how to optimize the role of AI in the diagnostic evaluation of BCT, which can ultimately enhance patient care and outcomes in this critical clinical domain.
Affiliation(s)
- Tingting Zhao, Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin University, Tianjin, China
- Xianghong Meng, Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin University, Tianjin, China
- Zhi Wang, Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin University, Tianjin, China
- Yongcheng Hu, Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China
- Hongxing Fan, Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin Medical University, Tianjin, China
- Jun Han, Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin University, Tianjin, China
- Nana Zhu, Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin Medical University, Tianjin, China
- Feige Niu, Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin Medical University, Tianjin, China
3. Cheng CT, Lin HH, Hsu CP, Chen HW, Huang JF, Hsieh CH, Fu CY, Chung IF, Liao CH. Deep learning for automated detection and localization of traumatic abdominal solid organ injuries on CT scans. J Imaging Inform Med 2024; 37:1113-1123. [PMID: 38366294] [PMCID: PMC11169164] [DOI: 10.1007/s10278-024-01038-5]
Abstract
Computed tomography (CT) is the most commonly used diagnostic modality for blunt abdominal trauma (BAT), significantly influencing management. Deep learning models (DLMs) have shown great promise in enhancing various aspects of clinical practice, but limited literature is available on the use of DLMs specifically for trauma image evaluation. In this study, we developed a DLM aimed at detecting solid organ injuries to assist medical professionals in rapidly identifying life-threatening injuries. The study enrolled patients from a single trauma center who received abdominal CT scans between 2008 and 2017. Patients with spleen, liver, or kidney injury were categorized as the solid organ injury group, while others were considered negative cases. Only images acquired at the trauma center were included. A subset of images acquired in the last year was designated as the test set, and the remaining images were used to train and validate the detection models. The performance of each model was assessed using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, positive predictive value, and negative predictive value at the best Youden index operating point. The models were developed using 1302 (87%) scans for training and tested on 194 (13%) scans. The spleen injury model demonstrated an accuracy of 0.938 and a specificity of 0.952; the liver injury model, an accuracy of 0.820 and a specificity of 0.847; and the kidney injury model, an accuracy of 0.959 and a specificity of 0.989. We developed a DLM that can automate the detection of solid organ injuries on abdominal CT scans with acceptable diagnostic accuracy. It cannot replace the clinician, but it is a potential tool to accelerate therapeutic decisions in trauma care.
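The "best Youden index operating point" used to derive these metrics can be computed directly from scores and labels; a self-contained sketch (not the authors' code; names and toy data are illustrative):

```python
import numpy as np

def youden_operating_point(y_true, y_score):
    """Pick the score threshold maximizing Youden's J = sensitivity + specificity - 1.

    A case is called positive when its score >= threshold; candidate thresholds
    are the unique observed scores.
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_score = np.asarray(y_score, dtype=float)
    best_thr, best_sens, best_spec, best_j = 0.0, 0.0, 0.0, -1.0
    for thr in np.unique(y_score):
        pred = y_score >= thr
        sens = np.sum(pred & y_true) / max(y_true.sum(), 1)
        spec = np.sum(~pred & ~y_true) / max((~y_true).sum(), 1)
        j = sens + spec - 1.0
        if j > best_j:
            best_thr, best_sens, best_spec, best_j = thr, sens, spec, j
    return best_thr, best_sens, best_spec

# Toy example: three injured (label 1) and three uninjured (label 0) scans.
thr, sens, spec = youden_operating_point(
    [0, 0, 0, 1, 1, 1], [0.1, 0.4, 0.35, 0.8, 0.3, 0.9]
)
# thr = 0.8, sensitivity 2/3, specificity 1.0 on this toy data
```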
Affiliation(s)
- Chi-Tung Cheng, Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Hou-Hsien Lin, Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Chih-Po Hsu, Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Huan-Wu Chen, Department of Medical Imaging & Intervention, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Jen-Fu Huang, Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Chi-Hsun Hsieh, Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Chih-Yuan Fu, Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- I-Fang Chung, Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chien-Hung Liao, Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
4. Cheng CT, Ooyang CH, Kang SC, Liao CH. Applications of deep learning in trauma radiology: a narrative review. Biomed J 2024:100743. [PMID: 38679199] [DOI: 10.1016/j.bj.2024.100743]
Abstract
Diagnostic imaging is essential in modern trauma care for initial evaluation and for identifying injuries requiring intervention. Deep learning (DL) has become mainstream in medical image analysis and has shown promising efficacy for classification, segmentation, and lesion detection. This narrative review provides the fundamental concepts for developing DL algorithms in trauma imaging and presents an overview of current progress in each modality. DL has been applied to detect free fluid on Focused Assessment with Sonography for Trauma (FAST), to detect traumatic findings on chest and pelvic X-rays and on computed tomography (CT) scans, to identify intracranial hemorrhage on head CT, to detect vertebral fractures, and to identify injuries to organs such as the spleen, liver, and lungs on abdominal and chest CT. Future directions involve expanding dataset size and diversity through federated learning, enhancing model explainability and transparency to build clinician trust, and integrating multimodal data to provide more meaningful insights into traumatic injuries. Though some commercial artificial intelligence products are Food and Drug Administration-approved for clinical use in the trauma field, adoption remains limited, highlighting the need for multidisciplinary teams to engineer practical, real-world solutions. Overall, DL shows immense potential to improve the efficiency and accuracy of trauma imaging, but thoughtful development and validation are critical to ensure these technologies positively impact patient care.
Affiliation(s)
- Chi-Tung Cheng, Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Chun-Hsiang Ooyang, Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Shih-Ching Kang, Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Chien-Hung Liao, Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
5. Sarkar N, Zhang L, Campbell P, Liang Y, Li G, Khedr M, Khetan U, Dreizin D. Pulmonary contusion: automated deep learning-based quantitative visualization. Emerg Radiol 2023; 30:435-441. [PMID: 37318609] [PMCID: PMC10527354] [DOI: 10.1007/s10140-023-02149-2]
Abstract
PURPOSE: Rapid automated CT volumetry of pulmonary contusion may predict progression to acute respiratory distress syndrome (ARDS) and help guide early clinical management in at-risk trauma patients. This study aims to train and validate state-of-the-art deep learning models to quantify pulmonary contusion as a percentage of total lung volume (Lung Contusion Index, or auto-LCI) and to assess the relationship between auto-LCI and relevant clinical outcomes.
METHODS: 302 adult patients (age ≥ 18) with pulmonary contusion were retrospectively identified from reports between 2016 and 2021. nnU-Net was trained on manual contusion and whole-lung segmentations. Point-of-care candidate variables for multivariate regression included oxygen saturation, heart rate, and systolic blood pressure on admission. Logistic regression was used to assess ARDS risk, and Cox proportional hazards models were used to determine differences in ICU length of stay and mechanical ventilation time.
RESULTS: The mean Volume Similarity Index and mean Dice score were 0.82 and 0.67. The intraclass correlation coefficient and Pearson r between ground-truth and predicted volumes were 0.90 and 0.91. 38 (14%) patients developed ARDS. In bivariate analysis, auto-LCI was associated with ARDS (p < 0.001), ICU admission (p < 0.001), and need for mechanical ventilation (p < 0.001). In multivariate analyses, auto-LCI was associated with ARDS (p = 0.04), longer ICU length of stay (p = 0.02), and longer time on mechanical ventilation (p = 0.04). The AUC of multivariate regression to predict ARDS using auto-LCI and clinical variables was 0.70, while the AUC using auto-LCI alone was 0.68.
CONCLUSION: Increasing auto-LCI values corresponded with increased risk of ARDS, longer ICU admissions, and longer periods of mechanical ventilation.
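The two segmentation-quality and volumetry quantities this study reports (Dice score, contused percentage of total lung volume) are straightforward mask operations; a minimal NumPy sketch (toy 2D masks, not the authors' pipeline):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def lung_contusion_index(contusion_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    """Contused volume as a percentage of total lung volume (auto-LCI-style)."""
    return 100.0 * np.logical_and(contusion_mask, lung_mask).sum() / lung_mask.sum()

lung = np.ones((4, 4), dtype=bool)
contusion = np.zeros((4, 4), dtype=bool)
contusion[:2, :2] = True          # 4 of 16 lung voxels are contused
lci = lung_contusion_index(contusion, lung)  # 25.0 percent
```

In practice the masks would be 3D model outputs, and voxel counts would be scaled by voxel spacing to obtain absolute volumes.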
Affiliation(s)
- Nathan Sarkar, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, 22 S Greene St, Baltimore, MD, 21201, USA
- Lei Zhang, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, 22 S Greene St, Baltimore, MD, 21201, USA
- Peter Campbell, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, 22 S Greene St, Baltimore, MD, 21201, USA
- Yuanyuan Liang, Department of Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
- Guang Li, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, 22 S Greene St, Baltimore, MD, 21201, USA
- Mustafa Khedr, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, 22 S Greene St, Baltimore, MD, 21201, USA
- Udit Khetan, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, 22 S Greene St, Baltimore, MD, 21201, USA
- David Dreizin, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, 22 S Greene St, Baltimore, MD, 21201, USA
6. Dreizin D, Zhang L, Sarkar N, Bodanapally UK, Li G, Hu J, Chen H, Khedr M, Khetan U, Campbell P, Unberath M. Accelerating voxelwise annotation of cross-sectional imaging through AI collaborative labeling with quality assurance and bias mitigation. Front Radiol 2023; 3:1202412. [PMID: 37485306] [PMCID: PMC10362988] [DOI: 10.3389/fradi.2023.1202412]
Abstract
BACKGROUND: Precision-medicine quantitative tools for cross-sectional imaging require painstaking labeling of targets that vary considerably in volume, prohibiting the scaling of data annotation efforts and supervised training to the large datasets needed for robust and generalizable clinical performance. A straightforward time-saving strategy involves manual editing of AI-generated labels, which we call AI-collaborative labeling (AICL). Factors affecting the efficacy and utility of such an approach are unknown: the reduction in time effort is not well documented, and edited AI labels may be prone to automation bias.
PURPOSE: In this pilot, using a cohort of CTs with intracavitary hemorrhage, we evaluate both time savings and AICL label quality, and propose criteria that must be met for using AICL annotations as a high-throughput, high-quality ground truth.
METHODS: 57 CT scans of patients with traumatic intracavitary hemorrhage were included. No participant recruited for this study had previously interpreted the scans. nnU-Net models trained on small existing datasets for each feature (hemothorax/hemoperitoneum/pelvic hematoma; n = 77-253) were used in inference. Two common scenarios served as baseline comparisons: de novo expert manual labeling, and expert edits of trained staff labels. Parameters included time effort and label quality, graded by a blinded independent expert using a 9-point scale. The observer also attempted to discriminate AICL and expert labels in a random subset (n = 18). Data were compared with ANOVA and post-hoc paired signed-rank tests with Bonferroni correction.
RESULTS: AICL reduced time effort 2.8-fold compared with staff label editing and 8.7-fold compared with expert labeling (corrected p < 0.0006). Mean Likert grades for AICL (8.4, SD 0.6) were significantly higher than for expert labels (7.8, SD 0.9) and edited staff labels (7.7, SD 0.8) (corrected p < 0.0006). The independent observer failed to correctly discriminate AI and human labels.
CONCLUSION: For our use case and annotators, AICL facilitates rapid large-scale curation of high-quality ground truth. The proposed quality control regime can be employed by other investigators prior to embarking on AICL for segmentation tasks in large datasets.
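The Bonferroni correction applied to the post-hoc signed-rank comparisons above is the simplest family-wise adjustment; a one-function sketch (raw p-values are hypothetical):

```python
def bonferroni(p_values):
    """Bonferroni family-wise correction: each p-value is multiplied by the
    number of comparisons in the family and capped at 1.0."""
    m = len(p_values)
    return [min(1.0, m * p) for p in p_values]


# Three post-hoc pairwise comparisons (hypothetical raw p-values):
adjusted = bonferroni([0.0001, 0.02, 0.7])  # roughly [0.0003, 0.06, 1.0]
```

A corrected p-value is then compared against the usual alpha (e.g. 0.05), which is what makes thresholds like "corrected p < 0.0006" meaningful across the whole family of tests.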
Affiliation(s)
- David Dreizin, Department of Diagnostic Radiology and Nuclear Medicine, School of Medicine, University of Maryland, Baltimore, MD, United States
- Lei Zhang, Department of Diagnostic Radiology and Nuclear Medicine, School of Medicine, University of Maryland, Baltimore, MD, United States
- Nathan Sarkar, Department of Diagnostic Radiology and Nuclear Medicine, School of Medicine, University of Maryland, Baltimore, MD, United States
- Uttam K. Bodanapally, Department of Diagnostic Radiology and Nuclear Medicine, School of Medicine, University of Maryland, Baltimore, MD, United States
- Guang Li, Department of Diagnostic Radiology and Nuclear Medicine, School of Medicine, University of Maryland, Baltimore, MD, United States
- Jiazhen Hu, Johns Hopkins University, Baltimore, MD, United States
- Haomin Chen, Johns Hopkins University, Baltimore, MD, United States
- Mustafa Khedr, Department of Diagnostic Radiology and Nuclear Medicine, School of Medicine, University of Maryland, Baltimore, MD, United States
- Udit Khetan, Department of Diagnostic Radiology and Nuclear Medicine, School of Medicine, University of Maryland, Baltimore, MD, United States
- Peter Campbell, Department of Diagnostic Radiology and Nuclear Medicine, School of Medicine, University of Maryland, Baltimore, MD, United States
7. Li N, Wu Z, Jiang C, Sun L, Li B, Guo J, Liu F, Zhou Z, Qin H, Tan W, Tian L. An automatic fresh rib fracture detection and positioning system using deep learning. Br J Radiol 2023; 96:20221006. [PMID: 36972072] [PMCID: PMC10230380] [DOI: 10.1259/bjr.20221006]
Abstract
OBJECTIVE: To evaluate the performance and robustness of a deep learning-based automatic fresh rib fracture detection and positioning system (FRF-DPS).
METHODS: CT scans of 18,172 participants admitted to eight hospitals from June 2009 to March 2019 were retrospectively collected. Patients were divided into a development set (n = 14,241), a multicenter internal test set (n = 1612), and an external test set (n = 2319). In the internal test set, sensitivity, false positives (FPs), and specificity were used to assess fresh rib fracture detection performance at the lesion and examination levels. In the external test set, the performance of the radiologist and of FRF-DPS in detecting fresh rib fractures was evaluated at the lesion, rib, and examination levels. Additionally, the accuracy of FRF-DPS in rib positioning was assessed against ground-truth labeling.
RESULTS: In the multicenter internal test set, FRF-DPS showed excellent performance at the lesion level (sensitivity: 0.933 [95% CI, 0.916-0.949]; FPs: 0.50 [95% CI, 0.397-0.583]) and the examination level. In the external test set, the lesion-level sensitivity and FPs of FRF-DPS (0.909 [95% CI, 0.883-0.926], p < 0.001; 0.379 [95% CI, 0.303-0.422], p = 0.001) were better than those of the radiologist (0.789 [95% CI, 0.766-0.807]; 0.496 [95% CI, 0.383-0.571]), as were results at the rib and patient levels. In subgroup analyses of CT parameters, FRF-DPS was robust (0.894-0.927). Finally, FRF-DPS (0.997 [95% CI, 0.992-1.000], p < 0.001) was more accurate than the radiologist (0.981 [95% CI, 0.969-0.996]) in rib positioning and took 20 times less time.
CONCLUSION: FRF-DPS achieved a high detection rate for fresh rib fractures with few false positives and precise rib positioning, and can therefore be used in clinical practice to improve detection rates and work efficiency.
ADVANCES IN KNOWLEDGE: We developed the FRF-DPS system, which detects fresh rib fractures and their rib positions, and evaluated it on a large amount of multicenter data.
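The lesion-level sensitivity and FPs-per-examination figures above pool detection counts across exams; a minimal sketch of that pooling (toy counts, illustrative function name):

```python
def lesion_level_metrics(per_exam_counts):
    """Pool per-examination lesion counts into lesion-level sensitivity and
    mean false positives (FPs) per examination.

    Each tuple holds (true positives, false negatives, false positives) for one exam.
    """
    tp = sum(c[0] for c in per_exam_counts)
    fn = sum(c[1] for c in per_exam_counts)
    fp = sum(c[2] for c in per_exam_counts)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    fps_per_exam = fp / len(per_exam_counts)
    return sensitivity, fps_per_exam


# Three exams: (detected fractures, missed fractures, false calls)
sens, fps = lesion_level_metrics([(3, 0, 1), (2, 1, 0), (4, 1, 1)])
# sensitivity 9/11, and 2 false calls over 3 exams
```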
Affiliation(s)
- Ning Li, Department of Radiology, Fushun Central Hospital of Liaoning Province, Fushun, Liaoning Province, China
- Zhe Wu, Department of Radiology, Fushun Central Hospital of Liaoning Province, Fushun, Liaoning Province, China
- Chao Jiang, Department of Radiology, Fushun Central Hospital of Liaoning Province, Fushun, Liaoning Province, China
- Lulu Sun, Department of Radiology, Fushun Central Hospital of Liaoning Province, Fushun, Liaoning Province, China
- Bingyao Li, Department of Radiology, Fushun Central Hospital of Liaoning Province, Fushun, Liaoning Province, China
- Jun Guo, Department of Radiology, Fushun Central Hospital of Liaoning Province, Fushun, Liaoning Province, China
- Feng Liu, Deepwise Artificial Intelligence (AI) Lab, Deepwise Inc., Beijing, China
- Zhen Zhou, Deepwise Artificial Intelligence (AI) Lab, Deepwise Inc., Beijing, China
- Haibo Qin, Department of Radiology, Fushun Central Hospital of Liaoning Province, Fushun, Liaoning Province, China
- Weixiong Tan, Deepwise Artificial Intelligence (AI) Lab, Deepwise Inc., Beijing, China
- Lufeng Tian, Department of Radiology, Fushun Central Hospital of Liaoning Province, Fushun, Liaoning Province, China
8. Agrawal A, Khatri GD, Khurana B, Sodickson AD, Liang Y, Dreizin D. A survey of ASER members on artificial intelligence in emergency radiology: trends, perceptions, and expectations. Emerg Radiol 2023; 30:267-277. [PMID: 36913061] [PMCID: PMC10362990] [DOI: 10.1007/s10140-023-02121-0]
Abstract
PURPOSE: There is a growing body of diagnostic performance studies for emergency radiology-related artificial intelligence/machine learning (AI/ML) tools; however, little is known about user preferences, concerns, experiences, expectations, and the degree of penetration of AI tools in emergency radiology. Our aim was to survey current trends, perceptions, and expectations regarding AI among American Society of Emergency Radiology (ASER) members.
METHODS: An anonymous, voluntary online survey questionnaire was e-mailed to all ASER members, followed by two reminder e-mails. A descriptive analysis of the data was conducted and the results summarized.
RESULTS: A total of 113 members responded (response rate 12%). The majority were attending radiologists (90%) with more than 10 years' experience (80%) and from an academic practice (65%). Most (55%) reported use of commercial AI CAD tools in their practice. Workflow prioritization based on pathology detection, injury or disease severity grading and classification, quantitative visualization, and auto-population of structured reports were identified as high-value tasks. Respondents overwhelmingly indicated a need for explainable and verifiable tools (87%) and for transparency in the development process (80%). Most did not feel that AI would reduce the need for emergency radiologists in the next two decades (72%) or diminish interest in fellowship programs (58%). Negative perceptions pertained to potential automation bias (23%), over-diagnosis (16%), poor generalizability (15%), negative impact on training (11%), and impediments to workflow (10%).
CONCLUSION: ASER member respondents are generally optimistic about the impact of AI on the practice of emergency radiology and on the popularity of emergency radiology as a subspecialty. The majority expect to see transparent and explainable AI models with the radiologist as the decision-maker.
Affiliation(s)
- Anjali Agrawal, New Delhi operations, Teleradiology Solutions, Delhi, India
- Garvit D Khatri, Nuclear Medicine, Department of Radiology, University of Washington School of Medicine, Seattle, WA, USA
- Bharti Khurana, Emergency Radiology, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Aaron D Sodickson, Emergency Radiology, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Yuanyuan Liang, Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
- David Dreizin, Trauma and Emergency Radiology, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA
9. Dreizin D. The American Society of Emergency Radiology (ASER) AI/ML expert panel: inception, mandate, work products, and goals. Emerg Radiol 2023; 30:279-283. [PMID: 37071272] [DOI: 10.1007/s10140-023-02135-8]
Affiliation(s)
- David Dreizin, Emergency and Trauma Imaging, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA
10. Dreizin D, Staziaki PV, Khatri GD, Beckmann NM, Feng Z, Liang Y, Delproposto ZS, Klug M, Spann JS, Sarkar N, Fu Y. Artificial intelligence CAD tools in trauma imaging: a scoping review from the American Society of Emergency Radiology (ASER) AI/ML Expert Panel. Emerg Radiol 2023; 30:251-265. [PMID: 36917287] [PMCID: PMC10640925] [DOI: 10.1007/s10140-023-02120-1]
Abstract
BACKGROUND: AI/ML CAD tools can potentially improve outcomes in the high-stakes, high-volume model of trauma radiology. No prior scoping review has comprehensively assessed tools in this subspecialty.
PURPOSE: To map the evolution and current state of trauma radiology CAD tools along key dimensions of technology readiness.
METHODS: Following a database search, abstract screening, and full-text review, CAD tool maturity was charted using elements of data curation, performance validation, outcomes research, explainability, user acceptance, and funding patterns. Descriptive statistics were used to illustrate key trends.
RESULTS: A total of 4052 records were screened, and 233 full-text articles were selected for content analysis. Twenty-one papers described FDA-approved commercial tools, and 212 reported algorithm prototypes. Works ranged from foundational research to multi-reader multi-case trials with heterogeneous external data. Scalable convolutional neural network-based implementations increased steeply after 2016 and were used in all commercial products; however, options for explainability were narrow. Of the FDA-approved tools, 9 of 10 performed detection tasks. Dataset sizes ranged from fewer than 100 to more than 500,000 patients, and commercialization coincided with public dataset availability. Cross-sectional torso datasets were uniformly small. Data curation with ground-truth labeling by independent readers was uncommon. No papers assessed user acceptance, and no method included human-computer interaction. The USA and China had the highest research output and frequency of research funding.
CONCLUSIONS: Trauma imaging CAD tools are likely to improve patient care but remain at an early stage of maturity, with few FDA-approved products for a limited number of uses. The scarcity of high-quality annotated data remains a major barrier.
Affiliation(s)
- David Dreizin, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA
- Pedro V Staziaki, Cardiothoracic Imaging, Department of Radiology, Larner College of Medicine, University of Vermont, Burlington, VT, USA
- Garvit D Khatri, Department of Radiology, University of Washington School of Medicine, Seattle, WA, USA
- Nicholas M Beckmann, Memorial Hermann Orthopedic & Spine Hospital, McGovern Medical School at UTHealth, Houston, TX, USA
- Zhaoyong Feng, Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
- Yuanyuan Liang, Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
- Zachary S Delproposto, Division of Emergency Radiology, Department of Radiology, University of Michigan, Ann Arbor, MI, USA
- J Stephen Spann, Department of Radiology, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL, USA
- Nathan Sarkar, University of Maryland School of Medicine, Baltimore, MD, USA
- Yunting Fu, Health Sciences and Human Services Library, University of Maryland, Baltimore, Baltimore, MD, USA