1
Ventura CAI, Denton EE, David JA. Artificial Intelligence in Emergency Trauma Care: A Preliminary Scoping Review. Medical Devices: Evidence and Research 2024;17:191-211. [PMID: 38803707] [PMCID: PMC11129754] [DOI: 10.2147/mder.s467146]
Abstract
This study aimed to analyze the use of generative artificial intelligence in the emergency trauma care setting through a brief scoping review of literature published between 2014 and 2024. A search of the NCBI repository using a string of selected keywords returned N=87 results; articles that met the inclusion criteria (n=28) were reviewed and analyzed. Sources of heterogeneity were explored and identified using a significance threshold of P < 0.10 or an I² value exceeding 50%. Where applicable, articles were categorized within three primary domains: triage, diagnostics, or treatment. Findings suggest that convolutional neural networks (CNNs) demonstrate strong diagnostic performance for diverse traumatic injuries, but generalized integration requires expanded prospective multi-center validation. Injury scoring models currently exhibit calibration gaps in mortality quantification and lesion localization that can undermine clinical utility by permitting false negatives. Triage prediction models confront barriers of transparency, explainability, and healthcare ecosystem integration that limit real-world translation. The most significant gap in the literature centers on treatment-oriented generative AI applications that provide real-time guidance for urgent trauma interventions rather than analytical support alone.
Affiliation(s)
- Christian Angelo I Ventura
- Department of Health, Behavior and Society, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA; Department of Allied Health, Baltimore City Community College, Baltimore, MD, USA
- Edward E Denton
- Department of Emergency Medicine, University of Arkansas for Medical Sciences, Little Rock, AR, USA; Fay W. Boozman College of Public Health, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Jessica A David
- Department of Biochemistry and Microbiology, Rutgers University, New Brunswick, NJ, USA
2
Yi PH, Garner HW, Hirschmann A, Jacobson JA, Omoumi P, Oh K, Zech JR, Lee YH. Clinical Applications, Challenges, and Recommendations for Artificial Intelligence in Musculoskeletal and Soft-Tissue Ultrasound: AJR Expert Panel Narrative Review. AJR Am J Roentgenol 2024;222:e2329530. [PMID: 37436032] [DOI: 10.2214/ajr.23.29530]
Abstract
Artificial intelligence (AI) is increasingly used in clinical practice for musculoskeletal imaging tasks, such as disease diagnosis and image reconstruction. AI applications in musculoskeletal imaging have focused primarily on radiography, CT, and MRI. Although musculoskeletal ultrasound stands to benefit from AI in similar ways, such applications have been relatively underdeveloped. In comparison with other modalities, ultrasound has unique advantages and disadvantages that must be considered in AI algorithm development and clinical translation. Challenges in developing AI for musculoskeletal ultrasound involve both clinical aspects of image acquisition and practical limitations in image processing and annotation. Solutions from other radiology subspecialties (e.g., crowdsourced annotations coordinated by professional societies), along with use cases (most commonly rotator cuff tendon tears and palpable soft-tissue masses), can be applied to musculoskeletal ultrasound to help develop AI. To facilitate creation of high-quality imaging datasets for AI model development, technologists and radiologists should focus on increasing uniformity in musculoskeletal ultrasound performance and increasing annotations of images for specific anatomic regions. This Expert Panel Narrative Review summarizes available evidence regarding AI's potential utility in musculoskeletal ultrasound and challenges facing its development. Recommendations for future AI advancement and clinical translation in musculoskeletal ultrasound are discussed.
Affiliation(s)
- Paul H Yi
- University of Maryland Medical Intelligent Imaging Center, University of Maryland School of Medicine, Baltimore, MD
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD
- Anna Hirschmann
- Imamed Radiology Nordwest, Basel, Switzerland
- Department of Radiology, University of Basel, Basel, Switzerland
- Jon A Jacobson
- Lenox Hill Radiology, New York, NY
- Department of Radiology, University of California, San Diego Medical Center, San Diego, CA
- Patrick Omoumi
- Department of Radiology, Lausanne University Hospital, Lausanne, Switzerland
- Department of Radiology, University of Lausanne, Lausanne, Switzerland
- Kangrok Oh
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea
- John R Zech
- Department of Radiology, Columbia University Irving Medical Center, New York-Presbyterian Hospital, New York, NY
- Young Han Lee
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea
3
Tripathi S, Tabari A, Mansur A, Dabbara H, Bridge CP, Daye D. From Machine Learning to Patient Outcomes: A Comprehensive Review of AI in Pancreatic Cancer. Diagnostics (Basel) 2024;14:174. [PMID: 38248051] [PMCID: PMC10814554] [DOI: 10.3390/diagnostics14020174]
Abstract
Pancreatic cancer is a highly aggressive and difficult-to-detect cancer with a poor prognosis. Late diagnosis is common because of a lack of early symptoms, specific markers, and the challenging location of the pancreas. Imaging technologies have improved diagnosis, but there is still room for improvement in standardizing guidelines. Biopsies and histopathological analysis are challenging because of tumor heterogeneity. Artificial intelligence (AI) is transforming healthcare by improving diagnosis, treatment, and patient care. AI algorithms can analyze medical images with precision, aiding early disease detection. AI also plays a role in personalized medicine by analyzing patient data to tailor treatment plans. It streamlines administrative tasks, such as medical coding and documentation, and provides patient assistance through AI chatbots. Challenges remain, however, including data privacy, security, and ethical considerations. This review article focuses on the potential of AI to transform pancreatic cancer care, offering improved diagnostics, personalized treatments, and operational efficiency, leading to better patient outcomes.
Affiliation(s)
- Satvik Tripathi
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Harvard Medical School, Boston, MA 02115, USA
- Azadeh Tabari
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Boston, MA 02115, USA
- Arian Mansur
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Boston, MA 02115, USA
- Harika Dabbara
- Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, USA
- Christopher P. Bridge
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Harvard Medical School, Boston, MA 02115, USA
- Dania Daye
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Harvard Medical School, Boston, MA 02115, USA
4
Pham TD, Holmes SB, Coulthard P. A review on artificial intelligence for the diagnosis of fractures in facial trauma imaging. Front Artif Intell 2024;6:1278529. [PMID: 38249794] [PMCID: PMC10797131] [DOI: 10.3389/frai.2023.1278529]
Abstract
Patients with facial trauma may suffer injuries such as broken bones, bleeding, swelling, bruising, lacerations, burns, and facial deformity. Common causes of facial-bone fractures include road accidents, violence, and sports injuries. Surgery is indicated when radiologic findings show that the patient would otherwise lose normal function or be left with facial deformity. Although image reading by radiologists is useful for evaluating suspected facial fractures, human-based diagnostics faces certain challenges. Artificial intelligence (AI) is making a quantum leap in radiology, producing significant improvements in reports and workflows. Here, an updated literature review is presented on the impact of AI in facial trauma, with special reference to fracture detection in radiology. The purpose is to gain insight into current developments and the demand for future research in facial trauma. This review also discusses limitations to be overcome and important open issues for investigation, so that AI applications to trauma become more effective and realistic in practical settings. The publications selected for review were chosen on the basis of their clinical significance, journal metrics, and journal indexing.
Affiliation(s)
- Tuan D. Pham
- Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
5
Lu X, Chang EY, Du J, Yan A, McAuley J, Gentili A, Hsu CN. Robust Multi-View Fracture Detection in the Presence of Other Abnormalities Using HAMIL-Net. Mil Med 2023;188:590-597. [PMID: 37948284] [DOI: 10.1093/milmed/usad252]
Abstract
INTRODUCTION Foot and ankle fractures are the most common military health problem. Automated diagnosis can save time and personnel. It is crucial that fracture detection not only distinguishes fractures from normal healthy cases but also remains robust in the presence of other orthopedic pathologies. Deep learning, a form of artificial intelligence (AI), has shown promise for this task. We previously developed HAMIL-Net to automatically detect orthopedic injuries in the upper extremities. In this research, we investigated the performance of HAMIL-Net for detecting foot and ankle fractures in the presence of other abnormalities. MATERIALS AND METHODS HAMIL-Net is a novel deep neural network consisting of a hierarchical attention layer followed by a multiple-instance learning layer, a design that allows it to handle imaging studies with multiple views. We used 148K musculoskeletal imaging studies from 51K Veterans at VA San Diego over the past 20 years to create datasets for this research. We annotated each study with a semi-automated pipeline that leveraged radiology reports written by board-certified radiologists, extracting findings with a natural language processing tool, and manually validated the annotations. RESULTS HAMIL-Net can be trained with study-level, multiple-view examples and detected foot and ankle fractures with a 0.87 area under the receiver operating characteristic curve, but performance dropped when tested on cases that included other abnormalities. By integrating a fracture-specialized model with one that detects a broad range of abnormalities, HAMIL-Net's accuracy in detecting any abnormality improved from 0.53 to 0.77 and its F-score from 0.46 to 0.86. We also report HAMIL-Net's performance for different study types, including for young (age 18-35) patients. CONCLUSIONS Automated fracture detection is promising, but to deliver its full benefit in clinical use, the presence of other abnormalities must be considered. Our results with HAMIL-Net show that accounting for other abnormalities improved fracture detection and allowed incidental findings of other musculoskeletal abnormalities pertinent to or superimposed on fractures.
Affiliation(s)
- Xing Lu
- University of California, San Diego, La Jolla, CA 92093, USA
- Eric Y Chang
- University of California, San Diego, La Jolla, CA 92093, USA
- VA San Diego Healthcare System, San Diego, CA 92161, USA
- Jiang Du
- University of California, San Diego, La Jolla, CA 92093, USA
- An Yan
- University of California, San Diego, La Jolla, CA 92093, USA
- Julian McAuley
- University of California, San Diego, La Jolla, CA 92093, USA
- Amilcare Gentili
- University of California, San Diego, La Jolla, CA 92093, USA
- VA San Diego Healthcare System, San Diego, CA 92161, USA
- Chun-Nan Hsu
- University of California, San Diego, La Jolla, CA 92093, USA
- VA San Diego Healthcare System, San Diego, CA 92161, USA
- VA National Artificial Intelligence Institute, Washington, DC 20422, USA
6
Zech JR, Jaramillo D, Altosaar J, Popkin CA, Wong TT. Artificial intelligence to identify fractures on pediatric and young adult upper extremity radiographs. Pediatr Radiol 2023;53:2386-2397. [PMID: 37740031] [DOI: 10.1007/s00247-023-05754-y]
Abstract
BACKGROUND Pediatric fractures are challenging to identify given the different response of the pediatric skeleton to injury compared to adults, and most artificial intelligence (AI) fracture detection work has focused on adults. OBJECTIVE To develop and transparently share an AI model capable of detecting a range of pediatric upper extremity fractures. MATERIALS AND METHODS In total, 58,846 upper extremity radiographs (finger/hand, wrist/forearm, elbow, humerus, shoulder/clavicle) from 14,873 pediatric and young adult patients were divided into train (n = 12,232 patients), tune (n = 1,307), internal test (n = 819), and external test (n = 515) splits. Fracture was determined by manual inspection of all test radiographs and of the subset of train/tune radiographs whose reports were classified fracture-positive by a rule-based natural language processing (NLP) algorithm. We trained an object detection model (Faster Region-based Convolutional Neural Network [R-CNN]; "strongly supervised") and an image classification model (EfficientNetV2-Small; "weakly supervised") to detect fractures using the train/tune data and evaluated them on the test data. AI fracture detection accuracy was compared with the accuracy of on-call residents on cases they preliminarily interpreted overnight. RESULTS The strongly supervised fracture detection AI model achieved an overall test area under the receiver operating characteristic curve (AUC) of 0.96 (95% CI 0.95-0.97), accuracy 89.7% (95% CI 88.0-91.3%), sensitivity 90.8% (95% CI 88.5-93.1%), and specificity 88.7% (95% CI 86.4-91.0%), and outperformed the weakly supervised model (AUC 0.93, 95% CI 0.92-0.94, P < 0.0001). AI accuracy on cases preliminarily interpreted overnight was higher than resident accuracy (AI 89.4% vs. 85.1%, 95% CI 87.3-91.5% vs. 82.7-87.5%, P = 0.01). CONCLUSION An object detection AI model identified pediatric upper extremity fractures with high accuracy.
Affiliation(s)
- John R Zech
- Department of Radiology, Columbia University Irving Medical Center, 622 W. 168th St., New York, NY, 10032, USA
- Diego Jaramillo
- Department of Radiology, Columbia University Irving Medical Center, 622 W. 168th St., New York, NY, 10032, USA
- Charles A Popkin
- Department of Orthopedic Surgery, Columbia University Irving Medical Center, New York, NY, USA
- Tony T Wong
- Department of Radiology, Columbia University Irving Medical Center, 622 W. 168th St., New York, NY, 10032, USA
7
Forghani R. A Practical Guide for AI Algorithm Selection for the Radiology Department. Semin Roentgenol 2023;58:208-213. [PMID: 37087142] [DOI: 10.1053/j.ro.2023.02.006]
Abstract
There is a steadily increasing number of artificial intelligence (AI) tools available and cleared for use in clinical radiological practice. Radiologists will increasingly be faced with options provided by radiologist colleagues, clinician colleagues, vendors, or other professionals for obtaining and deploying AI algorithms in clinical practice. It is important that radiologists be familiar with the basic and practical aspects of assessing an AI tool for use in their practice, so that resources are properly allocated and there is an appropriate return on investment through enhancements in patient quality of care, safety, and/or process efficiency. In this review, we discuss a potential approach to AI software assessment and practical points to weigh when considering the acquisition and deployment of an AI tool in the radiology department.
8
Ye P, Li S, Wang Z, Tian S, Luo Y, Wu Z, Zhuang Y, Zhang Y, Grzegorzek M, Hou Z. Development and validation of a deep learning-based model to distinguish acetabular fractures on pelvic anteroposterior radiographs. Front Physiol 2023;14:1146910. [PMID: 37187961] [PMCID: PMC10176114] [DOI: 10.3389/fphys.2023.1146910]
Abstract
Objective: To develop and test a deep learning (DL) model to distinguish acetabular fractures (AFs) on pelvic anteroposterior radiographs (PARs) and compare its performance to that of clinicians. Materials and methods: A total of 1,120 patients from a large level-I trauma center were enrolled and allocated at a 3:1 ratio for the DL model's development and internal testing. Another 86 patients from two independent hospitals were collected for external validation. A DL model for identifying AFs was constructed based on DenseNet. AFs were classified into types A, B, and C according to the three-column classification theory. Ten clinicians were recruited for AF detection. A potentially misdiagnosed case (PMC) was defined based on the clinicians' detection results. The detection performance of the clinicians and the DL model was evaluated and compared, and the DL model's performance on the different subtypes was assessed using the area under the receiver operating characteristic curve (AUC). Results: The mean sensitivity, specificity, and accuracy of the 10 clinicians in identifying AFs were 0.750/0.735, 0.909/0.909, and 0.829/0.822 in the internal test/external validation set, respectively. The sensitivity, specificity, and accuracy of the DL model were 0.926/0.872, 0.978/0.988, and 0.952/0.930, respectively. The DL model identified type A fractures with an AUC of 0.963 [95% confidence interval (CI): 0.927-0.985]/0.950 (95% CI: 0.867-0.989); type B fractures with an AUC of 0.991 (95% CI: 0.967-0.999)/0.989 (95% CI: 0.930-1.000); and type C fractures with an AUC of 1.000 (95% CI: 0.975-1.000)/1.000 (95% CI: 0.897-1.000) in the test/validation set. The DL model correctly recognized 56.5% (26/46) of PMCs. Conclusion: A DL model for distinguishing AFs on PARs is feasible. In this study, the DL model achieved diagnostic performance comparable or even superior to that of the clinicians.
Affiliation(s)
- Pengyu Ye
- Third Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Sihe Li
- University of Lübeck, Lübeck, Schleswig-Holstein, Germany
- Zhongzheng Wang
- Third Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Siyu Tian
- Third Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Yi Luo
- Heidelberg University, Heidelberg, Baden-Württemberg, Germany
- Zhanyong Wu
- Orthopedic Hospital of Xingtai, Xingtai, China
- Yan Zhuang
- Xi’an Honghui Hospital, Xi’an, Shaanxi, China
- Yingze Zhang
- Third Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Zhiyong Hou
- Third Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- *Correspondence: Zhiyong Hou,
9
Artificial Intelligence in Orthopedic Radiography Analysis: A Narrative Review. Diagnostics (Basel) 2022;12:2235. [PMID: 36140636] [PMCID: PMC9498096] [DOI: 10.3390/diagnostics12092235]
Abstract
Artificial intelligence (AI) in medicine is a rapidly growing field. In orthopedics, the clinical implementations of AI have not yet reached their full potential. Deep learning algorithms have shown promising results on computed radiographs for fracture detection, classification of osteoarthritis (OA), bone-age assessment, and automated measurements of the lower extremities. Studies investigating the performance of AI compared with trained human readers often show equal or better results, although human validation remains indispensable at current standards. The objective of this narrative review is to give an overview of AI in medicine and summarize the current applications of AI in orthopedic radiography imaging. Because of differences in AI software and study design, it is difficult to find a clear structure in this field. To produce more homogeneous studies, open-source access to AI software code and a consensus on study design should be pursued.