1. Petrella RJ. The AI Future of Emergency Medicine. Ann Emerg Med 2024:S0196-0644(24)00043-X. PMID: 38795081. DOI: 10.1016/j.annemergmed.2024.01.031.
Abstract
In the coming years, artificial intelligence (AI) and machine learning will likely give rise to profound changes in the field of emergency medicine, and medicine more broadly. This article discusses these anticipated changes in terms of 3 overlapping yet distinct stages of AI development. It reviews some fundamental concepts in AI and explores their relation to clinical practice, with a focus on emergency medicine. In addition, it describes some of the applications of AI in disease diagnosis, prognosis, and treatment, as well as some of the practical issues that they raise, the barriers to their implementation, and some of the legal and regulatory challenges they create.
Affiliation(s)
- Robert J Petrella: Emergency Departments, CharterCARE Health Partners, Providence and North Providence, RI; Emergency Department, Boston VA Medical Center, Boston, MA; Emergency Departments, Steward Health Care System, Boston and Methuen, MA; Harvard Medical School, Boston, MA; Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA; Department of Medicine, Brigham and Women's Hospital, Boston, MA
2. Vande Vyvere T, Pisică D, Wilms G, Claes L, Van Dyck P, Snoeckx A, van den Hauwe L, Pullens P, Verheyden J, Wintermark M, Dekeyzer S, Mac Donald CL, Maas AIR, Parizel PM. Imaging Findings in Acute Traumatic Brain Injury: a National Institute of Neurological Disorders and Stroke Common Data Element-Based Pictorial Review and Analysis of Over 4000 Admission Brain Computed Tomography Scans from the Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) Study. J Neurotrauma 2024. PMID: 38482818. DOI: 10.1089/neu.2023.0553.
Abstract
In 2010, the National Institute of Neurological Disorders and Stroke (NINDS) created a set of common data elements (CDEs) to help standardize the assessment and reporting of imaging findings in traumatic brain injury (TBI). However, as opposed to other standardized radiology reporting systems, a visual overview and data to support the proposed standardized lexicon are lacking. We used over 4000 admission computed tomography (CT) scans of patients with TBI from the Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) study to develop an extensive pictorial overview of the NINDS TBI CDEs, with visual examples and background information on individual pathoanatomical lesion types, up to the level of supplemental and emerging information (e.g., location and estimated volumes). We documented the frequency of lesion occurrence, aiming to quantify the relative importance of different CDEs for characterizing TBI, and performed a critical appraisal of our experience with the intent to inform updating of the CDEs. In addition, we investigated the co-occurrence and clustering of lesion types and the distribution of six CT classification systems. The median age of the 4087 patients in our dataset was 50 years (interquartile range, 29-66; range, 0-96), including 238 patients under 18 years old (5.8%). Traumatic subarachnoid hemorrhage (45.3%), skull fractures (37.4%), contusions (31.3%), and acute subdural hematoma (28.9%) were the most frequently occurring CT findings in acute TBI. The ranking of these lesions was the same in patients with mild TBI (baseline Glasgow Coma Scale [GCS] score 13-15) compared with those with moderate-severe TBI (baseline GCS score 3-12), but the frequency of occurrence was up to three times higher in moderate-severe TBI. In most TBI patients with CT abnormalities, there was co-occurrence and clustering of different lesion types, with significant differences between mild and moderate-severe TBI patients. 
More specifically, lesion patterns were more complex in moderate-severe TBI patients, with more co-existing lesions and more frequent signs of mass effect. These patients also had higher and more heterogeneous CT score distributions, associated with worse predicted outcomes. The critical appraisal of the NINDS CDEs was highly positive, but revealed that full assessment can be time-consuming, that some CDEs occurred at very low frequencies, and that a few definitions were redundant or ambiguous. While the CDEs were primarily developed for research, implementation of CDE templates in clinical practice is advocated, although this will require development of an abbreviated version. In conclusion, this study provides an educational resource for clinicians and researchers to help assess, characterize, and report the vast and complex spectrum of imaging findings in patients with TBI. Our data provide a comprehensive overview of the contemporary landscape of TBI imaging pathology in Europe, and the findings can serve as empirical evidence for updating the current NINDS radiologic CDEs to version 3.0.
Affiliation(s)
- Thijs Vande Vyvere: Department of Radiology, Antwerp University Hospital, Antwerp, Belgium; Department of Molecular Imaging and Radiology (MIRA), Faculty of Medicine and Health Science, University of Antwerp, Antwerp, Belgium
- Dana Pisică: Department of Neurosurgery and Department of Public Health, Erasmus MC - University Medical Center Rotterdam, Rotterdam, the Netherlands
- Guido Wilms: Department of Radiology, University Hospitals Leuven, Leuven, Belgium
- Lene Claes: icometrix, Research and Development, Leuven, Belgium
- Pieter Van Dyck: Department of Radiology, Antwerp University Hospital, Antwerp, Belgium; Department of Molecular Imaging and Radiology (MIRA), Faculty of Medicine and Health Science, University of Antwerp, Antwerp, Belgium
- Annemiek Snoeckx: Department of Radiology, Antwerp University Hospital, Antwerp, Belgium; Department of Molecular Imaging and Radiology (MIRA), Faculty of Medicine and Health Science, University of Antwerp, Antwerp, Belgium
- Luc van den Hauwe: Department of Radiology, Antwerp University Hospital, Antwerp, Belgium
- Pim Pullens: Department of Imaging, University Hospital Ghent; IBITech/MEDISIP, Engineering and Architecture, Ghent University; Ghent Institute for Functional and Metabolic Imaging, Ghent University, Belgium
- Jan Verheyden: icometrix, Research and Development, Leuven, Belgium
- Max Wintermark: Department of Neuroradiology, University of Texas MD Anderson Center, Houston, Texas, USA
- Sven Dekeyzer: Department of Radiology, Antwerp University Hospital, Antwerp, Belgium; Department of Radiology, University Hospital Ghent, Belgium
- Christine L Mac Donald: Department of Neurological Surgery, School of Medicine, Harborview Medical Center, Seattle, Washington, USA; Department of Neurological Surgery, School of Medicine, University of Washington, Seattle, Washington, USA
- Andrew I R Maas: Department of Neurosurgery, Antwerp University Hospital, Antwerp, Belgium; Department of Translational Neuroscience, Faculty of Medicine and Health Science, University of Antwerp, Antwerp, Belgium
- Paul M Parizel: Department of Radiology, Royal Perth Hospital (RPH) and University of Western Australia (UWA), Perth, Australia; Western Australia National Imaging Facility (WA NIF) node, Australia
3. Davis MA, Wu O, Ikuta I, Jordan JE, Johnson MH, Quigley E. Understanding Bias in Artificial Intelligence: A Practice Perspective. AJNR Am J Neuroradiol 2024; 45:371-373. PMID: 38123951. DOI: 10.3174/ajnr.a8070.
Abstract
In the fall of 2021, several experts in this space delivered a Webinar hosted by the American Society of Neuroradiology (ASNR) Diversity and Inclusion Committee, focused on expanding the understanding of bias in artificial intelligence through a health equity lens, and provided key concepts for neuroradiologists to use in evaluating these tools. In this perspective, we distill key parts of that discussion, including why this topic is important to neuroradiologists, and offer insight into how neuroradiologists can develop a framework to assess health equity-related bias in artificial intelligence tools. In addition, we provide examples of clinical workflow implementation of these tools so that we can begin to see how artificial intelligence tools will shape discourse on equitable radiologic care. As continuous learners, we must stay engaged with the new and rapidly evolving technologies that emerge in our field. The Diversity and Inclusion Committee of the ASNR has addressed this subject matter through its programming content on health equity in neuroradiologic advances.
Affiliation(s)
- Melissa A Davis: Yale University (M.A.D., M.H.J.), New Haven, Connecticut
- Ona Wu: Massachusetts General Hospital (O.W.), Charlestown, Massachusetts
- Ichiro Ikuta: Mayo Clinic Arizona, Department of Radiology (I.I.), Phoenix, Arizona
- John E Jordan: Stanford University School of Medicine (J.E.J.), Stanford, California
4. Flory MN, Napel S, Tsai EB. Artificial Intelligence in Radiology: Opportunities and Challenges. Semin Ultrasound CT MR 2024; 45:152-160. PMID: 38403128. DOI: 10.1053/j.sult.2024.02.004.
Abstract
The emergence of artificial intelligence (AI) in radiology elicits both excitement and uncertainty. AI holds promise for improving clinical practice, education, and research opportunities in radiology. Yet AI systems are trained on select datasets that can contain bias and inaccuracies. Radiologists must understand these limitations and engage with AI developers at every step of the process, from algorithm initiation and design to development and implementation, to maximize the benefit and minimize the harm this technology can enable.
Affiliation(s)
- Marta N Flory: Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
- Sandy Napel: Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
- Emily B Tsai: Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
5. Puzio T, Matera K, Wiśniewski K, Grobelna M, Wanibuchi S, Jaskólski DJ, Bobeff EJ. Automated volumetric evaluation of intracranial compartments and cerebrospinal fluid distribution on emergency trauma head CT scans to quantify mass effect. Front Neurosci 2024; 18:1341734. PMID: 38445256. PMCID: PMC10913188. DOI: 10.3389/fnins.2024.1341734.
Abstract
Background Intracranial space is divided into three compartments by the falx cerebri and tentorium cerebelli. We assessed whether cerebrospinal fluid (CSF) distribution evaluated by a specifically developed deep-learning neural network (DLNN) could assist in quantifying mass effect. Methods Head trauma CT scans from a high-volume emergency department between 2018 and 2020 were retrospectively analyzed. Manual segmentations of intracranial compartments and CSF served as the ground truth to develop a DLNN model to automate the segmentation process. The Dice Similarity Coefficient (DSC) was used to evaluate segmentation performance. The Supratentorial CSF Ratio was calculated by dividing the volume of CSF on the side with reduced CSF reserve by the volume of CSF on the opposite side. Results Two hundred seventy-four patients (mean age, 61 ± 18.6 years) after traumatic brain injury (TBI) who underwent an emergency head CT scan were included. The average DSCs for the training and validation datasets were 0.782 and 0.765, respectively. Lower DSCs were observed for CSF segmentation: 0.589, 0.615, and 0.572 for the right supratentorial, left supratentorial, and infratentorial CSF regions in the training dataset, with slightly lower values in the validation dataset (0.567, 0.574, and 0.556, respectively). Twenty-two patients (8%) had midline shift exceeding 5 mm, and 24 (8.8%) presented with a high/mixed density lesion exceeding 25 ml. Fifty-five patients (20.1%) exhibited mass effect requiring neurosurgical treatment; they had lower supratentorial CSF volume and a lower Supratentorial CSF Ratio (both p < 0.001). A Supratentorial CSF Ratio below 60% had a sensitivity of 74.5% and a specificity of 87.7% (AUC 0.88, 95% CI 0.82-0.94) for identifying patients requiring neurosurgical treatment for mass effect.
In contrast, patients in whom CSF constituted 10-20% of the intracranial space, with 80-90% of that CSF in the supratentorial compartment, and whose Supratentorial CSF Ratio exceeded 80%, had minimal risk of requiring surgery. Conclusion CSF distribution may be presented as quantifiable ratios that help predict surgery in patients after TBI. Automated segmentation of intracranial compartments using the DLNN model demonstrates the potential of artificial intelligence to quantify mass effect. Further validation of the described method is necessary to confirm its efficacy in triaging patients and identifying those who require neurosurgical treatment.
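The two quantities driving this analysis are straightforward to compute once segmentation masks and compartment volumes are available: the Dice Similarity Coefficient used to score the DLNN segmentations, and the Supratentorial CSF Ratio used to flag mass effect. A minimal sketch in Python (the function names, example volumes, and the reading of the 60% threshold as a review flag are illustrative assumptions, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

def supratentorial_csf_ratio(csf_reduced_side_ml: float,
                             csf_opposite_side_ml: float) -> float:
    """CSF volume on the side with reduced reserve divided by the opposite side."""
    return csf_reduced_side_ml / csf_opposite_side_ml

# Hypothetical patient: 18 ml of CSF on the compressed side, 40 ml contralaterally.
ratio = supratentorial_csf_ratio(18.0, 40.0)
# Below the 60% cut-off reported as 74.5% sensitive / 87.7% specific for surgery.
needs_review = ratio < 0.60
```

A ratio near 1.0 indicates symmetric CSF reserve; values well below the threshold would, under the abstract's findings, suggest a patient worth prioritizing for neurosurgical evaluation.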
Affiliation(s)
- Tomasz Puzio: Department of Diagnostic Imaging, Polish Mothers' Memorial Hospital Research Institute, Łódź, Poland
- Katarzyna Matera: Department of Diagnostic Imaging, Polish Mothers' Memorial Hospital Research Institute, Łódź, Poland
- Karol Wiśniewski: Department of Neurosurgery and Neuro-Oncology, Barlicki University Hospital, Medical University of Lodz, Łódź, Poland
- Sora Wanibuchi: Department of Neurosurgery and Neuro-Oncology, Barlicki University Hospital, Medical University of Lodz, Łódź, Poland; Department of Anatomy, Aichi Medical University, Nagakute, Aichi, Japan
- Dariusz J. Jaskólski: Department of Neurosurgery and Neuro-Oncology, Barlicki University Hospital, Medical University of Lodz, Łódź, Poland
- Ernest J. Bobeff: Department of Neurosurgery and Neuro-Oncology, Barlicki University Hospital, Medical University of Lodz, Łódź, Poland; Department of Sleep Medicine and Metabolic Disorders, Medical University of Lodz, Łódź, Poland
6. Law W, Terzic A, Chaim J, Erinjeri JP, Hricak H, Vargas HA, Becker AS. Integrated Automatic Examination Assignment Reduces Radiologist Interruptions: A 2-Year Cohort Study of 232,022 Examinations. J Imaging Inform Med 2024; 37:25-30. PMID: 38343207. PMCID: PMC10976913. DOI: 10.1007/s10278-023-00917-7.
Abstract
Radiology departments face challenges in delivering timely and accurate imaging reports, especially in high-volume, subspecialized settings. In this retrospective cohort study at a tertiary cancer center, we assessed the efficacy of an Automatic Assignment System (AAS) in improving radiology workflow efficiency by analyzing 232,022 CT examinations over a 12-month period post-implementation and comparing them with a historical control period. The AAS was integrated with the hospital-wide scheduling system and set up to automatically prioritize and distribute unreported CT examinations to available radiologists based on upcoming patient appointments, coupled with an email notification system. Following AAS implementation, despite a 9% rise in CT volume (against a concurrent 8% increase in the number of available radiologists), the mean daily number of urgent radiology report requests (URR) decreased significantly by 60% (25 ± 12 to 10 ± 5, t = -17.6, p < 0.001), and URR during peak days (95th quantile) fell by 52.2%, from 46 to 22 requests. Additionally, the mean turnaround time (TAT) for reporting was significantly reduced: by 440 minutes for patients without immediate appointments and by 86 minutes for those with same-day appointments. Lastly, patient waiting time sampled in one of the outpatient clinics was not negatively affected. These results demonstrate that an AAS can substantially decrease workflow interruptions and improve reporting efficiency.
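The assignment logic described, pulling unreported examinations forward according to the patient's next appointment, amounts to a priority queue keyed on appointment time. The sketch below is purely hypothetical (all class, field, and function names are illustrative; the authors' integration with the hospital scheduling system is not described at code level):

```python
import heapq
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass(order=True)
class Exam:
    # Only sort_key participates in ordering: earlier upcoming appointments
    # sort first; exams with no scheduled appointment sort last.
    sort_key: datetime = field(init=False)
    next_appointment: Optional[datetime] = field(compare=False, default=None)
    accession: str = field(compare=False, default="")

    def __post_init__(self):
        self.sort_key = self.next_appointment or datetime.max

def assign_next(worklist: list) -> Exam:
    """Pop the unreported exam whose patient has the soonest appointment."""
    heapq.heapify(worklist)
    return heapq.heappop(worklist)
```

In a full system, the popped examination would be routed to an available radiologist and an email notification sent, as the abstract describes; both steps are outside this sketch.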
Affiliation(s)
- Wyanne Law, Admir Terzic, Joshua Chaim, Joseph P Erinjeri, Hedvig Hricak: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Hebert Alberto Vargas, Anton S Becker: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA; Department of Radiology, Oncologic Imaging Division, NYU Langone, New York, NY, USA
7. Maghami M, Sattari SA, Tahmasbi M, Panahi P, Mozafari J, Shirbandi K. Diagnostic test accuracy of machine learning algorithms for the detection of intracranial hemorrhage: a systematic review and meta-analysis study. Biomed Eng Online 2023; 22:114. PMID: 38049809. PMCID: PMC10694901. DOI: 10.1186/s12938-023-01172-1.
Abstract
BACKGROUND This systematic review and meta-analysis was conducted to objectively evaluate the evidence for machine learning (ML) in the diagnosis of intracranial hemorrhage (ICH) on computed tomography (CT) scans. METHODS Until May 2023, systematic searches were conducted in ISI Web of Science, PubMed, Scopus, Cochrane Library, IEEE Xplore Digital Library, CINAHL, Science Direct, PROSPERO, and EMBASE for studies that evaluated the diagnostic precision of ML model-assisted ICH detection. Studies of patients with and without ICH as the target condition who underwent CT scanning were eligible, with the ML algorithms assessed against radiologists' reports as the reference standard. For meta-analysis, pooled sensitivities, specificities, and a summary receiver operating characteristic (SROC) curve were used. RESULTS After screening of titles, abstracts, and full papers, twenty-six retrospective, three prospective, and two retrospective/prospective studies were included. For the retrospective studies, the pooled sensitivity was 0.917 (95% CI 0.88-0.943, I2 = 99%) and the pooled specificity was 0.945 (95% CI 0.918-0.964, I2 = 100%). The pooled diagnostic odds ratio (DOR) was 219.47 (95% CI 104.78-459.66, I2 = 100%). Differences between network architecture models were significant for specificity (p = 0.0289) but not for sensitivity (p = 0.6417) or DOR (p = 0.2187). The ResNet algorithm had a higher pooled specificity than other algorithms, at 0.935 (95% CI 0.854-0.973, I2 = 93%). CONCLUSION This meta-analysis of the diagnostic test accuracy (DTA) of ML algorithms for detecting ICH on non-contrast CT scans shows that ML has acceptable performance in diagnosing ICH. ResNet remains promising for ICH detection, and prediction improved with training in an Architecture Learning Network (ALN).
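For readers unfamiliar with the diagnostic odds ratio, it is the odds of a positive test in diseased patients divided by the odds of a positive test in non-diseased patients, expressible directly in terms of sensitivity and specificity. A quick illustration of the arithmetic only; note that the pooled DOR of 219.47 above was estimated from study-level data in the meta-analysis, so it need not equal the value obtained by plugging the pooled sensitivity and specificity into this formula:

```python
def diagnostic_odds_ratio(sensitivity: float, specificity: float) -> float:
    """DOR = (sens / (1 - sens)) / ((1 - spec) / spec):
    odds of a positive test with disease vs. odds of a positive test without."""
    return (sensitivity / (1 - sensitivity)) / ((1 - specificity) / specificity)

# Plugging in the pooled point estimates from the retrospective studies
# gives roughly 190, illustrating the scale of the reported pooled DOR.
dor = diagnostic_odds_ratio(0.917, 0.945)
```

A DOR of 1 means the test is uninformative; values in the hundreds, as here, indicate strong discrimination between ICH-positive and ICH-negative scans.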
Affiliation(s)
- Masoud Maghami: Medical Doctor (MD), School of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
- Shahab Aldin Sattari: Department of Neurosurgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Marziyeh Tahmasbi: Department of Medical Imaging and Radiation Sciences, School of Allied Medical Sciences, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
- Pegah Panahi: Medical Doctor (MD), School of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
- Javad Mozafari: Department of Emergency Medicine, School of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran; Department of Radiology, Resident (MD), EUREGIO-KLINIK Albert-Schweitzer-Straße GmbH, Nordhorn, Germany
8. Becker AS, Das JP, Woo S, Perez-Johnston R, Vargas HA. Improving Radiology Oncologic Imaging Trainee Case Diversity through Automatic Examination Assignment: Retrospective Study from a Tertiary Cancer Center. Radiol Imaging Cancer 2023; 5:e230035. PMID: 37889137. DOI: 10.1148/rycan.230035.
Abstract
In a retrospective single-center study, the authors assessed the efficacy of an automated imaging examination assignment system for enhancing the diversity of subspecialty examinations reported by oncologic imaging fellows. The study aimed to mitigate traditional biases of manual case selection and ensure equitable exposure to various case types. Methods included evaluating the proportion of "uncommon" to "common" cases reported by fellows before and after system implementation and measuring the weekly Shannon Diversity Index to determine case distribution equity. The proportion of reported uncommon cases more than doubled from 8.6% to 17.7% in total, at the cost of a concurrent 9.0% decrease in common cases from 91.3% to 82.3%. The weekly Shannon Diversity Index per fellow increased significantly from 0.66 (95% CI: 0.65, 0.67) to 0.74 (95% CI: 0.72, 0.75; P < .001), confirming a more balanced case distribution among fellows after introduction of the automatic assignment. © RSNA, 2023 Keywords: Computer Applications, Education, Fellows, Informatics, MRI, Oncologic Imaging.
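The Shannon Diversity Index used above is the standard H = -Σ p_i ln p_i over the proportions of case types a fellow reports; higher values mean a more even mix of case types. A small sketch (the natural-log form and the example case types are assumptions; the article does not state the logarithm base or any normalization):

```python
import math
from collections import Counter

def shannon_diversity(case_types: list) -> float:
    """Shannon Diversity Index H = -sum(p_i * ln p_i) over case-type proportions."""
    counts = Counter(case_types)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# A fellow reading mostly one common exam type scores low...
skewed = shannon_diversity(["CT chest"] * 9 + ["MRI rectum"])
# ...while an even mix across four types scores higher (H = ln 4).
balanced = shannon_diversity(["CT chest", "MRI rectum", "PET/CT", "MRI liver"] * 5)
```

This makes the reported rise from 0.66 to 0.74 per fellow interpretable as a measurably more even spread of uncommon and common cases.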
Affiliation(s)
- Anton S Becker, Jeeban P Das, Sungmin Woo, Rocio Perez-Johnston, Hebert Alberto Vargas: From the Department of Radiology, Body Imaging Service, Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065 (A.S.B., J.P.D., S.W., R.P.J., H.A.V.); and Department of Radiology, NYU Langone Medical Center, New York, NY (A.S.B., S.W., H.A.V.)
9. Rothenberg SA, Savage CH, Abou Elkassem A, Singh S, Abozeed M, Hamki O, Junck K, Tridandapani S, Li M, Li Y, Smith AD. Prospective Evaluation of AI Triage of Pulmonary Emboli on CT Pulmonary Angiograms. Radiology 2023; 309:e230702. PMID: 37787676. DOI: 10.1148/radiol.230702.
Abstract
Background Artificial intelligence (AI) algorithms have shown high accuracy for detection of pulmonary embolism (PE) on CT pulmonary angiography (CTPA) studies in academic settings. Purpose To determine whether use of an AI triage system to detect PE on CTPA studies improves radiologist performance or examination and report turnaround times in a clinical setting. Materials and Methods This prospective single-center study included adult participants who underwent CTPA for suspected PE in a clinical practice setting. Consecutive CTPA studies were evaluated in two phases, first by radiologists alone (n = 31) (May 2021 to June 2021) and then by radiologists aided by a commercially available AI triage system (n = 37) (September 2021 to December 2021). Sixty-two percent of radiologists (26 of 42) interpreted studies in both phases. The reference standard was determined by an independent re-review of studies by thoracic radiologists and was used to calculate performance metrics. Diagnostic accuracy and turnaround times were compared using Pearson χ2 and Wilcoxon rank sum tests. Results Phases 1 and 2 included 503 studies (participant mean age, 54.0 years ± 17.8 [SD]; 275 female, 228 male) and 1023 studies (participant mean age, 55.1 years ± 17.5; 583 female, 440 male), respectively. In phases 1 and 2, 14.5% (73 of 503) and 15.9% (163 of 1023) of CTPA studies were positive for PE (P = .47). Mean wait time for positive PE studies decreased from 21.5 minutes without AI to 11.3 minutes with AI (P < .001). The accuracy and miss rate for radiologist detection of any PE on CTPA studies were 97.6% and 12.3%, respectively, without AI and 98.6% and 6.1% with AI, differences that were not significant (P = .15 and P = .11, respectively). Conclusion The use of an AI triage system to detect any PE on CTPA studies improved wait times but did not improve radiologist accuracy, miss rate, or examination and report turnaround times.
© RSNA, 2023 Supplemental material is available for this article. See also the editorial by Murphy and Tee in this issue.
Affiliation(s)
- Steven A Rothenberg, Cody H Savage, Asser Abou Elkassem, Satinder Singh, Mostafa Abozeed, Omar Hamki, Kevin Junck, Srini Tridandapani, Mei Li, Yufeng Li, Andrew D Smith: From the Department of Radiology, University of Alabama at Birmingham, 619 S 19th St, Birmingham, AL 35233
10. Batra K, Xi Y, Bhagwat S, Espino A, Peshock RM. Radiologist Worklist Reprioritization Using Artificial Intelligence: Impact on Report Turnaround Times for CTPA Examinations Positive for Acute Pulmonary Embolism. AJR Am J Roentgenol 2023; 221:324-333. PMID: 37095668. DOI: 10.2214/ajr.22.28949.
Abstract
BACKGROUND. In patients with acute pulmonary embolism (PE), timely intervention (e.g., initiation of anticoagulation) is critical for optimizing clinical outcomes. OBJECTIVE. The purpose of this study was to evaluate the effect of artificial intelligence (AI)-based radiologist worklist reprioritization on report turnaround times for pulmonary CTA (CTPA) examinations positive for acute PE. METHODS. This retrospective single-center study included patients who underwent CTPA before (October 1, 2018-March 31, 2019 [pre-AI period]) and after (October 1, 2019-March 31, 2020 [post-AI period]) implementation of an AI tool that reprioritized CTPA examinations to the top of radiologists' reading worklists if acute PE was detected. EMR and dictation system timestamps were used to determine the wait time (time from examination completion to report initiation), read time (time from report initiation to report availability), and report turnaround time (sum of wait and read times) for the examinations. Times for reports positive for PE, with final radiology reports as reference, were compared between periods. RESULTS. The study included 2501 examinations of 2197 patients (1307 women, 890 men; mean age, 57.4 ± 17.0 [SD] years), including 1335 examinations from the pre-AI period and 1166 from the post-AI period. The frequency of acute PE, based on radiology reports, was 15.1% (201/1335) during the pre-AI period and 12.3% (144/1166) during the post-AI period. During the post-AI period, the AI tool reprioritized 12.7% (148/1166) of examinations. 
For PE-positive examinations, the post-AI period, compared with the pre-AI period, had significantly shorter mean report turnaround time (47.6 vs 59.9 minutes; mean difference, 12.3 minutes [95% CI, 0.6-26.0 minutes]) and mean wait time (21.4 vs 33.4 minutes; mean difference, 12.0 minutes [95% CI, 0.9-25.3 minutes]) but no significant difference in mean read time (26.3 vs 26.5 minutes; mean difference, 0.2 minutes [95% CI, -2.8 to 3.2 minutes]). During regular operational hours, wait time was significantly shorter in the post-AI than in the pre-AI period for routine-priority examinations (15.3 vs 43.7 minutes; mean difference, 28.4 minutes [95% CI, 2.2-64.7 minutes]) but not for stat- or urgent-priority examinations. CONCLUSION. AI-driven worklist reprioritization yielded reductions in report turnaround time and wait time for PE-positive CTPA examinations. CLINICAL IMPACT. By assisting radiologists in providing rapid diagnoses, the AI tool has potential for enabling earlier interventions for acute PE.
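The three intervals in this study are defined purely from timestamps: wait time from examination completion to report initiation, read time from initiation to report availability, and turnaround time as their sum. A minimal sketch of that bookkeeping (the function and field names are illustrative, not the authors' EMR pipeline):

```python
from datetime import datetime

def report_times(exam_completed: datetime,
                 report_initiated: datetime,
                 report_available: datetime) -> dict:
    """Wait = completion to initiation; read = initiation to availability;
    turnaround = wait + read. All values in minutes."""
    wait = (report_initiated - exam_completed).total_seconds() / 60
    read = (report_available - report_initiated).total_seconds() / 60
    return {"wait_min": wait, "read_min": read, "turnaround_min": wait + read}

# Hypothetical post-AI PE-positive examination: 21 min wait, 26 min read.
t = report_times(
    datetime(2020, 3, 1, 10, 0),
    datetime(2020, 3, 1, 10, 21),
    datetime(2020, 3, 1, 10, 47),
)
```

Under these definitions, the reported reduction in turnaround time is driven almost entirely by the wait component, consistent with a reprioritization tool that moves examinations up the worklist without changing how long radiologists spend reading them.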
Affiliation(s)
- Kiran Batra, Yin Xi, Siddharth Bhagwat, Adriana Espino: Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390
- Ronald M Peshock: Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390; Department of Internal Medicine, University of Texas Southwestern Medical Center, Dallas, TX
11
|
Kotovich D, Twig G, Itsekson-Hayosh Z, Klug M, Simon AB, Yaniv G, Konen E, Tau N, Raskin D, Chang PJ, Orion D. The impact on clinical outcomes after 1 year of implementation of an artificial intelligence solution for the detection of intracranial hemorrhage. Int J Emerg Med 2023; 16:50. [PMID: 37568103 PMCID: PMC10422703 DOI: 10.1186/s12245-023-00523-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Accepted: 07/17/2023] [Indexed: 08/13/2023] Open
Abstract
BACKGROUND To assess the effect of a commercial artificial intelligence (AI) solution implementation in the emergency department on clinical outcomes in a single level 1 trauma center. METHODS A retrospective cohort study for two time periods-pre-AI (1.1.2017-1.1.2018) and post-AI (1.1.2019-1.1.2020)-in a level 1 trauma center was performed. The ICH algorithm was applied to 587 consecutive patients with a confirmed diagnosis of ICH on head CT upon admission to the emergency department. Study variables included demographics, patient outcomes, and imaging data. Participants admitted to the emergency department during the same time periods for other acute diagnoses (ischemic stroke (IS) and myocardial infarction (MI)) served as control groups. Primary outcomes were 30- and 120-day all-cause mortality. The secondary outcome was morbidity based on the Modified Rankin Scale for Neurologic Disability (mRS) at discharge. RESULTS Five hundred eighty-seven participants with ICH were eligible for the analyzed periods (pre-AI: 289, age 71 ± 1, 169 men; post-AI: 298, age 69 ± 1, 187 men). Demographics, comorbidities, Emergency Severity Score, type of ICH, and length of stay were not significantly different between the two time periods. The 30- and 120-day all-cause mortality rates were significantly reduced in the post-AI group compared to the pre-AI group (27.7% vs 17.5%; p = 0.004 and 31.8% vs 21.7%; p = 0.017, respectively). The mRS at discharge was also significantly reduced after AI implementation (3.2 vs 2.8; p = 0.044). CONCLUSION The introduction of AI computer-aided triage and prioritization software in an emergency care setting was associated with a significant reduction in 30- and 120-day all-cause mortality and morbidity for patients diagnosed with intracranial hemorrhage (ICH), as well as a significant reduction in the Modified Rankin Scale (mRS) score at discharge.
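The 30-day mortality comparison above can be checked with a standard chi-square test on a 2x2 table. The sketch below reconstructs approximate counts from the reported percentages (27.7% of 289 pre-AI and 17.5% of 298 post-AI, i.e., roughly 80 and 52 deaths; these counts are a reconstruction, not taken from the paper) and applies a Yates-corrected chi-square using only the standard library:

```python
import math

def yates_chi2_p(a, b, c, d):
    """Yates-corrected chi-square test for a 2x2 table [[a, b], [c, d]];
    returns (chi2, p) with df = 1. For df = 1, the chi-square survival
    function reduces to erfc(sqrt(x / 2))."""
    n = a + b + c + d
    num = n * (abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    chi2 = num / den
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Reconstructed 30-day counts (dead, alive) for pre-AI vs post-AI cohorts;
# the counts are approximations derived from the reported percentages.
chi2, p = yates_chi2_p(80, 209, 52, 246)
print(round(chi2, 2), round(p, 3))  # p lands close to the reported 0.004
```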
Affiliation(s)
- Dmitry Kotovich
- The Institute for Research in Military Medicine, The Faculty of Medicine, The Hebrew University of Jerusalem, Tel Aviv, Israel
- The IDF Medical Corps, 9112102, Tel Aviv, Israel
- Gilad Twig
- The Institute for Research in Military Medicine, The Faculty of Medicine, The Hebrew University of Jerusalem, Tel Aviv, Israel
- The IDF Medical Corps, 9112102, Tel Aviv, Israel
- Zeev Itsekson-Hayosh
- Center of Stroke and Neurovascular Disorders, Sheba Medical Center, Tel HaShomer, Ramat Gan, affiliated to Sackler Faculty of Medicine, Tel Aviv University, 52621, Tel Aviv, Israel
- Maximiliano Klug
- Department of Diagnostic Imaging, Sheba Medical Center, Tel HaShomer, Ramat Gan, Israel, affiliated to Sackler Faculty of Medicine, Tel Aviv University, 52621, Tel Aviv, Israel
- Asaf Ben Simon
- Sackler School of Medicine, Faculty of Medicine, Tel Aviv University, 69978, Tel Aviv, Israel
- Gal Yaniv
- Department of Diagnostic Imaging, Sheba Medical Center, Tel HaShomer, Ramat Gan, Israel, affiliated to Sackler Faculty of Medicine, Tel Aviv University, 52621, Tel Aviv, Israel
- Eli Konen
- Department of Diagnostic Imaging, Sheba Medical Center, Tel HaShomer, Ramat Gan, Israel, affiliated to Sackler Faculty of Medicine, Tel Aviv University, 52621, Tel Aviv, Israel
- Noam Tau
- Department of Diagnostic Imaging, Sheba Medical Center, Tel HaShomer, Ramat Gan, Israel, affiliated to Sackler Faculty of Medicine, Tel Aviv University, 52621, Tel Aviv, Israel
- Daniel Raskin
- Department of Diagnostic Imaging, Sheba Medical Center, Tel HaShomer, Ramat Gan, Israel, affiliated to Sackler Faculty of Medicine, Tel Aviv University, 52621, Tel Aviv, Israel
- Paul J Chang
- Department of Radiology, University of Chicago Medical Center, Chicago, Illinois, 60637, USA
- David Orion
- Center of Stroke and Neurovascular Disorders, Sheba Medical Center, Tel HaShomer, Ramat Gan, affiliated to Sackler Faculty of Medicine, Tel Aviv University, 52621, Tel Aviv, Israel

12
12
|
Kalidindi S, Gandhi S. Workforce Crisis in Radiology in the UK and the Strategies to Deal With It: Is Artificial Intelligence the Saviour? Cureus 2023; 15:e43866. [PMID: 37608900 PMCID: PMC10441819 DOI: 10.7759/cureus.43866] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/21/2023] [Indexed: 08/24/2023] Open
Abstract
Radiology has seen rapid growth over the last few decades. Technological advances in equipment and computing have resulted in an explosion of new modalities and applications. However, this rapid expansion of capability and capacity has not been matched by a parallel growth in the number of radiologists. This has resulted in global shortages in the workforce, with the UK being one of the most affected countries. The UK National Health Service has been employing several conventional strategies to deal with the workforce situation, with mixed success. The emergence of artificial intelligence (AI) tools that can increase efficiency and efficacy at various stages in radiology has made it possible for radiology departments to adopt new strategies and workflows that can offset workforce shortages to some extent. This review article discusses the current and projected radiology workforce situation in the UK and the various strategies to deal with it, including applications of AI in radiology. We highlight the benefits of AI tools in improving efficiency and patient safety. AI has a role along the patient's entire journey: from the clinician requesting the appropriate radiological investigation, through safe image acquisition, alerting radiologists and clinicians to critical and life-threatening findings, and cancer screening follow-up, to generating meaningful radiology reports more efficiently. It has great potential to ease the workforce crisis and merits rapid adoption by radiology departments.
13
Angkurawaranon S, Sanorsieng N, Unsrisong K, Inkeaw P, Sripan P, Khumrin P, Angkurawaranon C, Vaniyapong T, Chitapanarux I. A comparison of performance between a deep learning model with residents for localization and classification of intracranial hemorrhage. Sci Rep 2023; 13:9975. [PMID: 37340038 DOI: 10.1038/s41598-023-37114-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Accepted: 06/15/2023] [Indexed: 06/22/2023] Open
Abstract
Intracranial hemorrhage (ICH) from traumatic brain injury (TBI) requires prompt radiological investigation and recognition by physicians. Computed tomography (CT) scanning is the investigation of choice for TBI and has become increasingly utilized amid the shortage of trained radiology personnel. Deep learning models are anticipated to be a promising solution for the generation of timely and accurate radiology reports. Our study examines the diagnostic performance of a deep learning model and compares it with the performance of radiology, emergency medicine, and neurosurgery residents in the detection, localization, and classification of traumatic ICH. Our results demonstrate that the deep learning model achieves a high level of accuracy (0.89) and outperforms the residents with regard to sensitivity (0.82) but still lags behind in specificity (0.90). Overall, our study suggests that the deep learning model may serve as a potential screening tool to aid the interpretation of head CT scans in traumatic brain injury patients.
Affiliation(s)
- Salita Angkurawaranon
- Department of Radiology, Maharaj Nakorn Chiang Mai Hospital, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand
- Global Health and Chronic Conditions Research Group, Chiang Mai, 50200, Thailand
- Nonn Sanorsieng
- Department of Radiology, Maharaj Nakorn Chiang Mai Hospital, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand
- Kittisak Unsrisong
- Department of Radiology, Maharaj Nakorn Chiang Mai Hospital, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand
- Papangkorn Inkeaw
- Department of Computer Science, Faculty of Science, Chiang Mai University, Chiang Mai, 50200, Thailand
- Patumrat Sripan
- Research Institute for Health Sciences, Chiang Mai University, Chiang Mai, 50200, Thailand
- Piyapong Khumrin
- Department of Family Medicine, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand
- Chaisiri Angkurawaranon
- Global Health and Chronic Conditions Research Group, Chiang Mai, 50200, Thailand
- Department of Family Medicine, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand
- Tanat Vaniyapong
- Neurosurgery Division, Department of Surgery, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand
- Imjai Chitapanarux
- Department of Radiology, Maharaj Nakorn Chiang Mai Hospital, Faculty of Medicine, Chiang Mai University, Chiang Mai, 50200, Thailand

14
14
|
Shin HJ, Han K, Ryu L, Kim EK. The impact of artificial intelligence on the reading times of radiologists for chest radiographs. NPJ Digit Med 2023; 6:82. [PMID: 37120423 PMCID: PMC10148851 DOI: 10.1038/s41746-023-00829-4] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Accepted: 04/19/2023] [Indexed: 05/01/2023] Open
Abstract
Whether the utilization of artificial intelligence (AI) during the interpretation of chest radiographs (CXRs) affects radiologists' workload is of particular interest. This prospective observational study therefore examined how AI affected radiologists' reading times in the daily interpretation of CXRs. Radiologists who agreed to have the reading times of their CXR interpretations collected from September to December 2021 were recruited. Reading time was defined as the duration in seconds from a radiologist opening a CXR to that same radiologist transcribing the report. As commercial AI software was integrated for all CXRs, the radiologists could refer to AI results for 2 months (AI-aided period). During the other 2 months, the radiologists were automatically blinded to the AI results (AI-unaided period). A total of 11 radiologists participated, and 18,680 CXRs were included. Total reading times were significantly shorter with AI use than without it (13.3 s vs. 14.8 s, p < 0.001). When AI detected no abnormality, reading times were shorter with AI use (mean 10.8 s vs. 13.1 s, p < 0.001). However, when AI detected any abnormality, reading times did not differ according to AI use (mean 18.6 s vs. 18.4 s, p = 0.452). Reading times increased as abnormality scores increased, and the increase was steeper with AI use (coefficient 0.09 vs. 0.06, p < 0.001). Therefore, the reading times of CXRs among radiologists were influenced by the availability of AI: overall reading times shortened when radiologists referred to AI, but abnormalities detected by AI could lengthen reading times.
Affiliation(s)
- Hyun Joo Shin
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, South Korea
- Center for Digital Health, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, South Korea
- Kyunghwa Han
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722, South Korea
- Leeha Ryu
- Department of Biostatistics and Computing, Yonsei University Graduate School, 50-1 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722, South Korea
- Eun-Kyung Kim
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, South Korea
- Center for Digital Health, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, South Korea

15
15
|
Topff L, Ranschaert ER, Bartels-Rutten A, Negoita A, Menezes R, Beets-Tan RGH, Visser JJ. Artificial Intelligence Tool for Detection and Worklist Prioritization Reduces Time to Diagnosis of Incidental Pulmonary Embolism at CT. Radiol Cardiothorac Imaging 2023; 5:e220163. [PMID: 37124638 PMCID: PMC10141443 DOI: 10.1148/ryct.220163] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 01/13/2023] [Accepted: 02/20/2023] [Indexed: 05/02/2023]
Abstract
Purpose To evaluate the diagnostic efficacy of artificial intelligence (AI) software in detecting incidental pulmonary embolism (IPE) at CT and shorten the time to diagnosis with use of radiologist reading worklist prioritization. Materials and Methods In this study with historical controls and prospective evaluation, regulatory-cleared AI software was evaluated to prioritize IPE on routine chest CT scans with intravenous contrast agent in adult oncology patients. Diagnostic accuracy metrics were calculated, and temporal end points, including detection and notification times (DNTs), were assessed during three time periods (April 2019 to September 2020): routine workflow without AI, human triage without AI, and worklist prioritization with AI. Results In total, 11 736 CT scans in 6447 oncology patients (mean age, 63 years ± 12 [SD]; 3367 men) were included. Prevalence of IPE was 1.3% (51 of 3837 scans), 1.4% (54 of 3920 scans), and 1.0% (38 of 3979 scans) for the respective time periods. The AI software detected 131 true-positive, 12 false-negative, 31 false-positive, and 11 559 true-negative results, achieving 91.6% sensitivity, 99.7% specificity, 99.9% negative predictive value, and 80.9% positive predictive value. During prospective evaluation, AI-based worklist prioritization reduced the median DNT for IPE-positive examinations to 87 minutes (vs routine workflow of 7714 minutes and human triage of 4973 minutes). Radiologists' missed rate of IPE was significantly reduced from 44.8% (47 of 105 scans) without AI to 2.6% (one of 38 scans) when assisted by the AI tool (P < .001). 
Conclusion AI-assisted workflow prioritization of IPE on routine CT scans in oncology patients showed high diagnostic accuracy and significantly shortened the time to diagnosis in a setting with a backlog of examinations.
Keywords: CT, Computer Applications, Detection, Diagnosis, Embolism, Thorax, Thrombosis
Supplemental material is available for this article.
© RSNA, 2023
See also the commentary by Elicker in this issue.
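The diagnostic accuracy figures above follow directly from the reported confusion-matrix counts (131 true positives, 12 false negatives, 31 false positives, 11 559 true negatives). A short sketch reproducing them (the function name is illustrative):

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard confusion-matrix metrics for a detection tool."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Counts reported for the incidental-PE AI software above
m = screening_metrics(tp=131, fp=31, fn=12, tn=11559)
print({k: round(v * 100, 1) for k, v in m.items()})
# -> {'sensitivity': 91.6, 'specificity': 99.7, 'ppv': 80.9, 'npv': 99.9}
```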
16
Pierre K, Haneberg AG, Kwak S, Peters KR, Hochhegger B, Sananmuang T, Tunlayadechanont P, Tighe PJ, Mancuso A, Forghani R. Applications of Artificial Intelligence in the Radiology Roundtrip: Process Streamlining, Workflow Optimization, and Beyond. Semin Roentgenol 2023; 58:158-169. [PMID: 37087136 DOI: 10.1053/j.ro.2023.02.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2023] [Accepted: 02/14/2023] [Indexed: 04/24/2023]
Abstract
There are many impactful applications of artificial intelligence (AI) in the electronic radiology roundtrip and the patient's journey through the healthcare system that go beyond diagnostic applications. These tools have the potential to improve quality and safety, optimize workflow, increase efficiency, and increase patient satisfaction. In this article, we review the role of AI in process improvement and workflow enhancement, spanning applications from the time of order entry and scan acquisition, through applications supporting the image interpretation task, to applications supporting tasks after image interpretation, such as result communication. These non-diagnostic workflow and process optimization tools are an important part of the arsenal of potential AI tools that can streamline day-to-day clinical practice and patient care.
Affiliation(s)
- Kevin Pierre
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Adam G Haneberg
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Division of Medical Physics, Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Sean Kwak
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL
- Keith R Peters
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Bruno Hochhegger
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Thiparom Sananmuang
- Department of Diagnostic and Therapeutic Radiology and Research, Faculty of Medicine Ramathibodi Hospital, Ratchathewi, Bangkok, Thailand
- Padcha Tunlayadechanont
- Department of Diagnostic and Therapeutic Radiology and Research, Faculty of Medicine Ramathibodi Hospital, Ratchathewi, Bangkok, Thailand
- Patrick J Tighe
- Departments of Anesthesiology & Orthopaedic Surgery, University of Florida College of Medicine, Gainesville, FL
- Anthony Mancuso
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Reza Forghani
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL; Division of Medical Physics, Department of Radiology, University of Florida College of Medicine, Gainesville, FL

17
Manuel Román-Belmonte J, De la Corte-Rodríguez H, Adriana Rodríguez-Damiani B, Carlos Rodríguez-Merchán E. Artificial Intelligence in Musculoskeletal Conditions. ARTIF INTELL 2023. [DOI: 10.5772/intechopen.110696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/05/2023]
Abstract
Artificial intelligence (AI) refers to computer capabilities that resemble human intelligence. AI implies the ability to learn and perform tasks that have not been specifically programmed. Moreover, it is an iterative process involving the ability of computerized systems to capture information, transform it into knowledge, and process it to produce adaptive changes in the environment. A large labeled database is needed to train the AI system and generate a robust algorithm. Otherwise, the algorithm cannot be applied in a generalized way. AI can facilitate the interpretation and acquisition of radiological images. In addition, it can facilitate the detection of trauma injuries and assist in orthopedic and rehabilitative processes. The applications of AI in musculoskeletal conditions are promising and are likely to have a significant impact on the future management of these patients.
18
Chepelev LL, Kwan D, Kahn CE, Filice RW, Wang KC. Ontologies in the New Computational Age of Radiology: RadLex for Semantics and Interoperability in Imaging Workflows. Radiographics 2023; 43:e220098. [PMID: 36757882 DOI: 10.1148/rg.220098] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/10/2023]
Abstract
From basic research to the bedside, precise terminology is key to advancing medicine and ensuring optimal and appropriate patient care. However, the wide spectrum of diseases and their manifestations, superimposed on medical team-specific and discipline-specific communication patterns, often impairs shared understanding and the shared use of common medical terminology. Common terms are currently used in medicine to ensure interoperability and facilitate integration of biomedical information for clinical practice and emerging scientific and educational applications alike, from database integration to supporting basic clinical operations such as billing. Such common terminologies can be provided in ontologies, which are formalized representations of knowledge in a particular domain. Ontologies unambiguously specify common concepts and describe the relationships between those concepts by using a form that is mathematically precise and accessible to humans and machines alike. RadLex® is a key RSNA initiative that provides a shared domain model, or ontology, of radiology to facilitate integration of information in radiology education, clinical care, and research. As the contributions of the computational components of common radiologic workflows continue to increase with the ongoing development of big data, artificial intelligence, and novel image analysis and visualization tools, the use of common terminologies is becoming increasingly important for supporting seamless computational resource integration across medicine. This article introduces ontologies, outlines the fundamental semantic web technologies used to create and apply RadLex, and presents examples of RadLex applications in everyday radiology and research. It concludes with a discussion of emerging applications of RadLex, including artificial intelligence applications.
© RSNA, 2023
Quiz questions for this article are available in the supplemental material.
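An ontology's core machinery, concepts linked by typed relationships such as is-a, can be illustrated with a toy subsumption check. The term names below are illustrative stand-ins, not actual RadLex identifiers:

```python
# Toy is-a hierarchy in the spirit of an anatomical ontology.
# These term names are illustrative, not actual RadLex identifiers.
IS_A = {
    "left lung": "lung",
    "lung": "thoracic organ",
    "thoracic organ": "organ",
    "liver": "abdominal organ",
    "abdominal organ": "organ",
}

def subsumed_by(term, ancestor):
    """True if `ancestor` is reachable from `term` via is-a links."""
    while term is not None:
        if term == ancestor:
            return True
        term = IS_A.get(term)  # walk one level up the hierarchy
    return False

print(subsumed_by("left lung", "organ"))       # -> True
print(subsumed_by("liver", "thoracic organ"))  # -> False
```

This kind of unambiguous, machine-traversable structure is what lets software reason over terms ("is a left lung finding a lung finding?") the same way a reader would.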
Affiliation(s)
- Leonid L Chepelev, David Kwan, Charles E Kahn, Ross W Filice, Kenneth C Wang
- From the Joint Department of Medical Imaging, University Health Network, University of Toronto, Toronto General Hospital, 585 University Ave, 1-PMB 286, Toronto, ON, Canada M5G 2N2 (L.L.C.); Insygnia Consulting, Toronto, ON, Canada (D.K.); Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA (C.E.K.); Department of Radiology, MedStar Georgetown University Hospital, Washington, DC (R.W.F.); and Imaging Service, Baltimore VA Medical Center, Baltimore, MD, and Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD (K.C.W.)

19
Yoon BC, Pomerantz SR, Mercaldo ND, Goyal S, L’Italien EM, Lev MH, Buch KA, Buchbinder BR, Chen JW, Conklin J, Gupta R, Hunter GJ, Kamalian SC, Kelly HR, Rapalino O, Rincon SP, Romero JM, He J, Schaefer PW, Do S, González RG. Incorporating algorithmic uncertainty into a clinical machine deep learning algorithm for urgent head CTs. PLoS One 2023; 18:e0281900. [PMID: 36913348 PMCID: PMC10010506 DOI: 10.1371/journal.pone.0281900] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Accepted: 02/03/2023] [Indexed: 03/14/2023] Open
Abstract
Machine learning (ML) algorithms to detect critical findings on head CTs may expedite patient management. Most ML algorithms for diagnostic imaging analysis utilize dichotomous classifications to determine whether a specific abnormality is present. However, imaging findings may be indeterminate, and algorithmic inferences may have substantial uncertainty. We incorporated awareness of uncertainty into an ML algorithm that detects intracranial hemorrhage or other urgent intracranial abnormalities and evaluated 1000 prospectively identified, consecutive noncontrast head CTs assigned to Emergency Department Neuroradiology for interpretation. The algorithm classified the scans into high (IC+) and low (IC-) probability of intracranial hemorrhage or other urgent abnormalities; all other cases were designated No Prediction (NP) by the algorithm. The positive predictive value for IC+ cases (N = 103) was 0.91 (CI: 0.84-0.96), and the negative predictive value for IC- cases (N = 729) was 0.94 (0.91-0.96). Admission, neurosurgical intervention, and 30-day mortality rates for IC+ were 75% (63-84), 35% (24-47), and 10% (4-20), compared to 43% (40-47), 4% (3-6), and 3% (2-5) for IC-. There were 168 NP cases, of which 32% had intracranial hemorrhage or other urgent abnormalities, 31% had artifacts and postoperative changes, and 29% had no abnormalities. An ML algorithm incorporating uncertainty classified most head CTs into clinically relevant groups with high predictive values and may help accelerate the management of patients with intracranial hemorrhage or other urgent intracranial abnormalities.
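The IC+/IC-/No Prediction design is a selective-classification (abstention) scheme: the model commits to a label only when its probability is far from the decision boundary. A minimal sketch of such a triage rule, with illustrative thresholds that are not taken from the paper:

```python
def triage(p_urgent, low=0.2, high=0.8):
    """Three-way triage for an urgent-finding classifier.
    Probabilities above `high` map to IC+ (high probability of an urgent
    abnormality), below `low` to IC-, and anything in between to No
    Prediction (NP), which is routed to a radiologist without an
    algorithmic label. Thresholds here are illustrative only."""
    if p_urgent >= high:
        return "IC+"
    if p_urgent <= low:
        return "IC-"
    return "NP"

# Hypothetical model probabilities for five scans on a worklist
worklist = [0.97, 0.55, 0.03, 0.85, 0.40]
print([triage(p) for p in worklist])  # -> ['IC+', 'NP', 'IC-', 'IC+', 'NP']
```

Widening the NP band trades coverage for reliability: fewer scans get an algorithmic label, but the labels that are issued carry higher predictive value, which is the trade-off the abstract reports.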
Collapse
Affiliation(s)
- Byung C. Yoon
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
| | - Stuart R. Pomerantz
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Mass General Brigham Data Science Office, Boston, MA, United States of America
| | - Nathaniel D. Mercaldo
- Massachusetts General Hospital Institute for Technology Assessment, Boston, MA, United States of America
| | - Swati Goyal
- Mass General Brigham Data Science Office, Boston, MA, United States of America
- Department of Radiology/ Information Systems, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
| | - Eric M. L’Italien
- Mass General Brigham Data Science Office, Boston, MA, United States of America
- Department of Radiology/ Information Systems, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
| | - Michael H. Lev
- Emergency Radiology & Neuroradiology Divisions, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
| | - Karen A. Buch
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
| | - Bradley R. Buchbinder
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
| | - John W. Chen
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Massachusetts General Hospital Center for Systems Biology (CSB), Boston, MA, United States of America
| | - John Conklin
- Emergency Radiology & Neuroradiology Divisions, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Rajiv Gupta
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Massachusetts General Hospital Consortia for Integration of Medicine and Innovative Technologies (CIMIT), Boston, MA, United States of America
- Massachusetts General Hospital CT Innovation and Advanced X-ray Imaging Science (AXIS) Center, Boston, MA, United States of America
- George J. Hunter
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Shahmir C. Kamalian
- Emergency Radiology & Neuroradiology Divisions, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Hillary R. Kelly
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Department of Radiology, Massachusetts Eye and Ear Institute, Harvard Medical School, Boston, MA, United States of America
- Otto Rapalino
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Sandra P. Rincon
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Javier M. Romero
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Julian He
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Pamela W. Schaefer
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Mass General Brigham Enterprise Radiology, Boston, MA, United States of America
- Synho Do
- Mass General Brigham Data Science Office, Boston, MA, United States of America
- Ramon Gilberto González
- Neuroradiology Division, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Mass General Brigham Data Science Office, Boston, MA, United States of America
- Massachusetts General Hospital Athinoula A. Martinos Center for Biomedical Imaging, Boston, MA, United States of America
20
Smorchkova AK, Khoruzhaya AN, Kremneva EI, Petryaikin AV. [Machine learning technologies in CT-based diagnostics and classification of intracranial hemorrhages]. ZHURNAL VOPROSY NEIROKHIRURGII IMENI N. N. BURDENKO 2023; 87:85-91. [PMID: 37011333 DOI: 10.17116/neiro20238702185] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/05/2023]
Abstract
This review summarizes pooled experience with the development, implementation, and effectiveness of machine learning technologies for CT-based diagnosis of intracranial hemorrhage. The authors analyzed 21 original articles published between 2015 and 2022, identified using the keywords "intracranial hemorrhage," "machine learning," "deep learning," and "artificial intelligence." The review presents general concepts of machine learning and considers in more detail the technical characteristics of the datasets used to create AI algorithms for particular clinical tasks, their possible impact on effectiveness, and the clinical experience to date.
Affiliation(s)
- A K Smorchkova
- Moscow Research Practical Clinical Center for Diagnostics and Telemedicine Technologies, Moscow, Russia
- A N Khoruzhaya
- Moscow Research Practical Clinical Center for Diagnostics and Telemedicine Technologies, Moscow, Russia
- E I Kremneva
- Moscow Research Practical Clinical Center for Diagnostics and Telemedicine Technologies, Moscow, Russia
- Neurology Research Center, Moscow, Russia
- A V Petryaikin
- Moscow Research Practical Clinical Center for Diagnostics and Telemedicine Technologies, Moscow, Russia
21
Detection of Incidental Pulmonary Embolism on Conventional Contrast-Enhanced Chest CT: Comparison of an Artificial Intelligence Algorithm and Clinical Reports. AJR Am J Roentgenol 2022; 219:895-902. [PMID: 35822644 DOI: 10.2214/ajr.22.27895] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
BACKGROUND. Artificial intelligence (AI) algorithms have shown strong performance for detection of pulmonary embolism (PE) on CT examinations performed using a dedicated protocol for PE detection. AI performance is less well studied for detecting PE on examinations ordered for reasons other than suspected PE (i.e., incidental PE [iPE]). OBJECTIVE. The purpose of this study was to assess the diagnostic performance of an AI algorithm for detection of iPE on conventional contrast-enhanced chest CT examinations. METHODS. This retrospective study included 2555 patients (mean age, 53.2 ± 14.5 [SD] years; 1340 women, 1215 men) who underwent 3003 conventional contrast-enhanced chest CT examinations (i.e., not using pulmonary CTA protocols) between September 2019 and February 2020. A commercial AI algorithm was applied to the images to detect acute iPE. A vendor-supplied natural language processing (NLP) algorithm was applied to the clinical reports to identify examinations interpreted as positive for iPE. For all examinations that were positive by the AI-based image review or by NLP-based report review, a multireader adjudication process was implemented to establish a reference standard for iPE. Images were also reviewed to identify explanations of AI misclassifications. RESULTS. On the basis of the adjudication process, the frequency of iPE was 1.3% (40/3003). AI detected four iPEs missed by clinical reports, and clinical reports detected seven iPEs missed by AI. AI, compared with clinical reports, exhibited significantly lower PPV (86.8% vs 97.3%, p = .03) and specificity (99.8% vs 100.0%, p = .045). Differences in sensitivity (82.5% vs 90.0%, p = .37) and NPV (99.8% vs 99.9%, p = .36) were not significant. For AI, neither sensitivity nor specificity varied significantly in association with age, sex, patient status, or cancer-related clinical scenario (all p > .05). 
Explanations of false-positives by AI included metastatic lymph nodes and pulmonary venous filling defect, and explanations of false-negatives by AI included surgically altered anatomy and small-caliber subsegmental vessels. CONCLUSION. AI had high NPV and moderate PPV for iPE detection, detecting some iPEs missed by radiologists. CLINICAL IMPACT. Potential applications of the AI tool include serving as a second reader to help detect additional iPEs or as a worklist triage tool to allow earlier iPE detection and intervention. Various explanations of AI misclassifications may provide targets for model improvement.
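The study's dual-screen design, in which any exam flagged positive by either the image-based AI or the report NLP goes to multireader adjudication, reduces to simple set logic; the function names below are illustrative, not the vendors' APIs:

```python
def select_for_adjudication(ai_positive, nlp_positive):
    """Exams flagged by either the image AI or the report NLP."""
    return set(ai_positive) | set(nlp_positive)

def confusion_counts(predicted_positive, reference_positive, all_exams):
    """TP/FP/FN/TN for one detector (AI or clinical report) against the
    adjudicated reference standard."""
    pred, ref = set(predicted_positive), set(reference_positive)
    tp = len(pred & ref)           # flagged and truly positive
    fp = len(pred - ref)           # flagged but negative on adjudication
    fn = len(ref - pred)           # missed true positives
    tn = len(set(all_exams)) - tp - fp - fn
    return tp, fp, fn, tn
```

From these four counts, the abstract's sensitivity, specificity, PPV, and NPV comparisons follow directly.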
22
Lee S, Jeong B, Kim M, Jang R, Paik W, Kang J, Chung WJ, Hong GS, Kim N. Emergency triage of brain computed tomography via anomaly detection with a deep generative model. Nat Commun 2022; 13:4251. [PMID: 35869112 PMCID: PMC9307758 DOI: 10.1038/s41467-022-31808-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Accepted: 07/05/2022] [Indexed: 11/09/2022] Open
Abstract
Triage is essential for the early diagnosis and reporting of neurologic emergencies. Herein, we report the development of an anomaly detection algorithm (ADA) with a deep generative model trained on brain computed tomography (CT) images of healthy individuals that reprioritizes radiology worklists and provides lesion attention maps for brain CT images with critical findings. In the internal and external validation datasets, the ADA achieved area under the curve values (95% confidence interval) of 0.85 (0.81–0.89) and 0.87 (0.85–0.89), respectively, for detecting emergency cases. In a clinical simulation test of an emergency cohort, the median wait time was significantly shorter post-ADA triage than pre-ADA triage by 294 s (422.5 s [interquartile range, IQR 299] to 70.5 s [IQR 168]), and the median radiology report turnaround time was significantly faster post-ADA triage than pre-ADA triage by 297.5 s (445.0 s [IQR 298] to 88.5 s [IQR 179]) (all p < 0.001).
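ADA-style triage amounts to re-sorting the reading worklist by anomaly score, keeping arrival order as a tiebreak; a minimal sketch with assumed record fields (the paper's actual scheduling logic is not specified in the abstract):

```python
def reprioritize(worklist):
    """worklist: iterable of (study_id, anomaly_score, arrival_index).
    Highest anomaly scores are read first; ties keep first-come order."""
    return sorted(worklist, key=lambda s: (-s[1], s[2]))
```

Cases with critical findings rise to the front of the queue, which is what drives the reported reductions in wait time and report turnaround time.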
23
Lam Shin Cheung J, Ali A, Abdalla M, Fine B. U"AI" Testing: User Interface and Usability Testing of a Chest X-ray AI Tool in a Simulated Real-World Workflow. Can Assoc Radiol J 2022; 74:314-325. [PMID: 36189838 DOI: 10.1177/08465371221131200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Purpose: To observe interactions of practicing radiologists with a chest x-ray AI tool and evaluate its usability and impact on workflow efficiency. Methods: Using a simulated clinical workflow and remote multi-monitor screensharing, we prospectively assessed the interactions of 10 staff radiologists (5-33 years of experience) with a PACS-embedded, regulatory-approved chest x-ray AI tool. Qualitatively, we collected feedback using a think-aloud method and post-testing semi-structured interview; transcript themes were categorized by: (1) AI tool features, (2) deployment considerations, and (3) broad human-AI interactions. Quantitatively, we used time-stamped video recordings to compare reporting and decision-making efficiency with and without AI assistance. Results: For AI tool features, radiologists appreciated the simple binary classification (normal vs abnormal) and found the heatmap essential to understand what the AI considered abnormal; users were uncertain of how to interpret confidence values. Regarding deployment considerations, radiologists thought the tool would be especially helpful for identifying subtle diagnoses; opinions were mixed on whether the tool impacted perceived efficiency, accuracy, and confidence. Considering general human-AI interactions, radiologists shared concerns about automation bias especially when relying on an automated triage function. Regarding decision-making and workflow efficiency, participants began dictating 5 seconds later (42% increase, P = .02) and took 14 seconds longer to complete cases (33% increase, P = .09) with AI assistance. Conclusions: Radiologist usability testing provided insights into effective AI tool features, deployment considerations, and human-AI interactions that can guide successful AI deployment. Early AI adoption may increase radiologists' decision-making and total reporting time but improves with experience.
Affiliation(s)
- Amna Ali
- Institute for Better Health, Trillium Health Partners, Mississauga, ON, Canada
- Mohamed Abdalla
- Department of Computer Science, University of Toronto, Toronto, ON, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Benjamin Fine
- Institute for Better Health, Trillium Health Partners, Mississauga, ON, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
24
Warman R, Warman A, Warman P, Degnan A, Blickman J, Chowdhary V, Dash D, Sangal R, Vadhan J, Bueso T, Windisch T, Neves G. Deep Learning System Boosts Radiologist Detection of Intracranial Hemorrhage. Cureus 2022; 14:e30264. [PMID: 36381767 PMCID: PMC9653089 DOI: 10.7759/cureus.30264] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/13/2022] [Indexed: 01/25/2023] Open
Abstract
BACKGROUND Intracranial hemorrhage (ICH) requires emergent medical treatment for positive outcomes. While previous artificial intelligence (AI) solutions achieved rapid diagnostics, none were shown to improve the performance of radiologists in detecting ICHs. Here, we show that the Caire ICH artificial intelligence system enhances radiologists' ICH diagnosis performance. METHODS A dataset of non-contrast-enhanced axial cranial computed tomography (CT) scans (n=532) was labeled for the presence or absence of an ICH; when an ICH was detected, its subtype was identified. Three radiologists first reviewed the dataset unassisted; after a washout period, they reviewed the same dataset with the assistance of the Caire ICH system. Performance was measured with respect to reader agreement, accuracy, sensitivity, and specificity when compared to the ground truth, defined as reader consensus. RESULTS Caire ICH improved inter-reader agreement on average by 5.76% in a dataset with an ICH prevalence of 74.3%. Further, radiologists using Caire ICH detected an average of 18 more ICHs and significantly increased their accuracy by 6.15%, their sensitivity by 4.6%, and their specificity by 10.62%. The Caire ICH system also improved the radiologists' ability to accurately identify the ICH subtypes present. CONCLUSION The Caire ICH device significantly improves the performance of a cohort of radiologists. Such a device has the potential to improve patient outcomes and reduce misdiagnosis of ICH.
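One simple way to quantify inter-reader agreement of the kind reported above is mean pairwise percent agreement; this is a sketch under that assumption, since the abstract does not name the agreement statistic used:

```python
from itertools import combinations

def mean_pairwise_agreement(labels_by_reader):
    """labels_by_reader: one equal-length list of case labels per reader.
    Returns the fraction of cases on which reader pairs agree, averaged
    over all pairs."""
    pairs = list(combinations(labels_by_reader, 2))
    total = 0.0
    for r1, r2 in pairs:
        total += sum(a == b for a, b in zip(r1, r2)) / len(r1)
    return total / len(pairs)
```

Running the metric before and after AI assistance yields the kind of delta (e.g., +5.76%) the study reports; chance-corrected statistics such as Fleiss' kappa are a common alternative.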
Affiliation(s)
- Andrew Degnan
- Radiology, University of Pittsburgh Medical Center (UPMC) Children's Hospital of Pittsburgh, Pittsburgh, USA
- Dev Dash
- Emergency Medicine, Stanford University, Stanford, USA
- Rohit Sangal
- Emergency Medicine, Yale School of Medicine, New Haven, USA
- Jason Vadhan
- Emergency Medicine, The University of Texas Southwestern (UTSW), Dallas, USA
- Tulio Bueso
- Neurology, The Texas Tech University Health Sciences Center (TTUHSC), Lubbock, USA
- Gabriel Neves
- Neurology, The Texas Tech University Health Sciences Center (TTUHSC), Lubbock, USA
25
Performance of a Chest Radiography AI Algorithm for Detection of Missed or Mislabeled Findings: A Multicenter Study. Diagnostics (Basel) 2022; 12:diagnostics12092086. [PMID: 36140488 PMCID: PMC9497851 DOI: 10.3390/diagnostics12092086] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Revised: 08/23/2022] [Accepted: 08/25/2022] [Indexed: 12/02/2022] Open
Abstract
Purpose: We assessed whether an artificial intelligence (AI) algorithm could detect chest radiograph (CXR) findings that were missed or mislabeled in radiology reports. Methods: We queried a multi-institutional radiology report search database of 13 million reports to identify all CXR reports with addendums from 1999–2021. Of the 3469 CXR reports with an addendum, a thoracic radiologist excluded reports whose addenda were created for typographic errors, wrong report templates, missing sections, or uninterpreted signoffs. The remaining reports (279 patients) contained addenda with errors related to side discrepancies or missed findings such as pulmonary nodules, consolidation, pleural effusions, pneumothorax, and rib fractures. All CXRs were processed with an AI algorithm. Descriptive statistics were performed to determine the sensitivity, specificity, and accuracy of the AI in detecting missed or mislabeled findings. Results: The AI had high sensitivity (96%), specificity (100%), and accuracy (96%) for detecting all missed and mislabeled CXR findings. The corresponding finding-specific statistics (sensitivity, specificity, accuracy) were nodules (96%, 100%, 96%), pneumothorax (84%, 100%, 85%), pleural effusion (100%, 17%, 67%), consolidation (98%, 100%, 98%), and rib fractures (87%, 100%, 94%). Conclusions: The CXR AI could accurately detect mislabeled and missed findings. Clinical Relevance: The CXR AI can reduce the frequency of errors in the detection and side-labeling of radiographic findings.
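Conceptually, checking for missed or side-mislabeled findings is a per-finding comparison between the AI's labels and the report's; a toy sketch, where the dictionary structure and finding names are illustrative rather than taken from the study:

```python
def finding_discrepancies(ai_labels, report_labels):
    """Return findings on which the AI and the report disagree (missed
    findings or mislabels such as wrong side). A key absent from either
    dict counts as 'finding not reported'."""
    keys = set(ai_labels) | set(report_labels)
    return {k for k in keys if ai_labels.get(k) != report_labels.get(k)}
```

Each discrepancy would then be adjudicated against the addendum to score the AI as correct (sensitivity) or spurious (specificity).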
26
Wismüller A, DSouza AM, Abidin AZ, Ali Vosoughi M, Gange C, Cortopassi IO, Bozovic G, Bankier AA, Batra K, Chodakiewitz Y, Xi Y, Whitlow CT, Ponnatapura J, Wendt GJ, Weinberg EP, Stockmaster L, Shrier DA, Shin MC, Modi R, Lo HS, Kligerman S, Hamid A, Hahn LD, Garcia GM, Chung JH, Altes T, Abbara S, Bader AS. Early-stage COVID-19 pandemic observations on pulmonary embolism using nationwide multi-institutional data harvesting. NPJ Digit Med 2022; 5:120. [PMID: 35986059 PMCID: PMC9388980 DOI: 10.1038/s41746-022-00653-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Accepted: 07/06/2022] [Indexed: 11/29/2022] Open
Abstract
We introduce a multi-institutional data harvesting (MIDH) method for longitudinal observation of medical imaging utilization and reporting. By tracking both large-scale utilization and clinical imaging results data, the MIDH approach is targeted at measuring surrogates for important disease-related observational quantities over time. To quantitatively investigate its clinical applicability, we performed a retrospective multi-institutional study encompassing 13 healthcare systems throughout the United States before and after the 2020 COVID-19 pandemic. Using repurposed software infrastructure of a commercial AI-based image analysis service, we harvested data on medical imaging service requests and radiology reports for 40,037 computed tomography pulmonary angiograms (CTPA) to evaluate for pulmonary embolism (PE). Specifically, we compared two 70-day observational periods, namely (i) a pre-pandemic control period from 11/25/2019 through 2/2/2020, and (ii) a period during the early COVID-19 pandemic from 3/8/2020 through 5/16/2020. Natural language processing (NLP) on final radiology reports served as the ground truth for identifying positive PE cases, where we found an NLP accuracy of 98% for classifying radiology reports as positive or negative for PE based on a manual review of 2,400 radiology reports. Fewer CTPA exams were performed during the early COVID-19 pandemic than during the pre-pandemic period (9806 vs. 12,106). However, the PE positivity rate was significantly higher (11.6 vs. 9.9%, p < 10-4) with an excess of 92 PE cases during the early COVID-19 outbreak, i.e., ~1.3 daily PE cases more than statistically expected. Our results suggest that MIDH can contribute value as an exploratory tool, aiming at a better understanding of pandemic-related effects on healthcare.
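A pooled two-proportion z-test is a standard way to compare positivity rates like those above; this is a sketch under that assumption (the paper's exact test is not named in the abstract), with counts back-calculated approximately from the reported rates and exam volumes:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))    # pooled standard error
    z = (p1 - p2) / se
    # Two-sided p from the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided
```

Using roughly 1,137/9,806 positives in the pandemic period versus 1,198/12,106 pre-pandemic (approximations from the 11.6% and 9.9% rates), the test gives z near 4 and p well below 10^-4, consistent with the abstract's reported significance.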
Affiliation(s)
- Axel Wismüller
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
- Department of Biomedical Engineering, University of Rochester Medical Center, Rochester, NY, USA
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, USA
- Faculty of Medicine, Ludwig Maximilian University of Munich, Munich, Germany
- Adora M DSouza
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, USA
- Anas Z Abidin
- Department of Biomedical Engineering, University of Rochester Medical Center, Rochester, NY, USA
- M Ali Vosoughi
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, USA
- Christopher Gange
- Department of Radiology & Biomedical Sciences, Yale University School of Medicine, New Haven, CT, USA
- Isabel O Cortopassi
- Department of Radiology, Mayo Clinic College of Medicine and Science, Jacksonville, FL, USA
- Gracijela Bozovic
- Department of Radiology, University of Massachusetts Chan Medical School, Worcester, MA, USA
- Alexander A Bankier
- Department of Radiology, University of Massachusetts Chan Medical School, Worcester, MA, USA
- Kiran Batra
- Department of Radiology, University of Texas, Southwestern Medical Center, Dallas, TX, USA
- Yosef Chodakiewitz
- Department of Imaging, S. Mark Taper Foundation Imaging Center, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Yin Xi
- Department of Radiology, University of Texas, Southwestern Medical Center, Dallas, TX, USA
- Gary J Wendt
- Department of Radiology, University of Wisconsin, Madison, WI, USA
- Eric P Weinberg
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
- Larry Stockmaster
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
- David A Shrier
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
- Min Chul Shin
- Department of Radiology, Christiana Care Health System, Newark, DE, USA
- Roshan Modi
- Department of Radiology, Christiana Care Health System, Newark, DE, USA
- Hao Steven Lo
- Department of Radiology, University of Massachusetts Chan Medical School, Worcester, MA, USA
- Seth Kligerman
- Department of Radiology, University of California, San Diego, San Diego, CA, USA
- Aws Hamid
- Emory University School of Medicine, Department of Radiology and Imaging Sciences, Atlanta, GA, USA
- Lewis D Hahn
- Department of Radiology, University of California, San Diego, San Diego, CA, USA
- Jonathan H Chung
- Department of Radiology, University of Chicago, Chicago, IL, USA
- Suhny Abbara
- Department of Radiology, University of Texas, Southwestern Medical Center, Dallas, TX, USA
- Anna S Bader
- Department of Radiology & Biomedical Sciences, Yale University School of Medicine, New Haven, CT, USA.
27
Optimizing Operation Room Utilization—A Prediction Model. BIG DATA AND COGNITIVE COMPUTING 2022. [DOI: 10.3390/bdcc6030076] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/07/2022]
Abstract
Background: Operating rooms are the core of hospitals. They are a primary source of revenue and are often seen as one of the bottlenecks in the medical system. Many efforts are made to increase throughput, reduce costs, and maximize income, as well as optimize clinical outcomes and patient satisfaction. We trained a predictive model of surgery duration to improve the productivity and utilization of operating rooms in general hospitals. Methods: We collected clinical and administrative data for the last 10 years from two large general public hospitals in Israel. We trained a machine learning model to estimate the expected length of surgery from pre-operative data. These data included diagnoses, laboratory tests, risk factors, demographics, procedures, anesthesia type, and the main surgeon’s level of experience. We compared our model to a naïve model that represented current practice. Findings: Our prediction model achieved better performance than the naïve model and explained almost 70% of the variance in surgery durations. Interpretation: A machine learning-based model can be a useful approach for increasing operating room utilization. Among the most important factors were the types of procedures and the main surgeon’s level of experience. The model enables harmonizing hospital productivity through wise scheduling and matching suitable teams to a variety of clinical procedures, for the benefit of the individual patient and the system as a whole.
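The headline result ("explained almost 70% of the variance") is an R² statement; a minimal sketch of the metric used to compare the learned duration predictor against the naïve baseline (data and model are illustrative, not the study's):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

A naïve model that always predicts the mean historical duration scores R² = 0 by construction, so "almost 70%" quantifies how much scheduling-relevant signal the pre-operative features add.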
28
Sharma M, Savage C, Nair M, Larsson I, Svedberg P, Nygren JM. Artificial Intelligence Applications in Health Care Practice: A Scoping Review (Preprint). J Med Internet Res 2022; 24:e40238. [PMID: 36197712 PMCID: PMC9582911 DOI: 10.2196/40238] [Citation(s) in RCA: 30] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Revised: 08/19/2022] [Accepted: 08/30/2022] [Indexed: 11/25/2022] Open
Abstract
Background Artificial intelligence (AI) is often heralded as a potential disruptor that will transform the practice of medicine. The amount of data collected and available in health care, coupled with advances in computational power, has contributed to advances in AI and an exponential growth of publications. However, the development of AI applications does not guarantee their adoption into routine practice. There is a risk that despite the resources invested, benefits for patients, staff, and society will not be realized if AI implementation is not better understood. Objective The aim of this study was to explore how the implementation of AI in health care practice has been described and researched in the literature by answering 3 questions: What are the characteristics of research on implementation of AI in practice? What types and applications of AI systems are described? What characteristics of the implementation process for AI systems are discernible? Methods A scoping review was conducted of MEDLINE (PubMed), Scopus, Web of Science, CINAHL, and PsycINFO databases to identify empirical studies of AI implementation in health care since 2011, in addition to snowball sampling of selected reference lists. Using Rayyan software, we screened titles and abstracts and selected full-text articles. Data from the included articles were charted and summarized. Results Of the 9218 records retrieved, 45 (0.49%) articles were included. The articles cover diverse clinical settings and disciplines; most (32/45, 71%) were published recently, were from high-income countries (33/45, 73%), and were intended for care providers (25/45, 56%). AI systems are predominantly intended for clinical care, particularly clinical care pertaining to patient-provider encounters. More than half (24/45, 53%) possess no action autonomy but rather support human decision-making. 
The focus of most research was on establishing the effectiveness of interventions (16/45, 35%) or related to technical and computational aspects of AI systems (11/45, 24%). Focus on the specifics of implementation processes does not yet seem to be a priority in research, and the use of frameworks to guide implementation is rare. Conclusions Our current empirical knowledge derives from implementations of AI systems with low action autonomy and approaches common to implementations of other types of information systems. To develop a specific and empirically based implementation framework, further research is needed on the more disruptive types of AI systems being implemented in routine care and on aspects unique to AI implementation in health care, such as building trust, addressing transparency issues, developing explainable and interpretable solutions, and addressing ethical concerns around privacy and data protection.
Affiliation(s)
- Malvika Sharma
- Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Medical Management Centre, Stockholm, Sweden
- Carl Savage
- Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Medical Management Centre, Stockholm, Sweden
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Monika Nair
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Ingrid Larsson
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Petra Svedberg
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Jens M Nygren
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
29
Charting the potential of brain computed tomography deep learning systems. J Clin Neurosci 2022; 99:217-223. [DOI: 10.1016/j.jocn.2022.03.014] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2021] [Revised: 02/17/2022] [Accepted: 03/08/2022] [Indexed: 12/22/2022]
30
Iorga M, Drakopoulos M, Naidech AM, Katsaggelos AK, Parrish TB, Hill VB. Labeling Noncontrast Head CT Reports for Common Findings Using Natural Language Processing. AJNR Am J Neuroradiol 2022; 43:721-726. [PMID: 35483905 PMCID: PMC9089256 DOI: 10.3174/ajnr.a7500] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Accepted: 03/14/2022] [Indexed: 11/07/2022]
Abstract
BACKGROUND AND PURPOSE Prioritizing reading of noncontrast head CT examinations through an automated triage system may improve time to care for patients with acute neuroradiologic findings. We present a natural language-processing approach for labeling findings in noncontrast head CT reports, which permits creation of a large, labeled dataset of head CT images for development of emergent-finding detection and reading-prioritization algorithms. MATERIALS AND METHODS In this retrospective study, 1002 clinical radiology reports from noncontrast head CTs collected between 2008 and 2013 were manually labeled across 12 common neuroradiologic finding categories. Each report was then encoded using an n-gram model of unigrams, bigrams, and trigrams. A logistic regression model was then trained to label each report for every common finding. Models were trained and assessed using a combination of L2 regularization and 5-fold cross-validation. RESULTS Model performance was strongest for the fracture, hemorrhage, herniation, mass effect, pneumocephalus, postoperative status, and volume loss models in which the area under the receiver operating characteristic curve exceeded 0.95. Performance was relatively weaker for the edema, hydrocephalus, infarct, tumor, and white-matter disease models (area under the receiver operating characteristic curve > 0.85). Analysis of coefficients revealed finding-specific words among the top coefficients in each model. Class output probabilities were found to be a useful indicator of predictive error on individual report examples in higher-performing models. CONCLUSIONS Combining logistic regression with n-gram encoding is a robust approach to labeling common findings in noncontrast head CT reports.
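The encoding step above (unigram, bigram, and trigram counts) can be sketched with the standard library; in practice, scikit-learn's CountVectorizer with ngram_range=(1, 3) feeding an L2-regularized LogisticRegression is the idiomatic equivalent of the pipeline described:

```python
from collections import Counter

def ngram_features(report_text, n_max=3):
    """Count unigrams, bigrams, and trigrams in a lowercased report,
    producing the sparse feature vector a regularized logistic
    regression would consume."""
    tokens = report_text.lower().split()
    feats = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            feats[" ".join(tokens[i:i + n])] += 1
    return feats
```

One such logistic regression model is trained per finding category (fracture, hemorrhage, herniation, and so on), each emitting an independent label probability for the report.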
Affiliation(s)
- M Iorga
- From the Departments of Radiology (M.I., M.D., T.B.P., V.B.H.)
- Departments of Biomedical Engineering (M.I., A.K.K., T.B.P.)
- M Drakopoulos
- From the Departments of Radiology (M.I., M.D., T.B.P., V.B.H.)
- A M Naidech
- Neurology (A.M.N.), Northwestern University Feinberg School of Medicine, Chicago, Illinois
- A K Katsaggelos
- Departments of Biomedical Engineering (M.I., A.K.K., T.B.P.)
- Electrical and Computer Engineering (A.K.K.)
- Computer Science (A.K.K.), Northwestern University, Chicago, Illinois
- T B Parrish
- From the Departments of Radiology (M.I., M.D., T.B.P., V.B.H.)
- Departments of Biomedical Engineering (M.I., A.K.K., T.B.P.)
- V B Hill
- From the Departments of Radiology (M.I., M.D., T.B.P., V.B.H.)
31
Seyam M, Weikert T, Sauter A, Brehm A, Psychogios MN, Blackham KA. Utilization of Artificial Intelligence-based Intracranial Hemorrhage Detection on Emergent Noncontrast CT Images in Clinical Workflow. Radiol Artif Intell 2022; 4:e210168. [PMID: 35391777 DOI: 10.1148/ryai.210168] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2021] [Revised: 01/10/2022] [Accepted: 01/20/2022] [Indexed: 01/23/2023]
Abstract
Authors implemented an artificial intelligence (AI)-based detection tool for intracranial hemorrhage (ICH) on noncontrast CT images into an emergent workflow, evaluated its diagnostic performance, and assessed clinical workflow metrics compared with pre-AI implementation. The finalized radiology report constituted the ground truth for the analysis, and CT examinations (n = 4450) before and after implementation were retrieved using various keywords for ICH. Diagnostic performance was assessed, and mean values with their respective 95% CIs were reported to compare workflow metrics (report turnaround time, communication time of a finding, consultation time of another specialty, and turnaround time in the emergency department). Although practicable diagnostic performance was observed for overall ICH detection with 93.0% diagnostic accuracy, 87.2% sensitivity, and 97.8% negative predictive value, the tool yielded lower detection rates for specific subtypes of ICH (eg, 69.2% [74 of 107] for subdural hemorrhage and 77.4% [24 of 31] for acute subarachnoid hemorrhage). Common false-positive findings included postoperative and postischemic defects (23.6%, 37 of 157), artifacts (19.7%, 31 of 157), and tumors (15.3%, 24 of 157). Although workflow metrics such as communicating a critical finding (70 minutes [95% CI: 54, 85] vs 63 minutes [95% CI: 55, 71]) were on average reduced after implementation, future efforts are necessary to streamline the workflow all along the workflow chain. It is crucial to define a clear framework and recognize limitations as AI tools are only as reliable as the environment in which they are deployed. Keywords: CT, CNS, Stroke, Diagnosis, Classification, Application Domain © RSNA, 2022.
Affiliation(s)
- Muhannad Seyam, Thomas Weikert, Alexander Sauter, Alex Brehm, Marios-Nikos Psychogios, Kristine A Blackham
- Department of Diagnostic and Interventional Neuroradiology, Clinic of Radiology and Nuclear Medicine (M.S., A.B., M.N.P., K.A.B.), and Department of Radiology and Nuclear Medicine (T.W., A.S.), University Hospital of Basel, Petersgraben 4, 4031 Basel, Switzerland; and Department of Neurologic Sciences, University of Vermont Medical Center, Burlington, Vt (M.S.)
32
Laur O, Wang B. Musculoskeletal trauma and artificial intelligence: current trends and projections. Skeletal Radiol 2022; 51:257-269. [PMID: 34089338 DOI: 10.1007/s00256-021-03824-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Revised: 05/13/2021] [Accepted: 05/18/2021] [Indexed: 02/02/2023]
Abstract
Musculoskeletal trauma accounts for a significant fraction of emergency department visits and patients seeking urgent care, with a high financial cost to society. Diagnostic imaging is indispensable in the workup and management of trauma patients. However, diagnostic imaging represents a complex multifaceted system, with many aspects of its workflow prone to inefficiencies or human error. Recent technological innovations in artificial intelligence and machine learning have shown promise to revolutionize our systems for providing medical care to patients. This review will provide a general overview of the current state of artificial intelligence and machine learning applications in different aspects of trauma imaging and provide a vision for how such applications could be leveraged to enhance our diagnostic imaging systems and optimize patient outcomes.
Affiliation(s)
- Olga Laur, Benjamin Wang
- Division of Musculoskeletal Radiology, Department of Radiology, NYU Langone Health, 301 East 17th Street, 6th Floor, New York, NY, 10003, USA
33
Kau T, Ziurlys M, Taschwer M, Kloss-Brandstätter A, Grabner G, Deutschmann H. FDA-approved deep learning software application versus radiologists with different levels of expertise: detection of intracranial hemorrhage in a retrospective single-center study. Neuroradiology 2022; 64:981-990. [PMID: 34988593 DOI: 10.1007/s00234-021-02874-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Accepted: 12/01/2021] [Indexed: 11/25/2022]
Abstract
PURPOSE To assess an FDA-approved and CE-certified deep learning (DL) software application compared to the performance of human radiologists in detecting intracranial hemorrhages (ICH). METHODS Within a 20-week trial from January to May 2020, 2210 adult non-contrast head CT scans were performed in a single center and automatically analyzed by an artificial intelligence (AI) solution with workflow integration. After excluding 22 scans due to severe motion artifacts, images were retrospectively assessed for the presence of ICHs by a second-year resident and a certified radiologist under simulated time pressure. Disagreements were resolved by a subspecialized neuroradiologist serving as the reference standard. We calculated interrater agreement and diagnostic performance parameters and applied the Breslow-Day and Cochran-Mantel-Haenszel tests. RESULTS An ICH was present in 214 out of 2188 scans. The interrater agreement between the resident and the certified radiologist was very high (κ = 0.89) and even higher (κ = 0.93) between the resident and the reference standard. The software delivered 64 false-positive and 68 false-negative results, giving an overall sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of 68.2%, 96.8%, 69.5%, 96.6%, and 94.0%, respectively. Corresponding values for the resident were 94.9%, 99.2%, 93.1%, 99.4%, and 98.8%. The accuracy of the DL application was inferior (p < 0.001) to that of both the resident and the certified neuroradiologist. CONCLUSION A resident under time pressure outperformed an FDA-approved DL program in detecting ICH in CT scans. Our results underline the importance of thoughtful workflow integration and post-approval validation of AI applications in various clinical environments.
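The DL tool's performance figures in this abstract can be reconstructed from the raw counts it reports (2188 scans, 214 with ICH, 64 false positives, 68 false negatives); a minimal sketch of the standard confusion-matrix arithmetic:

```python
# Reconstruct the DL tool's reported metrics from the counts in the abstract.
total, positives = 2188, 214
fp, fn = 64, 68

tp = positives - fn          # hemorrhages the tool detected
tn = total - positives - fp  # scans correctly read as negative

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)         # positive predictive value
npv = tn / (tn + fn)         # negative predictive value
accuracy = (tp + tn) / total

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
print(f"PPV {ppv:.1%}, NPV {npv:.1%}, accuracy {accuracy:.1%}")
```

The results (68.2%, 96.8%, 69.5%, 96.6%, 94.0%) match the values reported for the software, confirming the abstract's figures are internally consistent.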
Affiliation(s)
- Thomas Kau
- Department of Radiology, Landeskrankenhaus Villach, Nikolaigasse 43, 9500 Villach, Austria; Division of Pediatric Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 9, 8036 Graz, Austria
- Mindaugas Ziurlys
- Department of Radiology, Landeskrankenhaus Villach, Nikolaigasse 43, 9500 Villach, Austria
- Manuel Taschwer
- Department of Radiology, Landeskrankenhaus Villach, Nikolaigasse 43, 9500 Villach, Austria
- Günther Grabner
- Department of Medical Engineering, Carinthia University of Applied Sciences, Primoschgasse 8, 9020 Klagenfurt, Austria
- Hannes Deutschmann
- Division of Neuroradiology, Vascular and Interventional Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 9, 8036 Graz, Austria
34
How does artificial intelligence in radiology improve efficiency and health outcomes? Pediatr Radiol 2022; 52:2087-2093. [PMID: 34117522 PMCID: PMC9537124 DOI: 10.1007/s00247-021-05114-8] [Citation(s) in RCA: 51] [Impact Index Per Article: 25.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Revised: 04/08/2021] [Accepted: 05/24/2021] [Indexed: 12/11/2022]
Abstract
Since the introduction of artificial intelligence (AI) in radiology, the promise has been that it will improve health care and reduce costs. Has AI been able to fulfill that promise? We describe six clinical objectives that can be supported by AI: a more efficient workflow, shortened reading time, a reduction of dose and contrast agents, earlier detection of disease, improved diagnostic accuracy and more personalized diagnostics. We provide examples of use cases including the available scientific evidence for its impact based on a hierarchical model of efficacy. We conclude that the market is still maturing and little is known about the contribution of AI to clinical practice. More real-world monitoring of AI in clinical practice is expected to aid in determining the value of AI and making informed decisions on development, procurement and reimbursement.
35
Dyer T, Chawda S, Alkilani R, Morgan TN, Hughes M, Rasalingham S. Validation of an artificial intelligence solution for acute triage and rule-out normal of non-contrast CT head scans. Neuroradiology 2021; 64:735-743. [PMID: 34623478 DOI: 10.1007/s00234-021-02826-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Accepted: 09/10/2021] [Indexed: 11/29/2022]
Abstract
PURPOSE Non-contrast CT head scans provide rapid and accurate diagnosis of acute head injury; however, increased utilisation of CT head scans makes it difficult to prioritise acutely unwell patients and places pressure on busy emergency departments (EDs). This study validates an AI algorithm to triage patients presenting with Intracranial Haemorrhage (ICH) or Acute Infarct whilst also identifying a subset of patients as Normal, with the potential to function as a rule-out test. METHODS In total, 390 CT head scans were collected from 3 institutions in the UK, US and India. Ground-truth labels were assigned by 3 FRCR consultant radiologists. AI performance, as well as the performance of 3 independent radiologists, was measured against ground-truth labels. RESULTS The algorithm showed AUC values of 0.988 (0.978-0.994), 0.933 (0.901-0.961) and 0.939 (0.919-0.958) for ICH, Acute Infarct and Normal, respectively. Sensitivity/specificity for ICH and Acute Infarct were 0.988/0.925 and 0.833/0.927, respectively, compared to 0.907/0.991 and 0.618/0.977 for radiologists. AI rule-out of Normal scans achieved 0.93% negative predictive value (NPV) for the removal of 54.3% of Normal cases, compared to 86.8% NPV for radiologists. CONCLUSION We show our algorithm can provide effective triage of ICH and Acute Infarct to prioritise acutely unwell patients. AI can also benefit clinical accuracy, with the algorithm identifying 91.3% of radiologist false negatives for ICH and 69.1% for Acute Infarct. Rule-out of Normal scans has huge potential for workload management in busy EDs, in this case removing 27.4% of all scans with no acute findings missed.
Affiliation(s)
- Tom Dyer
- Behold.ai, 180 Borough high St, London, SE1 1LB, UK.
| | - Sanjiv Chawda
- Department of Radiology, Barking, Havering and Redbridge University Hospitals NHS Trust, Romford, RM7 0AG, UK
| | - Raed Alkilani
- Department of Radiology, Barking, Havering and Redbridge University Hospitals NHS Trust, Romford, RM7 0AG, UK
| | | | - Mike Hughes
- Behold.ai, 180 Borough high St, London, SE1 1LB, UK
| | | |
36
Parkinson C, Matthams C, Foley K, Spezi E. Artificial intelligence in radiation oncology: A review of its current status and potential application for the radiotherapy workforce. Radiography (Lond) 2021; 27 Suppl 1:S63-S68. [PMID: 34493445 DOI: 10.1016/j.radi.2021.07.012] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Revised: 07/05/2021] [Accepted: 07/20/2021] [Indexed: 12/15/2022]
Abstract
OBJECTIVE Radiation oncology is a continually evolving speciality. With the development of new imaging modalities and advanced image processing techniques, there is an increasing amount of data available to practitioners. In this narrative review, Artificial Intelligence (AI) is used as a reference to machine learning, and its potential, along with current problems in the field of radiation oncology, is considered from a technical position. KEY FINDINGS AI has the potential to harness the availability of data for improving patient outcomes, reducing toxicity, and easing clinical burdens. However, problems remain, including the complexity of the data required, undefined core outcomes, and limited generalisability. CONCLUSION This review highlights considerations for the radiotherapy workforce, particularly therapeutic radiographers, as there will be an increasing requirement for their familiarity with AI due to their unique position as the interface between imaging technology and patients. IMPLICATIONS FOR PRACTICE Collaboration between AI experts and the radiotherapy workforce is required to overcome current issues before clinical adoption. The development of educational resources and standardised reporting of AI studies may help facilitate this.
Affiliation(s)
- C Parkinson
- School of Engineering, Cardiff University, UK.
| | | | | | - E Spezi
- School of Engineering, Cardiff University, UK
| |
37
O’Connor SD, Bhalla M. Should Artificial Intelligence Tell Radiologists Which Study to Read Next? Radiol Artif Intell 2021; 3:e210009. [PMID: 33939773 PMCID: PMC8035575 DOI: 10.1148/ryai.2021210009] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2021] [Revised: 01/19/2021] [Accepted: 01/20/2021] [Indexed: 11/11/2022]
Affiliation(s)
- Stacy D. O’Connor
- From the Departments of Radiology (S.D.O., M.B.) and Surgery (S.D.O.), Medical College of Wisconsin, 9200 W Wisconsin Ave, Milwaukee, WI 53226
| | - Manav Bhalla
- From the Departments of Radiology (S.D.O., M.B.) and Surgery (S.D.O.), Medical College of Wisconsin, 9200 W Wisconsin Ave, Milwaukee, WI 53226
| |