1. Franco PN, Maino C, Mariani I, Gandola DG, Sala D, Bologna M, Talei Franzesi C, Corso R, Ippolito D. Diagnostic performance of an AI algorithm for the detection of appendicular bone fractures in pediatric patients. Eur J Radiol 2024; 178:111637. [PMID: 39053306] [DOI: 10.1016/j.ejrad.2024.111637]
Abstract
PURPOSE To evaluate the diagnostic performance of an Artificial Intelligence (AI) algorithm, previously trained using both adult and pediatric patients, for the detection of acute appendicular fractures in the pediatric population on conventional X-ray radiography (CXR). MATERIALS AND METHODS In this retrospective study, anonymized extremity CXRs of pediatric patients (age <17 years), with or without fractures, were included. Six hundred CXRs (maintaining the balance between positive-for-fracture and negative-for-fracture examinations) were included and grouped per body part (shoulder/clavicle, elbow/upper arm, hand/wrist, leg/knee, foot/ankle). Follow-up CXRs and/or second-level imaging were considered the reference standard. A deep learning algorithm interpreted CXRs for fracture detection on a per-patient, per-radiograph, and per-location level, and its diagnostic performance values were compared with the reference standard. AI diagnostic performance was computed using cross-tables, and 95% confidence intervals (CIs) were obtained by bootstrapping. RESULTS The final cohort included 312 males and 288 females with a mean age of 8.9±4.5 years. Three hundred CXRs (50%) were positive for fractures according to the reference standard. For all fractures, the AI tool showed a per-patient sensitivity of 91.3% (95% CI: 87.6-94.3), specificity of 76.7% (71.5-81.3), and accuracy of 84.0% (82.1-86.0). In the per-radiograph analysis, the AI tool showed 85.0% (81.9-87.8) sensitivity, 88.5% (86.3-90.4) specificity, and 87.2% (85.7-89.6) accuracy. In the per-location analysis, the AI tool identified 606 bounding boxes: 472 (77.9%) were correct, 110 (18.1%) incorrect, and 24 (4.0%) non-overlapping. CONCLUSION The AI algorithm provides good overall diagnostic performance for detecting appendicular fractures in pediatric patients.
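The cross-table metrics with percentile-bootstrap confidence intervals described in the methods can be sketched as follows. This is an illustrative reconstruction only: the function names (`sensitivity_specificity`, `bootstrap_ci`), the number of resamples, and the percentile method are assumptions, not the authors' actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sensitivity_specificity(tp, fn, tn, fp):
    """Compute sensitivity and specificity from a 2x2 cross-table."""
    return tp / (tp + fn), tn / (tn + fp)

def bootstrap_ci(labels, preds, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap CIs for sensitivity and specificity.

    labels, preds: 1-D arrays of 0/1 (reference standard, AI output).
    """
    labels = np.asarray(labels)
    preds = np.asarray(preds)
    n = len(labels)
    sens, spec = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        y, p = labels[idx], preds[idx]
        tp = np.sum((y == 1) & (p == 1))
        fn = np.sum((y == 1) & (p == 0))
        tn = np.sum((y == 0) & (p == 0))
        fp = np.sum((y == 0) & (p == 1))
        if (tp + fn) and (tn + fp):          # skip degenerate resamples
            se, sp = sensitivity_specificity(tp, fn, tn, fp)
            sens.append(se)
            spec.append(sp)
    lo, hi = 100 * alpha / 2, 100 * (1 - alpha / 2)
    return np.percentile(sens, [lo, hi]), np.percentile(spec, [lo, hi])
```

Resampling whole cases (rather than radiographs) matches the per-patient analysis level; per-radiograph CIs would resample radiographs instead.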
Affiliation(s)
- Paolo Niccolò Franco
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, MB, Italy
- Cesare Maino
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, MB, Italy
- Ilaria Mariani
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, MB, Italy
- Davide Giacomo Gandola
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, MB, Italy
- Davide Sala
- Synbrain The Human-Machine Cooperation - AI/ML solutions, Corso Milano 23, 20900 Monza, MB, Italy
- Marco Bologna
- Synbrain The Human-Machine Cooperation - AI/ML solutions, Corso Milano 23, 20900 Monza, MB, Italy
- Cammillo Talei Franzesi
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, MB, Italy
- Rocco Corso
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, MB, Italy
- Davide Ippolito
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, MB, Italy; Department of Medicine and Surgery - University of Milano Bicocca, Via Cadore 33, 20090 Monza, MB, Italy
2. Weber MA. Easily missed pathologies of the musculoskeletal system in the emergency radiology setting. Fortschr Röntgenstr 2024. [PMID: 39094774] [DOI: 10.1055/a-2369-8330]
Abstract
The musculoskeletal region is the main area of easily missed pathologies in the emergency radiology setting, because the majority of diagnoses missed in the emergency setting are fractures. A review of the literature was performed by searching the PubMed and ScienceDirect databases, using the keywords ('missed injuries' or 'missed fractures') and ('emergency radiology' or 'emergency room') and ('musculoskeletal' or 'bone' or 'skeleton') for the title and abstract query. The inclusion criteria were scientific papers written in English or German. Among the 347 relevant hits between 1980 and 2024 identified by the author of this review article, 114 relevant articles were from the years 2018 to 2024. Based on this literature search and the author's personal experience, this study presents useful information for reducing the number of missed pathologies of the musculoskeletal system in the emergency radiology setting. The predominant factors accounting for the majority of missed fractures are 'subtle but still visible fractures' and 'radiographically imperceptible fractures'. Radiologists are able to minimize the factors contributing to fractures being missed. For example, implementing a 'four-eyes principle', i.e., having two readers read the radiographs, would help to avoid missing 'subtle but still visible fractures', and the additional use of cross-sectional imaging would help to avoid missing 'radiographically imperceptible fractures'. Knowledge of what is commonly missed and evaluation of high-risk areas with utmost care also increase the diagnostic performance of radiologists.
· Radiological imaging in an emergency setting increases the likelihood of radiological diagnostic errors, such as missing musculoskeletal pathologies.
· The majority of diagnoses missed in the emergency setting are fractures.
· To lessen the number of easily missed pathologies of the musculoskeletal system in the emergency radiology setting, a systematic approach is necessary.
· Adequate training of radiologists in emergency radiology and close collaboration with clinical partners are important measures to decrease the number of missed musculoskeletal injuries.
· Weber MA. Easily missed pathologies of the musculoskeletal system in the emergency radiology setting. Fortschr Röntgenstr 2024; DOI 10.1055/a-2369-8330.
Affiliation(s)
- Marc-André Weber
- Institute of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, University Medical Center Rostock, Rostock, Germany
3. Oppenheimer J, Lüken S, Geveshausen S, Hamm B, Niehues SM. An overview of the performance of AI in fracture detection in lumbar and thoracic spine radiographs on a per vertebra basis. Skeletal Radiol 2024; 53:1563-1571. [PMID: 38413400] [PMCID: PMC11194188] [DOI: 10.1007/s00256-024-04626-2]
Abstract
PURPOSE Subtle spinal compression fractures can easily be missed. AI may help in interpreting these images. We propose to test the performance of an FDA-approved algorithm for fracture detection in radiographs on a per vertebra basis, assessing performance based on grade of compression, presence of foreign material, severity of degenerative changes, and acuity of the fracture. METHODS Thoracic and lumbar spine radiographs with inquiries for fracture were retrospectively collected and analyzed by the AI. The presence or absence of fracture was defined by the written report or cross-sectional imaging where available. Fractures were classified semi-quantitatively by the Genant classification, by acuity, by the presence of foreign material, and by the overall degree of degenerative change of the spine. The results of the AI were compared to the gold standard. RESULTS A total of 512 exams were included, depicting 4114 vertebrae with 495 fractures. Overall sensitivity was 63.2% for the lumbar spine, significantly higher than for the thoracic spine at 50.6%. Specificity was 96.7% and 98.3%, respectively. Sensitivity increased with fracture grade, without a significant difference between grade 2 and 3 compression fractures (lumbar spine: grade 1, 52.5%; grade 2, 72.3%; grade 3, 75.8%; thoracic spine: grade 1, 42.4%; grade 2, 60.0%; grade 3, 60.0%). The presence of foreign material and a high degree of degenerative changes reduced sensitivity. CONCLUSION Overall performance of the AI on a per vertebra basis was degraded in clinically relevant scenarios such as low-grade compression fractures.
Affiliation(s)
- Oppenheimer J
- Charité Universitätsmedizin Berlin, Klinik für Radiologie, Campus Benjamin Franklin, Hindenburgdamm 30, 12203, Berlin, Germany
- Lüken S
- Charité Universitätsmedizin Berlin, Klinik für Radiologie, Campus Benjamin Franklin, Hindenburgdamm 30, 12203, Berlin, Germany
- Geveshausen S
- Charité Universitätsmedizin Berlin, Klinik für Radiologie, Campus Benjamin Franklin, Hindenburgdamm 30, 12203, Berlin, Germany
- Hamm B
- Charité Universitätsmedizin Berlin, Klinik für Radiologie, Campus Benjamin Franklin, Hindenburgdamm 30, 12203, Berlin, Germany
- Niehues SM
- Charité Universitätsmedizin Berlin, Klinik für Radiologie, Campus Benjamin Franklin, Hindenburgdamm 30, 12203, Berlin, Germany
4. Novak A, Ather S, Gill A, Aylward P, Maskell G, Cowell GW, Espinosa Morgado AT, Duggan T, Keevill M, Gamble O, Akrama O, Belcher E, Taberham R, Hallifax R, Bahra J, Banerji A, Bailey J, James A, Ansaripour A, Spence N, Wrightson J, Jarral W, Barry S, Bhatti S, Astley K, Shadmaan A, Ghelman S, Baenen A, Oke J, Bloomfield C, Johnson H, Beggs M, Gleeson F. Evaluation of the impact of artificial intelligence-assisted image interpretation on the diagnostic performance of clinicians in identifying pneumothoraces on plain chest X-ray: a multi-case multi-reader study. Emerg Med J 2024:emermed-2023-213620. [PMID: 39009424] [DOI: 10.1136/emermed-2023-213620]
Abstract
BACKGROUND Artificial intelligence (AI)-assisted image interpretation is a fast-developing area of clinical innovation. Most research to date has focused on the performance of AI-assisted algorithms in comparison with that of radiologists rather than evaluating the algorithms' impact on the clinicians who often undertake initial image interpretation in routine clinical practice. This study assessed the impact of AI-assisted image interpretation on the diagnostic performance of frontline acute care clinicians for the detection of pneumothoraces (PTX). METHODS A multicentre blinded multi-case multi-reader study was conducted between October 2021 and January 2022. The online study recruited 18 clinician readers from six different clinical specialties, with differing levels of seniority, across four English hospitals. The study included 395 plain CXR images, 189 positive for PTX and 206 negative. The reference standard was the consensus opinion of two thoracic radiologists with a third acting as arbitrator. The General Electric Healthcare Critical Care Suite (GEHC CCS) PTX algorithm was applied to the final dataset. Readers individually interpreted the dataset without AI assistance, recording the presence or absence of a PTX and a confidence rating. Following a 'washout' period, this process was repeated including the AI output. RESULTS Analysis of the performance of the algorithm for detecting or ruling out a PTX revealed an overall AUROC of 0.939. Overall reader sensitivity increased by 11.4% (95% CI 4.8, 18.0, p=0.002), from 66.8% (95% CI 57.3, 76.2) unaided to 78.1% (95% CI 72.2, 84.0) aided; specificity increased from 93.9% (95% CI 90.9, 97.0) without AI to 95.8% (95% CI 93.7, 97.9, p=0.247) with AI. The junior reader subgroup showed the largest improvement at 21.7% (95% CI 10.9, 32.6), increasing from 56.0% (95% CI 37.7, 74.3) to 77.7% (95% CI 65.8, 89.7, p<0.01).
CONCLUSION The study indicates that AI-assisted image interpretation significantly enhances the diagnostic accuracy of clinicians in detecting PTX, particularly benefiting less experienced practitioners. While overall interpretation time remained unchanged, the use of AI improved diagnostic confidence and sensitivity, especially among junior clinicians. These findings underscore the potential of AI to support less skilled clinicians in acute care settings.
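The AUROC reported for the algorithm summarizes how well its scores rank PTX-positive images above negative ones. A minimal sketch of computing AUROC via its rank-statistic interpretation (probability that a random positive scores higher than a random negative, ties counting half) is shown below; the `auroc` helper is an illustrative assumption, not the study's analysis code.

```python
import numpy as np

def auroc(labels, scores):
    """AUROC via pairwise comparisons (Mann-Whitney U interpretation):
    the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # all positive/negative pairs; ties contribute half a win
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

The O(n²) pairwise form is fine for a few hundred images; rank-based formulas scale better for large datasets.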
Affiliation(s)
- Alex Novak
- Emergency Department, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Sarim Ather
- Radiology Department, Oxford University Hospitals, Oxford, UK
- Avneet Gill
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Peter Aylward
- Report and Image Quality Control (RAIQC), London, UK
- Giles Maskell
- Royal Cornwall Hospitals NHS Trust, Truro, Cornwall, UK
- Tom Duggan
- Buckinghamshire Healthcare NHS Trust, Amersham, UK
- Melissa Keevill
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Olivia Gamble
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Osama Akrama
- Emergency Department, Royal Berkshire NHS Foundation Trust, Reading, UK
- Rhona Taberham
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Rob Hallifax
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Jasdeep Bahra
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Jon Bailey
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Antonia James
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Ali Ansaripour
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Nathan Spence
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- John Wrightson
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Waqas Jarral
- Frimley Health NHS Foundation Trust, Frimley, UK
- Steven Barry
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Saher Bhatti
- Frimley Health NHS Foundation Trust, Frimley, UK
- Kerry Astley
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Amied Shadmaan
- GE Healthcare Diagnostic Imaging, Little Chalfont, Buckinghamshire, UK
- Jason Oke
- University of Oxford Greyfriars, Oxford, UK
- Mark Beggs
- University of Oxford, Oxford, Oxfordshire, UK
- Fergus Gleeson
- Radiology Department, Oxford University Hospitals, Oxford, UK
5. Strehlow M, Alvarez A, Blomkalns AL, Caretta-Wyer H, Gharahbaghian L, Imler D, Khan A, Lee M, Lobo V, Newberry JA, Riberia R, Sebok-Syer S, Shen S, Gisondi MA. Precision emergency medicine. Acad Emerg Med 2024. [PMID: 38940478] [DOI: 10.1111/acem.14962]
Abstract
BACKGROUND Precision health is a burgeoning scientific discipline that aims to incorporate individual variability in biological, behavioral, and social factors to develop personalized health solutions. To date, emergency medicine has not deeply engaged in the precision health movement. However, rapid advances in health technology, data science, and medical informatics offer new opportunities for emergency medicine to realize the promises of precision health. METHODS In this article, we conceptualize precision emergency medicine as an emerging paradigm and identify key drivers of its implementation into current and future clinical practice. We acknowledge important obstacles to the specialty-wide adoption of precision emergency medicine and offer solutions that conceive a successful path forward. RESULTS Precision emergency medicine is defined as the use of information and technology to deliver acute care effectively, efficiently, and authentically to individual patients and their communities. Key drivers and opportunities include leveraging human data, capitalizing on technology and digital tools, providing deliberate access to care, advancing population health, and reimagining provider education and roles. Overcoming challenges in equity, privacy, and cost is essential for success. We close with a call to action to proactively incorporate precision health into the clinical practice of emergency medicine, the training of future emergency physicians, and the research agenda of the specialty. CONCLUSIONS Precision emergency medicine leverages new technology and data-driven artificial intelligence to advance diagnostic testing, individualize patient care plans and therapeutics, and strategically refine the convergence of the health system and the community.
Affiliation(s)
- Matthew Strehlow
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Al'ai Alvarez
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Andra L Blomkalns
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Holly Caretta-Wyer
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Laleh Gharahbaghian
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Daniel Imler
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Ayesha Khan
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Moon Lee
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Viveta Lobo
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Jennifer A Newberry
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Ryan Riberia
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Stefanie Sebok-Syer
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Sam Shen
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
- Michael A Gisondi
- Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA
6. Lassalle L, Regnard NE, Ventre J, Marty V, Clovis L, Zhang Z, Nitche N, Guermazi A, Laredo JD. Automated weight-bearing foot measurements using an artificial intelligence-based software. Skeletal Radiol 2024. [PMID: 38880791] [DOI: 10.1007/s00256-024-04726-z]
Abstract
OBJECTIVE To assess the accuracy of an artificial intelligence (AI) software (BoneMetrics, Gleamer) in performing automated measurements on weight-bearing forefoot and lateral foot radiographs. METHODS Consecutive forefoot and lateral foot radiographs were retrospectively collected from three imaging institutions. Two senior musculoskeletal radiologists independently annotated key points to measure the hallux valgus, first-second metatarsal, and first-fifth metatarsal angles on forefoot radiographs and the talus-first metatarsal, medial arch, and calcaneal inclination angles on lateral foot radiographs. The ground truth was defined as the mean of their measurements. Statistical analysis included mean absolute error (MAE), bias assessed with Bland-Altman analysis between the ground truth and the AI prediction, and intraclass correlation coefficient (ICC) between the manual ratings. RESULTS Eighty forefoot radiographs were included (53 ± 17 years, 50 women), and 26 were excluded. Ninety-seven lateral foot radiographs were included (51 ± 20 years, 46 women), and 21 were excluded. MAE for the hallux valgus, first-second metatarsal, and first-fifth metatarsal angles on forefoot radiographs was 1.2° (95% CI [1; 1.4], bias = -0.04°, ICC = 0.98), 0.7° (95% CI [0.6; 0.9], bias = -0.19°, ICC = 0.91), and 0.9° (95% CI [0.7; 1.1], bias = 0.44°, ICC = 0.96), respectively. MAE for the talus-first metatarsal, medial arch, and calcaneal inclination angles on the lateral foot radiographs was 3.9° (95% CI [3.4; 4.5], bias = 0.61°, ICC = 0.88), 1.5° (95% CI [1.2; 1.8], bias = -0.18°, ICC = 0.95), and 1.0° (95% CI [0.8; 1.2], bias = 0.74°, ICC = 0.99), respectively. Bias and MAE between the ground truth and the AI prediction were low across all measurements. ICC between the two manual ratings was excellent, except for the talus-first metatarsal angle. CONCLUSION AI demonstrated potential for accurate and automated measurements on weight-bearing forefoot and lateral foot radiographs.
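The agreement metrics used in this study (MAE and Bland-Altman bias between reference and AI angles) can be sketched in a few lines. This is an illustrative reconstruction under the standard definitions, not the authors' code; function names are assumptions.

```python
import numpy as np

def mae(ground_truth, prediction):
    """Mean absolute error between reference and AI angles (degrees)."""
    gt = np.asarray(ground_truth, dtype=float)
    pred = np.asarray(prediction, dtype=float)
    return float(np.mean(np.abs(gt - pred)))

def bland_altman_bias(ground_truth, prediction):
    """Bland-Altman bias (mean difference, AI minus reference) and the
    95% limits of agreement (bias +/- 1.96 * SD of the differences)."""
    d = np.asarray(prediction, dtype=float) - np.asarray(ground_truth, dtype=float)
    bias = float(d.mean())
    half_width = 1.96 * float(d.std(ddof=1))
    return bias, (bias - half_width, bias + half_width)
```

The ground truth here would be the mean of the two radiologists' angle measurements, as in the study; the sign convention (AI minus reference) is an assumption.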
Affiliation(s)
- Louis Lassalle
- Réseau Imagerie Sud Francilien, Lieusaint, France.
- Clinique du Mousseau, Ramsay Santé, Evry, France.
- , Gleamer, Paris, France.
| | - Nor-Eddine Regnard
- Réseau Imagerie Sud Francilien, Lieusaint, France
- Clinique du Mousseau, Ramsay Santé, Evry, France
- , Gleamer, Paris, France
| | | | | | | | | | | | - Ali Guermazi
- Department of Radiology, Boston University School of Medicine, Boston, MA, USA
| | - Jean-Denis Laredo
- , Gleamer, Paris, France
- Service de Radiologie, Institut Mutualiste Montsouris, Paris, France
- Laboratoire (B3OA) de Biomécanique Et Biomatériaux Ostéo-Articulaires, Faculté de Médecine Paris-Cité, Paris, France
- Professeur Émérite d'Imagerie Médicale, Université Paris-Cité, Paris, France
| |
Collapse
|
7. Suresh V, Singh KK, Vaish E, Gurjar M, Ambuli Nambi A, Khulbe Y, Muzaffar S. Artificial Intelligence in the Intensive Care Unit: Current Evidence on an Inevitable Future Tool. Cureus 2024; 16:e59797. [PMID: 38846182] [PMCID: PMC11154024] [DOI: 10.7759/cureus.59797]
Abstract
Artificial intelligence (AI) is a technique that attempts to replicate human intelligence, analytical behavior, and decision-making ability. This includes machine learning, which involves the use of algorithms and statistical techniques to enhance the computer's ability to make decisions more accurately. Due to AI's ability to analyze, comprehend, and interpret considerable volumes of data, it has been increasingly used in the field of healthcare. In critical care medicine, where most patients require timely interventions due to the perilous nature of their conditions, AI's ability to monitor, analyze, and predict unfavorable outcomes is an invaluable asset. It can significantly improve the timeliness of interventions and prevent unfavorable outcomes, which is not always achievable otherwise, owing to the constrained human ability to multitask with optimum efficiency. AI has been increasingly implemented in intensive care units over the past several years. In addition to its advantageous applications, this article discusses its disadvantages, prospects, and the changes needed to train future critical care professionals. A comprehensive search of electronic databases was performed using relevant keywords, and data from articles pertinent to the topic were assimilated into this review article.
Affiliation(s)
- Vinay Suresh
- General Medicine and Surgery, King George's Medical University, Lucknow, IND
- Kaushal K Singh
- General Medicine, King George's Medical University, Lucknow, IND
- Esha Vaish
- Internal Medicine, Mount Sinai Morningside West, New York, USA
- Mohan Gurjar
- Critical Care Medicine, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, IND
- Yashita Khulbe
- General Medicine and Surgery, King George's Medical University, Lucknow, IND
- Syed Muzaffar
- Critical Care Medicine, King George's Medical University, Lucknow, IND
8. Jacques T, Cardot N, Ventre J, Demondion X, Cotten A. Commercially-available AI algorithm improves radiologists' sensitivity for wrist and hand fracture detection on X-ray, compared to a CT-based ground truth. Eur Radiol 2024; 34:2885-2894. [PMID: 37919408] [DOI: 10.1007/s00330-023-10380-1]
Abstract
OBJECTIVES Algorithms for fracture detection are spreading in clinical practice, but the use of an X-ray-only ground truth can induce bias in their evaluation. This study assessed radiologists' performance in detecting wrist and hand fractures on radiographs, using a commercially available algorithm, compared to a computed tomography (CT) ground truth. METHODS Post-traumatic hand and wrist CT and concomitant X-ray examinations were retrospectively gathered. Radiographs were labeled based on CT findings. The dataset was composed of 296 consecutive cases: 118 normal (39.9%) and 178 pathological (60.1%), with a total of 267 fractures visible on CT. Twenty-three radiologists with various levels of experience reviewed all radiographs without AI, then using it, blinded to the CT results. RESULTS Using AI improved radiologists' sensitivity (Se, 0.658 to 0.703, p < 0.0001) and negative predictive value (NPV, 0.585 to 0.618, p < 0.0001), without affecting their specificity (Sp, 0.885 vs 0.891, p = 0.91) or positive predictive value (PPV, 0.887 vs 0.899, p = 0.08). On the radiographic dataset, based on the CT ground truth, stand-alone AI performances were 0.771 (Se), 0.898 (Sp), 0.684 (NPV), 0.915 (PPV), and 0.764 (AUROC), which were lower than previously reported, suggesting a potential underestimation of the number of missed fractures in the AI literature. CONCLUSIONS AI enabled radiologists to improve their sensitivity and negative predictive value for wrist and hand fracture detection on radiographs, without affecting their specificity or positive predictive value, compared to a CT-based ground truth. Using CT as the gold standard for X-ray labels is innovative, leading to algorithm performance poorer than reported elsewhere, but probably closer to clinical reality.
CLINICAL RELEVANCE STATEMENT Using an AI algorithm significantly improved radiologists' sensitivity and negative predictive value in detecting wrist and hand fractures on radiographs, with ground truth labels based on CT findings.
KEY POINTS
• Using CT as a ground truth for labeling X-rays is new in the AI literature, and led to algorithm performance significantly poorer than reported elsewhere (AUROC: 0.764), but probably closer to clinical reality.
• AI enabled radiologists to significantly improve their sensitivity (+4.5%) and negative predictive value (+3.3%) for the detection of wrist and hand fractures on X-rays.
• There was no significant change in terms of specificity or positive predictive value.
Affiliation(s)
- Thibaut Jacques
- Department of Musculoskeletal Radiology, Lille University Hospital, Rue du Professeur Emile Laine, 59000, Lille, France
- IRIS Radiology - Clinique Lille Sud, SOS Hands and Fingers, 96 Rue Gustave Delory, 59810, Lesquin, France
- Nicolas Cardot
- Department of Musculoskeletal Radiology, Lille University Hospital, Rue du Professeur Emile Laine, 59000, Lille, France
- Xavier Demondion
- Department of Musculoskeletal Radiology, Lille University Hospital, Rue du Professeur Emile Laine, 59000, Lille, France
- Lille University School of Medicine, 59000, Lille, France
- Anne Cotten
- Department of Musculoskeletal Radiology, Lille University Hospital, Rue du Professeur Emile Laine, 59000, Lille, France
- Lille University School of Medicine, 59000, Lille, France
- MABLab - Marrow Adiposity and Bone Lab ULR4490, University of Lille, 59000, Lille, France
9. Hoppe BF, Rueckel J, Dikhtyar Y, Heimer M, Fink N, Sabel BO, Ricke J, Rudolph J, Cyran CC. Implementing Artificial Intelligence for Emergency Radiology Impacts Physicians' Knowledge and Perception: A Prospective Pre- and Post-Analysis. Invest Radiol 2024; 59:404-412. [PMID: 37843828] [DOI: 10.1097/rli.0000000000001034]
Abstract
PURPOSE The aim of this study was to evaluate the impact of implementing an artificial intelligence (AI) solution for emergency radiology into clinical routine on physicians' perception and knowledge. MATERIALS AND METHODS A prospective interventional survey was performed pre-implementation and 3 months post-implementation of an AI algorithm for fracture detection on radiographs in late 2022. Radiologists and traumatologists were asked about their knowledge and perception of AI on a 7-point Likert scale (-3, "strongly disagree"; +3, "strongly agree"). Self-generated identification codes allowed matching the same individuals pre- and post-intervention, and the Wilcoxon signed rank test was used for paired data. RESULTS A total of 47/71 matched participants completed both surveys (66% follow-up rate) and were eligible for analysis (34 radiologists [72%], 13 traumatologists [28%], 15 women [32%]; mean age, 34.8 ± 7.8 years). Post-intervention, agreement increased that AI "reduced missed findings" (1.28 [pre] vs 1.94 [post], P = 0.003) and made readers "safer" (1.21 vs 1.64, P = 0.048), but not "faster" (0.98 vs 1.21, P = 0.261). Disagreement grew that AI could "replace the radiological report" (-2.04 vs -2.34, P = 0.038), and self-reported knowledge about "clinical AI," its "chances," and its "risks" increased (0.40 vs 1.00, 1.21 vs 1.70, and 0.96 vs 1.34; all P's ≤ 0.028). Radiologists used AI results more frequently than traumatologists (P < 0.001) and rated benefits higher (all P's ≤ 0.038), whereas senior physicians were less likely to use AI or endorse its benefits (negative correlation with age, -0.35 to -0.30; all P's ≤ 0.046). CONCLUSIONS Implementing AI for emergency radiology into clinical routine has an educative aspect and underlines the concept of AI as a "second reader" that supports rather than replaces physicians.
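The paired pre/post comparison of Likert responses uses the Wilcoxon signed rank test. A minimal self-contained sketch (normal approximation, zero differences dropped) is shown below; the data are hypothetical and the helper name is an assumption. In practice one would typically use a library routine such as `scipy.stats.wilcoxon`.

```python
import math

def wilcoxon_signed_rank(pre, post):
    """Wilcoxon signed rank test for paired ordinal data.

    Drops zero differences, assigns midranks to tied |differences|, and
    uses the normal approximation for the two-sided p value.
    """
    d = [b - a for a, b in zip(pre, post) if b != a]
    n = len(d)
    if n == 0:
        return 0.0, 1.0
    # rank the absolute differences, averaging ranks over ties
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, x in zip(ranks, d) if x > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p

# Hypothetical paired Likert responses (-3..+3) from the same raters
# before and after AI deployment; not the study's actual data.
pre = [1, 2, 0, 1, 2, 1, 0, 2, 1, 1]
post = [2, 3, 1, 2, 3, 2, 1, 3, 2, 2]
w, p = wilcoxon_signed_rank(pre, post)
```

The normal approximation is rough for small n; an exact-distribution variant (as library implementations offer) would be preferable for the sample sizes in this study.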
Affiliation(s)
- Boj Friedrich Hoppe
- From the Department of Radiology, University Hospital, LMU Munich, Munich, Germany (B.F.H., J.Rueckel, Y.D., M.H., N.F., B.O.S., J.Ricke, J.Rudolph, C.C.C.); and Institute of Neuroradiology, University Hospital, LMU Munich, Munich, Germany (J.R.)
10. Hansen V, Jensen J, Kusk MW, Gerke O, Tromborg HB, Lysdahlgaard S. Deep learning performance compared to healthcare experts in detecting wrist fractures from radiographs: A systematic review and meta-analysis. Eur J Radiol 2024; 174:111399. [PMID: 38428318] [DOI: 10.1016/j.ejrad.2024.111399]
Abstract
OBJECTIVE To perform a systematic review and meta-analysis of the diagnostic accuracy of deep learning (DL) algorithms in the diagnosis of wrist fractures (WF) on plain wrist radiographs, taking healthcare expert consensus as the reference standard. METHODS Embase, Medline, PubMed, Scopus and Web of Science were searched from 1 Jan 2012 to 9 March 2023. Eligible studies included patients with wrist radiographs assessed for radial and ulnar fractures as the target condition, used DL algorithms based on convolutional neural networks (CNNs), and took healthcare expert consensus as the minimum reference standard. Studies were assessed with a modified QUADAS-2 tool, and a bivariate random-effects model was applied for the meta-analysis of diagnostic test accuracy data. RESULTS Our study was registered at PROSPERO (ID: CRD42023431398). We included 6 unique studies for meta-analysis, with a total of 33,026 radiographs. Compared with the reference standards of the included articles, CNN performance reached a summary sensitivity of 92% (95% CI: 80%-97%) and a summary specificity of 93% (95% CI: 76%-98%). The generalized bivariate I-squared statistic indicated considerable heterogeneity between the studies (81.90%). Four studies had one or more domains at high risk of bias, and two studies raised concerns regarding applicability. CONCLUSION The diagnostic accuracy of CNNs on wrist radiographs was comparable to that of healthcare experts in the investigation of WF. There is a need for studies with a robust reference standard, external dataset validation, and investigation of the diagnostic performance of healthcare experts aided by CNNs. CLINICAL RELEVANCE STATEMENT DL matches healthcare experts in diagnosing WF, which may benefit patient diagnosis.
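As a rough illustration of how summary estimates arise from per-study counts, here is a deliberately simplified, univariate stand-in for the bivariate random-effects model used above: fixed-effect inverse-variance pooling of study sensitivities on the logit scale. The study counts are invented, and the real bivariate model additionally models specificity and between-study correlation.

```python
import math

def pooled_logit(events, totals):
    """Pool proportions (e.g. per-study sensitivities) on the logit scale.
    Assumes 0 < e < n per study; a 0.5 continuity correction is common otherwise."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        p = e / n
        var = 1 / e + 1 / (n - e)           # approximate variance of logit(p)
        logits.append(math.log(p / (1 - p)))
        weights.append(1 / var)              # inverse-variance weight
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled))       # back-transform to a proportion

tp  = [90, 45, 180]    # true positives per study (hypothetical)
pos = [100, 50, 200]   # fracture-positive radiographs per study (hypothetical)
print(round(pooled_logit(tp, pos), 3))
```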
Affiliation(s)
- V Hansen
- Department of Radiology and Nuclear Medicine, Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark
- J Jensen
- Department of Radiology, Odense University Hospital, Odense, Denmark; Research and Innovation Unit of Radiology, University of Southern Denmark, Odense, Denmark
- M W Kusk
- Department of Radiology and Nuclear Medicine, Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark; Department of Regional Health Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark; Imaging Research Initiative Southwest (IRIS), Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark; Radiography and Diagnostic Imaging, School of Medicine, University College Dublin, Belfield 4, Dublin, Ireland
- O Gerke
- Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- H B Tromborg
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Department of Orthopedic Surgery, Odense University Hospital, Odense, Denmark
- S Lysdahlgaard
- Department of Radiology and Nuclear Medicine, Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark; Department of Regional Health Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark; Imaging Research Initiative Southwest (IRIS), Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark.
11
Fu T, Viswanathan V, Attia A, Zerbib-Attal E, Kosaraju V, Barger R, Vidal J, Bittencourt LK, Faraji N. Assessing the Potential of a Deep Learning Tool to Improve Fracture Detection by Radiologists and Emergency Physicians on Extremity Radiographs. Acad Radiol 2024; 31:1989-1999. [PMID: 37993303 DOI: 10.1016/j.acra.2023.10.042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2023] [Revised: 10/23/2023] [Accepted: 10/25/2023] [Indexed: 11/24/2023]
Abstract
RATIONALE AND OBJECTIVES To evaluate the standalone performance of a deep learning (DL) based fracture detection tool on extremity radiographs and assess the performance of radiologists and emergency physicians in identifying fractures of the extremities with and without the DL aid. MATERIALS AND METHODS The DL tool was previously developed using 132,000 appendicular skeletal radiographs divided into 87% training, 11% validation, and 2% test sets. Standalone performance was evaluated on 2626 de-identified radiographs from a single institution in Ohio, including at least 140 exams per body region. Consensus from three US board-certified musculoskeletal (MSK) radiologists served as ground truth. A multi-reader retrospective study was performed in which 24 readers (eight each of emergency physicians, non-MSK radiologists, and MSK radiologists) identified fractures in 186 cases during two independent sessions with and without DL aid, separated by a one-month washout period. The accuracy (area under the receiver operating characteristic curve), sensitivity, specificity, and reading time were compared with and without model aid. RESULTS The model achieved a standalone accuracy of 0.986, sensitivity of 0.987, and specificity of 0.885, and high accuracy (> 0.95) across stratifications by body part, age, gender, radiographic views, and scanner type. With DL aid, reader accuracy increased by 0.047 (95% CI: 0.034, 0.061; p = 0.004) and sensitivity significantly improved from 0.865 (95% CI: 0.848, 0.881) to 0.955 (95% CI: 0.944, 0.964). Average reading time was shortened by 7.1 s (27%) per exam. When stratified by physician type, this improvement was greater for emergency physicians and non-MSK radiologists. CONCLUSION The DL tool demonstrated high standalone accuracy, aided physician diagnostic accuracy, and decreased interpretation time.
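A common way to attach confidence intervals to reader metrics such as accuracy in a multi-reader study like the one above is a percentile bootstrap over per-case correctness. This is a sketch under that assumption, with simulated 0/1 outcomes rather than the study's data:

```python
import random

def bootstrap_accuracy_ci(correct, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for accuracy from a list of 0/1 correctness flags."""
    rng = random.Random(seed)
    n = len(correct)
    stats = sorted(
        sum(rng.choice(correct) for _ in range(n)) / n  # one resampled accuracy
        for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# 186 cases, ~90% read correctly (simulated reader-with-AI session)
rng = random.Random(42)
correct = [1 if rng.random() < 0.9 else 0 for _ in range(186)]
lo, hi = bootstrap_accuracy_ci(correct)
print(f"accuracy CI ≈ ({lo:.3f}, {hi:.3f})")
```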
Affiliation(s)
- Tianyuan Fu
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA (T.F., V.V., V.K., R.B., L.K.B., N.F.).
- Vidya Viswanathan
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA (T.F., V.V., V.K., R.B., L.K.B., N.F.)
- Alexandre Attia
- Azmed, 10 Rue d'Uzès, 75002, Paris, France (A.A., E.Z.A., J.V.)
- Vijaya Kosaraju
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA (T.F., V.V., V.K., R.B., L.K.B., N.F.)
- Richard Barger
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA (T.F., V.V., V.K., R.B., L.K.B., N.F.)
- Julien Vidal
- Azmed, 10 Rue d'Uzès, 75002, Paris, France (A.A., E.Z.A., J.V.)
- Leonardo K Bittencourt
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA (T.F., V.V., V.K., R.B., L.K.B., N.F.)
- Navid Faraji
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA (T.F., V.V., V.K., R.B., L.K.B., N.F.)
12
Lechien JR. Editorial letter: Artificial Intelligence can be used to improve the humanity of care. Eur Arch Otorhinolaryngol 2024:10.1007/s00405-024-08691-0. [PMID: 38687377 DOI: 10.1007/s00405-024-08691-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2024] [Accepted: 04/15/2024] [Indexed: 05/02/2024]
Affiliation(s)
- Jerome R Lechien
- Department of Laryngology and Broncho-Esophagology, EpiCURA Hospital, Anatomy Department of University of Mons, Mons, Belgium.
- Department of Otolaryngology-Head and Neck Surgery, Foch Hospital, University of Paris Saclay, Paris, France.
- Phonetics and Phonology Laboratory, UMR 7018 CNRS, Université Sorbonne Nouvelle/Paris 3, Paris, France.
13
Dell’Aria A, Tack D, Saddiki N, Makdoud S, Alexiou J, De Hemptinne FX, Berkenbaum I, Neugroschl C, Tacelli N. Radiographic Detection of Post-Traumatic Bone Fractures: Contribution of Artificial Intelligence Software to the Analysis of Senior and Junior Radiologists. J Belg Soc Radiol 2024; 108:44. [PMID: 38680721 PMCID: PMC11049681 DOI: 10.5334/jbsr.3574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2024] [Accepted: 04/08/2024] [Indexed: 05/01/2024] Open
Abstract
Objectives The aims of this study were: (a) to evaluate the standalone performance of an artificial intelligence (AI) software package (BoneView Trauma, Gleamer) for the detection of post-traumatic bone fractures on radiographs; (b) to evaluate its performance when used by two radiologists (an osteoarticular senior and a junior); and (c) to determine for whom AI would be most helpful. Materials and Methods Within 14 days of a trauma, 101 consecutive patients underwent radiographic examination of the upper or lower limbs. The definitive diagnosis for identifying fractures was established by: (a) radio-clinical consensus between the radiologist on call who analyzed the images and the orthopedist (Group 1); or (b) cone-beam computed tomography (CBCT) exploration of the area of interest in case of doubt or absence of consensus (Group 2). Independently of this diagnosis, for both groups the radiographic images were analyzed separately by two radiologists (osteoarticular senior: SR; junior: JR), first without and thereafter with the results of AI. Results AI performed better than the radiologists in detecting common fractures (Group 1), but not subtle fractures (Group 2). In association with AI, both radiologists increased their overall performance in both groups, and this increase was significantly higher for the JR (p < 0.05). Conclusion AI is reliable for common radiographic fracture identification and is a useful learning tool for radiologists in training. However, the software's overall performance does not exceed that of an osteoarticular senior radiologist, particularly in the case of subtle lesions.
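A standard test for the paired without-AI vs with-AI comparison reported above is McNemar's test on discordant cases; the abstract does not state which test was used, so this is an illustrative sketch with invented counts:

```python
def mcnemar(b, c):
    """Continuity-corrected McNemar chi-square statistic (1 degree of freedom).
    b: cases the reader got right only WITH AI; c: right only WITHOUT AI."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical junior reader: 14 cases gained with AI, 3 lost
chi2 = mcnemar(b=14, c=3)
print(round(chi2, 2))  # compare against the 1-df critical value 3.84 (p = 0.05)
```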
Affiliation(s)
- Andrea Dell’Aria
- Department of Radiology, Hôpitaux Iris-Sud (HIS), Université libre de Bruxelles (ULB), Brussels, Belgium
- Denis Tack
- Department of Radiology, Centre Hospitalier EpiCURA, Université libre de Bruxelles (ULB), Brussels, Belgium
- Najat Saddiki
- Department of Radiology, Hôpitaux Iris-Sud (HIS), Université libre de Bruxelles (ULB), Brussels, Belgium
- Sonia Makdoud
- Department of Radiology, Hôpitaux Iris-Sud (HIS), Université d’Alger 1- Faculté de Médecine d’Alger-Ziania, Algiers, Algeria
- Jean Alexiou
- Department of Radiology, Hôpitaux Iris-Sud (HIS), Université libre de Bruxelles (ULB), Brussels, Belgium
- Ivan Berkenbaum
- Department of Orthopaedic Surgery, Hôpitaux Iris-Sud (HIS), Université libre de Bruxelles (ULB), Brussels, Belgium
- Carine Neugroschl
- Department of Radiology, Hôpitaux Iris-Sud (HIS), Université libre de Bruxelles (ULB), Brussels, Belgium
- Nunzia Tacelli
- Department of Radiology, Hôpitaux Iris-Sud (HIS), Université libre de Bruxelles (ULB), Brussels, Belgium
14
Wilhelm NJ, von Schacky CE, Lindner FJ, Feucht MJ, Ehmann Y, Pogorzelski J, Haddadin S, Neumann J, Hinterwimmer F, von Eisenhart-Rothe R, Jung M, Russe MF, Izadpanah K, Siebenlist S, Burgkart R, Rupp MC. Multicentric development and validation of a multi-scale and multi-task deep learning model for comprehensive lower extremity alignment analysis. Artif Intell Med 2024; 150:102843. [PMID: 38553152 DOI: 10.1016/j.artmed.2024.102843] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2023] [Revised: 03/11/2024] [Accepted: 03/11/2024] [Indexed: 04/02/2024]
Abstract
Osteoarthritis of the knee, a widespread cause of knee disability, is commonly treated in orthopedics due to its rising prevalence. Lower extremity misalignment, pivotal in knee injury etiology and management, necessitates comprehensive mechanical alignment evaluation via frequently requested weight-bearing long leg radiographs (LLR). Despite LLR's routine use, current analysis techniques are error-prone and time-consuming. To address this, we conducted a multicentric study to develop and validate a deep learning (DL) model for fully automated leg alignment assessment on anterior-posterior LLR, targeting enhanced reliability and efficiency. The DL model, developed using 594 patients' LLR and a 60%/10%/30% data split for training, validation, and testing, executed alignment analyses via a multi-step process, employing a detection network and nine specialized networks. It was designed to assess all vital anatomical and mechanical parameters for standard clinical leg deformity analysis and preoperative planning. Accuracy, reliability, and assessment duration were compared with three specialized orthopedic surgeons across two distinct institutional datasets (136 and 143 radiographs). The algorithm exhibited performance equivalent to the surgeons in terms of alignment accuracy (DL: 0.21 ± 0.18° to 1.06 ± 1.3° vs. OS: 0.21 ± 0.16° to 1.72 ± 1.96°), interrater reliability (ICC DL: 0.90 ± 0.05 to 1.0 ± 0.0 vs. ICC OS: 0.90 ± 0.03 to 1.0 ± 0.0), and clinically acceptable accuracy (DL: 53.9%-100% vs. OS: 30.8%-100%). Further, automated analysis significantly reduced analysis time compared to manual annotation (DL: 22 ± 0.6 s vs. OS: 101.7 ± 7 s, p ≤ 0.01). By demonstrating that our algorithm not only matches the precision of expert surgeons but also significantly outpaces them in both speed and consistency of measurements, our research underscores a pivotal advancement in harnessing AI to enhance clinical efficiency and decision-making in orthopedics.
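The "clinically acceptable accuracy" reported above can be read as the share of measurements falling within a tolerance of the reference angle; a minimal sketch under that assumption, with invented angle values and a hypothetical 2° tolerance:

```python
def acceptable_rate(measured, reference, tol_deg=2.0):
    """Fraction of measurements within tol_deg of the paired reference value."""
    within = sum(abs(m - r) <= tol_deg for m, r in zip(measured, reference))
    return within / len(measured)

ref = [178.5, 181.2, 176.0, 179.9]   # reference alignment angles in degrees (invented)
dl  = [178.7, 181.0, 178.5, 179.8]   # model measurements (invented; third is off by 2.5°)
print(acceptable_rate(dl, ref))
```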
Affiliation(s)
- Nikolas J Wilhelm
- Department of Orthopedics and Sports Orthopedics, Klinikum rechts der Isar, School of Medicine, Munich, Germany; Munich Institute of Robotics and Machine Intelligence, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany.
- Claudio E von Schacky
- Department of Radiology, Klinikum rechts der Isar, School of Medicine, Munich, Germany
- Felix J Lindner
- Department of Orthopedic Sports Medicine, Klinikum rechts der Isar, School of Medicine, Munich, Germany
- Matthias J Feucht
- Department of Orthopedics and Trauma Surgery, Medical Center, Faculty of Medicine, Albert-Ludwigs-University of Freiburg, Freiburg, Germany; Orthopedic Clinic Paulinenhilfe, Diakonie-Hospital, Stuttgart, Germany
- Yannick Ehmann
- Department of Orthopedic Sports Medicine, Klinikum rechts der Isar, School of Medicine, Munich, Germany
- Jonas Pogorzelski
- Department of Orthopedic Sports Medicine, Klinikum rechts der Isar, School of Medicine, Munich, Germany
- Sami Haddadin
- Munich Institute of Robotics and Machine Intelligence, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Jan Neumann
- Department of Radiology, Klinikum rechts der Isar, School of Medicine, Munich, Germany
- Florian Hinterwimmer
- Department of Orthopedics and Sports Orthopedics, Klinikum rechts der Isar, School of Medicine, Munich, Germany
- Rüdiger von Eisenhart-Rothe
- Department of Orthopedics and Sports Orthopedics, Klinikum rechts der Isar, School of Medicine, Munich, Germany
- Matthias Jung
- Department of Radiology, Medical Center, Faculty of Medicine, Albert-Ludwigs-University of Freiburg, Freiburg, Germany
- Maximilian F Russe
- Department of Radiology, Medical Center, Faculty of Medicine, Albert-Ludwigs-University of Freiburg, Freiburg, Germany
- Kaywan Izadpanah
- Department of Radiology, Medical Center, Faculty of Medicine, Albert-Ludwigs-University of Freiburg, Freiburg, Germany
- Sebastian Siebenlist
- Department of Orthopedic Sports Medicine, Klinikum rechts der Isar, School of Medicine, Munich, Germany
- Rainer Burgkart
- Department of Orthopedics and Sports Orthopedics, Klinikum rechts der Isar, School of Medicine, Munich, Germany
- Marco-Christopher Rupp
- Department of Orthopedic Sports Medicine, Klinikum rechts der Isar, School of Medicine, Munich, Germany
15
Russe MF, Rebmann P, Tran PH, Kellner E, Reisert M, Bamberg F, Kotter E, Kim S. AI-based X-ray fracture analysis of the distal radius: accuracy between representative classification, detection and segmentation deep learning models for clinical practice. BMJ Open 2024; 14:e076954. [PMID: 38262641 PMCID: PMC10823998 DOI: 10.1136/bmjopen-2023-076954] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Accepted: 12/21/2023] [Indexed: 01/25/2024] Open
Abstract
OBJECTIVES To aid in selecting the optimal artificial intelligence (AI) solution for clinical application, we directly compared the performance of selected representative custom-trained or commercial classification, detection and segmentation models for fracture detection on musculoskeletal radiographs of the distal radius by aligning their outputs. DESIGN AND SETTING This single-centre retrospective study was conducted on a random subset of emergency department radiographs of the distal radius from 2008 to 2018 in Germany. MATERIALS AND METHODS An image set was created to be compatible with training and testing classification and segmentation models by annotating examinations for fractures and overlaying fracture masks, if applicable. Representative classification and segmentation models were trained on 80% of the data. After output binarisation, their derived fracture detection performance, as well as that of a standard commercially available solution, was compared on the remaining X-rays (20%) using mainly accuracy and area under the receiver operating characteristic curve (AUROC). RESULTS A total of 2856 examinations with 712 (24.9%) fractures were included in the analysis. Accuracies reached up to 0.97 for the classification model, 0.94 for the segmentation model and 0.95 for BoneView. Cohen's kappa was at least 0.80 in pairwise comparisons, while Fleiss' kappa was 0.83 for all models. Fracture predictions were visualised with all three methods at different levels of detail, ranging from a downsampled image region for classification, over a bounding box for detection, to single pixel-level delineation for segmentation. CONCLUSIONS All three investigated approaches reached high performance for detection of distal radius fractures with simple preprocessing and postprocessing protocols on the custom-trained models. Despite their underlying structural differences, selection of one's fracture analysis AI tool in the frame of this study reduces to the desired flavour of automation: automated classification, AI-assisted manual fracture reading or minimised false negatives.
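The pairwise agreement above is summarised with Cohen's kappa on the binarised fracture/no-fracture outputs. A minimal implementation (the prediction vectors are invented):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two label sequences of equal length."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    labels = set(a) | set(b)
    # chance agreement from each rater's marginal label frequencies
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

clf = [1, 1, 0, 0, 1, 0, 0, 0]   # classification model, binarised (hypothetical)
seg = [1, 1, 0, 0, 1, 0, 0, 1]   # segmentation model, binarised (hypothetical)
print(round(cohens_kappa(clf, seg), 2))
```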
Affiliation(s)
- Maximilian Frederik Russe
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Philipp Rebmann
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Phuong Hien Tran
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Elias Kellner
- Department of Medical Physics, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Marco Reisert
- Department of Medical Physics, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Suam Kim
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
16
Pham TD, Holmes SB, Coulthard P. A review on artificial intelligence for the diagnosis of fractures in facial trauma imaging. Front Artif Intell 2024; 6:1278529. [PMID: 38249794 PMCID: PMC10797131 DOI: 10.3389/frai.2023.1278529] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2023] [Accepted: 12/11/2023] [Indexed: 01/23/2024] Open
Abstract
Patients with facial trauma may suffer from injuries such as broken bones, bleeding, swelling, bruising, lacerations, burns, and deformity of the face. Common causes of facial-bone fractures are road accidents, violence, and sports injuries. Surgery is needed if, based on radiological findings, the trauma patient would otherwise be deprived of normal functioning or subject to facial deformity. Although image reading by radiologists is useful for evaluating suspected facial fractures, human-based diagnostics faces certain challenges. Artificial intelligence (AI) is making a quantum leap in radiology, producing significant improvements in reports and workflows. Here, an updated literature review is presented on the impact of AI in facial trauma, with special reference to fracture detection in radiology. The purpose is to gain insight into current developments and the demand for future research in facial trauma. This review also discusses limitations to be overcome and important open issues for investigation in order to make AI applications to facial trauma more effective and realistic in practical settings. The publications selected for review were based on their clinical significance, journal metrics, and journal indexing.
Affiliation(s)
- Tuan D. Pham
- Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
17
Guermazi A, Omoumi P, Tordjman M, Fritz J, Kijowski R, Regnard NE, Carrino J, Kahn CE, Knoll F, Rueckert D, Roemer FW, Hayashi D. How AI May Transform Musculoskeletal Imaging. Radiology 2024; 310:e230764. [PMID: 38165245 PMCID: PMC10831478 DOI: 10.1148/radiol.230764] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2023] [Revised: 06/18/2023] [Accepted: 07/11/2023] [Indexed: 01/03/2024]
Abstract
While musculoskeletal imaging volumes are increasing, there is a relative shortage of subspecialized musculoskeletal radiologists to interpret the studies. Will artificial intelligence (AI) be the solution? For AI to be the solution, the wide implementation of AI-supported data acquisition methods in clinical practice requires establishing trusted and reliable results. This implementation will demand close collaboration between core AI researchers and clinical radiologists. Upon successful clinical implementation, a wide variety of AI-based tools can improve the musculoskeletal radiologist's workflow by triaging imaging examinations, helping with image interpretation, and decreasing the reporting time. Additional AI applications may also be helpful for business, education, and research purposes if successfully integrated into the daily practice of musculoskeletal radiology. The question is not whether AI will replace radiologists, but rather how musculoskeletal radiologists can take advantage of AI to enhance their expert capabilities.
Affiliation(s)
- Ali Guermazi
- From the Department of Radiology, Boston University School of Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.); Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu Hospital and University Paris Cité, Paris, France (M.T.); Department of Radiology, New York University Grossman School of Medicine, New York, NY (J.F., R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.); Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell Medicine, New York, NY (J.C.); Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.); Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and Radiology (F.W.R.), Universitätsklinikum Erlangen & Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany (F.K.); School of Medicine & Computation, Information and Technology Klinikum rechts der Isar, Technical University Munich, München, Germany (D.R.); Department of Computing, Imperial College London, London, England (D.R.); and Department of Radiology, Tufts Medical Center, Tufts University School of Medicine, Boston, Mass (D.H.)
- Patrick Omoumi
- Mickael Tordjman
- Jan Fritz
- Richard Kijowski
- Nor-Eddine Regnard
| | - John Carrino
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - Charles E. Kahn
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - Florian Knoll
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - Daniel Rueckert
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - Frank W. Roemer
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - Daichi Hayashi
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| |
Collapse
|
18
|
Pauling C, Kanber B, Arthurs OJ, Shelmerdine SC. Commercially available artificial intelligence tools for fracture detection: the evidence. BJR Open 2024; 6:tzad005. [PMID: 38352182 PMCID: PMC10860511 DOI: 10.1093/bjro/tzad005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 09/20/2023] [Accepted: 09/30/2023] [Indexed: 02/16/2024] Open
Abstract
Missed fractures are a costly healthcare issue: they negatively impact patients' lives, leading to potential long-term disability and time off work, and they are responsible for high medicolegal disbursements that could otherwise be used to improve other healthcare services. Fractures overlooked in children are particularly concerning, as opportunities for safeguarding may be missed. Assistance from artificial intelligence (AI) in interpreting medical images may offer a possible solution for improving patient care, and several commercial AI tools are now available for radiology workflow implementation. However, information regarding their development, the evidence for their performance and validation, and their intended target population is not always clear, yet is vital when evaluating a potential AI solution for implementation. In this article, we review the range of available products utilizing AI for fracture detection (in both adults and children) and summarize the evidence, or lack thereof, behind their performance. This will allow others to make better informed decisions when deciding which product to procure for their specific clinical requirements.
Collapse
Affiliation(s)
- Cato Pauling
- UCL Great Ormond Street Institute of Child Health, University College London, London WC1E 6BT, United Kingdom
| | - Baris Kanber
- Queen Square Multiple Sclerosis Centre, Department of Neuroinflammation, University College London (UCL) Queen Square Institute of Neurology, Faculty of Brain Sciences, University College London, London WC1N 3BG, United Kingdom
- Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom
| | - Owen J Arthurs
- UCL Great Ormond Street Institute of Child Health, University College London, London WC1E 6BT, United Kingdom
- Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, London WC1N 3JH, United Kingdom
- NIHR Great Ormond Street Hospital Biomedical Research Centre, Bloomsbury, London WC1N 1EH, United Kingdom
| | - Susan C Shelmerdine
- UCL Great Ormond Street Institute of Child Health, University College London, London WC1E 6BT, United Kingdom
- Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, London WC1N 3JH, United Kingdom
- NIHR Great Ormond Street Hospital Biomedical Research Centre, Bloomsbury, London WC1N 1EH, United Kingdom
| |
Collapse
|
19
|
Rosa F, Buccicardi D, Romano A, Borda F, D’Auria MC, Gastaldo A. Artificial intelligence and pelvic fracture diagnosis on X-rays: a preliminary study on performance, workflow integration and radiologists' feedback assessment in a spoke emergency hospital. Eur J Radiol Open 2023; 11:100504. [PMID: 37484978 PMCID: PMC10359726 DOI: 10.1016/j.ejro.2023.100504] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 06/20/2023] [Accepted: 06/23/2023] [Indexed: 07/25/2023] Open
Abstract
Purpose The aim of our study was to evaluate artificial intelligence (AI) support in pelvic fracture diagnosis on X-rays, focusing on performance, workflow integration, and radiologists' feedback in a spoke emergency hospital. Materials and methods Between August and November 2021, a total of 235 sites of fracture or suspected fracture were evaluated and enrolled in the prospective study. The radiologist's specificity, sensitivity, accuracy, and positive and negative predictive values were compared with those of the AI. Cohen's kappa was used to calculate the agreement between the AI and the radiologist. We also reviewed the AI workflow integration process, focusing on potential issues, and assessed radiologists' opinions on AI via a survey. Results The radiologist's accuracy, sensitivity, and specificity were better than those of the AI, but the McNemar test demonstrated no statistically significant difference between the AI's and the radiologist's performance (p = 0.32). The calculated Cohen's kappa was 0.64. Conclusion Contrary to expectations, our preliminary results did not demonstrate a real improvement in patient outcome or in reporting time, but they did demonstrate the AI's high NPV (94.62%) and its non-inferiority to radiologist performance. Moreover, the commercially available AI algorithm used in our study automatically learns from data, so we expect a progressive performance improvement. AI could be considered a promising tool to rule out fractures (especially when used as a "second reader") and to prioritize positive cases, particularly in high-workload scenarios (ED, night shifts), but further research is needed to evaluate its real impact on clinical practice.
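The two agreement statistics reported in this abstract (Cohen's kappa between AI and radiologist, and McNemar's test on their discordant calls) can be sketched with standard-library Python. The helper names and all counts below are illustrative assumptions, not the study's data.

```python
from math import erf, sqrt

def cohen_kappa(a, b):
    """Cohen's kappa for two binary raters given parallel lists of 0/1 labels."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n               # each rater's "positive" rate
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)          # agreement expected by chance
    return (po - pe) / (1 - pe)

def mcnemar_p(b, c):
    """Two-sided McNemar test (normal approximation with continuity
    correction) from the two discordant-pair counts b and c."""
    if b + c == 0:
        return 1.0
    z = max(0.0, abs(b - c) - 1) / sqrt(b + c)
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2))))  # 2 * P(Z > z)
```

With balanced discordant pairs (e.g. `mcnemar_p(5, 5)`) the p-value is 1.0, consistent with the study's finding of no significant AI-versus-radiologist difference.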
Collapse
Affiliation(s)
- Francesca Rosa
- Diagnostic Imaging Department, San Paolo Hospital, ASL 2, via Genova 30, Savona, Italy
- Italian Society of Medical and Interventional Radiology (SIRM), SIRM Foundation, Milan, Italy
| | - Duccio Buccicardi
- Diagnostic Imaging Department, San Paolo Hospital, ASL 2, via Genova 30, Savona, Italy
- Italian Society of Medical and Interventional Radiology (SIRM), SIRM Foundation, Milan, Italy
| | - Adolfo Romano
- Diagnostic Imaging Department, San Paolo Hospital, ASL 2, via Genova 30, Savona, Italy
| | - Fabio Borda
- Department of Health Sciences (DISSAL) – Radiology Section, University of Genoa, 16132 Genoa, Italy
| | - Maria Chiara D’Auria
- Diagnostic Imaging Department, San Paolo Hospital, ASL 2, via Genova 30, Savona, Italy
| | - Alessandro Gastaldo
- Diagnostic Imaging Department, San Paolo Hospital, ASL 2, via Genova 30, Savona, Italy
| |
Collapse
|
20
|
Bennani S, Regnard NE, Ventre J, Lassalle L, Nguyen T, Ducarouge A, Dargent L, Guillo E, Gouhier E, Zaimi SH, Canniff E, Malandrin C, Khafagy P, Koulakian H, Revel MP, Chassagnon G. Using AI to Improve Radiologist Performance in Detection of Abnormalities on Chest Radiographs. Radiology 2023; 309:e230860. [PMID: 38085079 DOI: 10.1148/radiol.230860] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Background Chest radiography remains the most common radiologic examination, and interpretation of its results can be difficult. Purpose To explore the potential benefit of artificial intelligence (AI) assistance in the detection of thoracic abnormalities on chest radiographs by evaluating the performance of radiologists with different levels of expertise, with and without AI assistance. Materials and Methods Patients who underwent both chest radiography and thoracic CT within 72 hours between January 2010 and December 2020 in a French public hospital were screened retrospectively. Radiographs were randomly included until reaching 500 radiographs, with about 50% of radiographs having abnormal findings. A senior thoracic radiologist annotated the radiographs for five abnormalities (pneumothorax, pleural effusion, consolidation, mediastinal and hilar mass, lung nodule) based on the corresponding CT results (ground truth). A total of 12 readers (four thoracic radiologists, four general radiologists, four radiology residents) read half the radiographs without AI and half the radiographs with AI (ChestView; Gleamer). Changes in sensitivity and specificity were measured using paired t tests. Results The study included 500 patients (mean age, 54 years ± 19 [SD]; 261 female, 239 male), with 522 abnormalities visible on 241 radiographs. On average, for all readers, AI use resulted in an absolute increase in sensitivity of 26% (95% CI: 20, 32), 14% (95% CI: 11, 17), 12% (95% CI: 10, 14), 8.5% (95% CI: 6, 11), and 5.9% (95% CI: 4, 8) for pneumothorax, consolidation, nodule, pleural effusion, and mediastinal and hilar mass, respectively (P < .001). Specificity increased with AI assistance (3.9% [95% CI: 3.2, 4.6], 3.7% [95% CI: 3, 4.4], 2.9% [95% CI: 2.3, 3.5], and 2.1% [95% CI: 1.6, 2.6] for pleural effusion, mediastinal and hilar mass, consolidation, and nodule, respectively), except in the diagnosis of pneumothorax (-0.2%; 95% CI: -0.36, -0.04; P = .01). The mean reading time was 81 seconds without AI versus 56 seconds with AI (31% decrease, P < .001). Conclusion AI-assisted chest radiography interpretation resulted in absolute increases in sensitivity for all radiologists of various levels of expertise and reduced the reading times; specificity increased with AI, except in the diagnosis of pneumothorax. © RSNA, 2023 Supplemental material is available for this article.
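The per-reader comparison described in this abstract (each reader's sensitivity without versus with AI, tested with a paired t test) can be sketched as follows. The four readers' sensitivities here are made-up numbers for illustration, not values from the study.

```python
from math import sqrt

def sensitivity(tp, fn):
    """Fraction of truly abnormal cases the reader flagged."""
    return tp / (tp + fn)

def paired_t(diffs):
    """t statistic for H0: mean paired difference = 0 (df = n - 1)."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / sqrt(var / n)

# Hypothetical sensitivities for four readers, without vs with AI assistance:
without_ai = [0.62, 0.70, 0.58, 0.66]
with_ai = [0.80, 0.84, 0.79, 0.82]
diffs = [w - o for w, o in zip(with_ai, without_ai)]
t = paired_t(diffs)  # large t => consistent per-reader improvement
```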
Collapse
Affiliation(s)
- Souhail Bennani
- From the Department of Thoracic Imaging, Cochin Hospital, AP-HP, 27 Rue du Faubourg Saint-Jacques, Paris 75014, France (S.B., L.D., E. Guillo, E. Gouhier, S.H.Z., E.C., M.P.R., G.C.); Gleamer, Paris, France (S.B., N.E.R., J.V., L.L., T.N., A.D.); Réseau d'Imagerie Sud Francilien, Lieusant, France (N.E.R., L.L., C.M.); Department of Pediatric Radiology, Armand Trousseau Hospital, AP-HP, Paris, France (T.N.); HFR Fribourg, Fribourg, Switzerland (P.K.); and Centre d'Imagerie Médicale de l'Ouest Parisien, Paris, France (H.K.)
| | - Nor-Eddine Regnard
| | - Jeanne Ventre
| | - Louis Lassalle
| | - Toan Nguyen
| | - Alexis Ducarouge
| | - Lucas Dargent
| | - Enora Guillo
| | - Elodie Gouhier
| | - Sophie-Hélène Zaimi
| | - Emma Canniff
| | - Cécile Malandrin
| | - Philippe Khafagy
| | - Hasmik Koulakian
| | - Marie-Pierre Revel
| | - Guillaume Chassagnon
| |
Collapse
|
21
|
Keller G, Rachunek K, Springer F, Kraus M. Evaluation of a newly designed deep learning-based algorithm for automated assessment of scapholunate distance in wrist radiography as a surrogate parameter for scapholunate ligament rupture and the correlation with arthroscopy. LA RADIOLOGIA MEDICA 2023; 128:1535-1541. [PMID: 37726593 PMCID: PMC10700195 DOI: 10.1007/s11547-023-01720-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Accepted: 09/04/2023] [Indexed: 09/21/2023]
Abstract
PURPOSE Undiagnosed or mistreated scapholunate ligament (SL) tears are a frequent cause of degenerative wrist arthritis. A newly developed deep learning (DL)-based automated assessment of the SL distance on radiographs may support clinicians in initial image interpretation. MATERIALS AND METHODS A pre-trained DL algorithm was fine-tuned on static and dynamic dorsopalmar wrist radiographs (training data set, n = 201) for the automated assessment of the SL distance. The DL algorithm was then evaluated (evaluation data set, n = 364 patients with n = 1604 radiographs), and its results were correlated with those of an experienced human reader and with arthroscopic findings. RESULTS The evaluation data set comprised arthroscopically diagnosed SL insufficiency according to Geissler's stages 0-4 (56.5%, 2.5%, 5.5%, 7.5%, 28.0%). The diagnostic accuracy of the DL algorithm on dorsopalmar radiographs regarding SL integrity was close to that of the human reader (e.g., differentiation of Geissler's stages ≤ 2 versus > 2 with a sensitivity of 74% and a specificity of 78%, compared with 77% and 80%), with a correlation coefficient of 0.81 (P < 0.01). CONCLUSION A DL algorithm like this might become a valuable tool supporting clinicians' initial decision making on radiography regarding SL integrity and the consequent triage for further patient management.
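The correlation reported in this abstract (algorithm versus human-reader SL-distance measurements) is the kind of statistic computed as below; the measurement values here are invented for illustration, and the study does not specify which correlation coefficient was used, so a Pearson coefficient is assumed.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # covariance term
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Hypothetical SL-distance measurements (mm): algorithm vs human reader
algo = [2.1, 3.4, 2.8, 5.0, 4.2, 2.5]
reader = [2.0, 3.6, 2.6, 4.8, 4.5, 2.4]
r = pearson_r(algo, reader)  # close agreement => r near 1
```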
Collapse
Affiliation(s)
- Gabriel Keller
- Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, Eberhard Karls University Tübingen, Hoppe-Seyler-Str. 3, 72076, Tübingen, Germany.
- Department of Diagnostic Radiology, BG Trauma Center Tübingen, Eberhard Karls University Tübingen, Tübingen, Germany.
| | - Katarzyna Rachunek
- Department of Hand, Plastic, Reconstructive and Burn Surgery, BG Trauma Center Tübingen, Eberhard Karls University of Tübingen, 72076, Tübingen, Germany
| | - Fabian Springer
- Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, Eberhard Karls University Tübingen, Hoppe-Seyler-Str. 3, 72076, Tübingen, Germany
- Department of Diagnostic Radiology, BG Trauma Center Tübingen, Eberhard Karls University Tübingen, Tübingen, Germany
| | - Mathias Kraus
- Institute of Information Systems, FAU Erlangen-Nuremberg, Nuremberg, Germany
| |
Collapse
|
22
|
Bousson V, Attané G, Benoist N, Perronne L, Diallo A, Hadid-Beurrier L, Martin E, Hamzi L, Depil Duval A, Revue E, Vicaut E, Salvat C. Artificial Intelligence for Detecting Acute Fractures in Patients Admitted to an Emergency Department: Real-Life Performance of Three Commercial Algorithms. Acad Radiol 2023; 30:2118-2139. [PMID: 37468377 DOI: 10.1016/j.acra.2023.06.016] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Revised: 06/08/2023] [Accepted: 06/20/2023] [Indexed: 07/21/2023]
Abstract
RATIONALE AND OBJECTIVES Interpreting radiographs in emergency settings is stressful and a burden for radiologists. The main objective was to assess the performance of three commercially available artificial intelligence (AI) algorithms for detecting acute peripheral fractures on radiographs in daily emergency practice. MATERIALS AND METHODS Radiographs were collected from consecutive patients admitted for skeletal trauma at our emergency department over a period of 2 months. Three AI algorithms-SmartUrgence, Rayvolve, and BoneView-were used to analyze 13 body regions. Four musculoskeletal radiologists determined the ground truth from radiographs. The diagnostic performance of the three AI algorithms was calculated at the level of the radiography set. Accuracies, sensitivities, and specificities for each algorithm and two-by-two comparisons between algorithms were obtained. Analyses were performed for the whole population and for subgroups of interest (sex, age, body region). RESULTS A total of 1210 patients were included (mean age 41.3 ± 18.5 years; 742 [61.3%] men), corresponding to 1500 radiography sets. The fracture prevalence among the radiography sets was 23.7% (356/1500). Accuracy was 90.1%, 71.0%, and 88.8% for SmartUrgence, Rayvolve, and BoneView, respectively; sensitivity 90.2%, 92.6%, and 91.3%, with specificity 92.5%, 70.4%, and 90.5%. Accuracy and specificity were significantly higher for SmartUrgence and BoneView than Rayvolve for the whole population (P < .0001) and for subgroups. The three algorithms did not differ in sensitivity (P = .27). For SmartUrgence, subgroups did not significantly differ in accuracy, specificity, or sensitivity. For Rayvolve, accuracy and specificity were significantly higher with age 27-36 than ≥53 years (P = .0029 and P = .0019). Specificity was higher for the subgroup knee than foot (P = .0149). 
For BoneView, accuracy was significantly higher for the subgroups knee than foot (P = .0006) and knee than wrist/hand (P = .0228). Specificity was significantly higher for the subgroups knee than foot (P = .0003) and ankle than foot (P = .0195). CONCLUSION The performance of AI detection of acute peripheral fractures in daily radiological practice in an emergency department was good to high and was related to the AI algorithm, patient age, and body region examined.
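Several entries in this list report sensitivity, specificity, and accuracy at the radiography-set level, with confidence intervals obtained by bootstrapping. A minimal sketch of how such figures are derived from a 2×2 confusion table, with a percentile-bootstrap interval; the counts and case labels are hypothetical, not data from any cited study:

```python
import random

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

def bootstrap_ci(labels, preds, metric="accuracy", n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval, resampling cases with replacement."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        tp = sum(labels[i] == 1 and preds[i] == 1 for i in idx)
        fp = sum(labels[i] == 0 and preds[i] == 1 for i in idx)
        fn = sum(labels[i] == 1 and preds[i] == 0 for i in idx)
        tn = sum(labels[i] == 0 and preds[i] == 0 for i in idx)
        if tp + fn == 0 or tn + fp == 0:
            continue  # resample lacks one class; skip it
        stats.append(diagnostic_metrics(tp, fp, fn, tn)[metric])
    stats.sort()
    k = len(stats)
    return stats[int(alpha / 2 * k)], stats[min(k - 1, int((1 - alpha / 2) * k))]
```

For example, 90 true positives, 10 false positives, 10 false negatives, and 90 true negatives give sensitivity, specificity, and accuracy of 0.90 each.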
Affiliation(s)
- Valérie Bousson, Grégoire Attané, Nicolas Benoist, Laetitia Perronne, Lounis Hamzi: Radiology Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, 2 rue Ambroise Paré, 75010, Paris, France
- Abdourahmane Diallo, Eric Vicaut: Clinical Research Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France
- Lama Hadid-Beurrier, Cécile Salvat: Medical Physics Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France
- Emmanuel Martin: Information Technology Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France
- Arnaud Depil Duval, Eric Revue: Emergency Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (Arnaud Depil Duval also: Emergency Department, Saint-Joseph's Hospital, Paris, France)
23. Hosch R, Baldini G, Parmar V, Borys K, Koitka S, Engelke M, Arzideh K, Ulrich M, Nensa F. FHIR-PYrate: a data science friendly Python package to query FHIR servers. BMC Health Serv Res 2023; 23:734. [PMID: 37415138] [DOI: 10.1186/s12913-023-09498-1]
Abstract
BACKGROUND We present FHIR-PYrate, a Python package to handle the full clinical data collection and extraction process. The software is to be plugged into a modern hospital domain, where electronic patient records are used to handle the entire patient's history. Most research institutes follow the same procedures to build study cohorts, but mainly in a non-standardized and repetitive way. As a result, researchers spend time writing boilerplate code, which could be used for more challenging tasks. METHODS The package can improve and simplify existing processes in the clinical research environment. It collects all needed functionalities into a straightforward interface that can be used to query a FHIR server, download imaging studies and filter clinical documents. The full capacity of the search mechanism of the FHIR REST API is available to the user, leading to a uniform querying process for all resources, thus simplifying the customization of each use case. Additionally, valuable features like parallelization and filtering are included to make it more performant. RESULTS As an exemplary practical application, the package can be used to analyze the prognostic significance of routine CT imaging and clinical data in breast cancer with tumor metastases in the lungs. In this example, the initial patient cohort is first collected using ICD-10 codes. For these patients, the survival information is also gathered. Some additional clinical data is retrieved, and CT scans of the thorax are downloaded. Finally, the survival analysis can be computed using a deep learning model with the CT scans, the TNM staging and positivity of relevant markers as input. This process may vary depending on the FHIR server and available clinical data, and can be customized to cover even more use cases. CONCLUSIONS FHIR-PYrate opens up the possibility to quickly and easily retrieve FHIR data, download image data, and search medical documents for keywords within a Python package. 
With the demonstrated functionality, FHIR-PYrate provides an easy way to assemble research cohorts automatically.
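FHIR-PYrate's own API is documented with the package; as an illustrative sketch of the underlying FHIR REST search mechanism it wraps (server URL, ICD-10 code, and Bundle contents below are hypothetical), cohort building boils down to composing a search URL and walking the returned searchset Bundle:

```python
from urllib.parse import urlencode

def build_fhir_search_url(base_url, resource_type, params):
    """Compose a FHIR REST API search URL for the given resource type."""
    return f"{base_url.rstrip('/')}/{resource_type}?{urlencode(params)}"

def patient_refs_from_bundle(bundle):
    """Collect the subject (patient) references from a searchset Bundle of Conditions."""
    refs = []
    for entry in bundle.get("entry", []):
        ref = entry.get("resource", {}).get("subject", {}).get("reference")
        if ref:
            refs.append(ref)
    return refs

# Hypothetical query: breast cancer Conditions coded ICD-10 C50.9, 100 per page.
url = build_fhir_search_url(
    "https://fhir.example.org/R4",
    "Condition",
    {"code": "http://hl7.org/fhir/sid/icd-10|C50.9", "_count": "100"},
)
```

A real client would also page through the Bundle's `next` links; per the abstract, FHIR-PYrate additionally parallelizes such queries and filters the results for the researcher.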
Affiliation(s)
- René Hosch, Giulia Baldini, Vicky Parmar, Katarzyna Borys, Sven Koitka, Merlin Engelke, Felix Nensa: Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, Essen, 45147, Germany; Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, Essen, 45131, Germany
- Kamyar Arzideh, Moritz Ulrich: Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, Essen, 45131, Germany; Central IT Department, Data Integration Center, University Hospital Essen, Hufelandstraße 55, Essen, 45147, Germany
24. Gasmi I, Calinghen A, Parienti JJ, Belloy F, Fohlen A, Pelage JP. Comparison of diagnostic performance of a deep learning algorithm, emergency physicians, junior radiologists and senior radiologists in the detection of appendicular fractures in children. Pediatr Radiol 2023; 53:1675-1684. [PMID: 36877239] [DOI: 10.1007/s00247-023-05621-w]
Abstract
BACKGROUND Advances have been made in the use of artificial intelligence (AI) in diagnostic imaging, particularly in the detection of fractures on conventional radiographs. Few studies have examined fracture detection in the pediatric population. The anatomical variations in children, which evolve with age, require studies specific to this population, and failure to diagnose fractures early in children may have serious consequences for growth. OBJECTIVE To evaluate the performance of an AI algorithm based on deep neural networks in detecting traumatic appendicular fractures in a pediatric population, and to compare the sensitivity, specificity, positive predictive value and negative predictive value of different readers and of the AI algorithm. MATERIALS AND METHODS This retrospective study of 878 patients younger than 18 years evaluated conventional radiographs obtained after recent non-life-threatening trauma. All radiographs of the shoulder, arm, elbow, forearm, wrist, hand, leg, knee, ankle and foot were evaluated. The diagnostic performance of a consensus of radiology experts in pediatric imaging (reference standard) was compared with that of pediatric radiologists, emergency physicians, senior residents and junior residents. The predictions made by the AI algorithm were compared with the annotations made by the different physicians. RESULTS The algorithm detected 174 of 182 fractures, corresponding to a sensitivity of 95.6%, a specificity of 91.64% and a negative predictive value of 98.76%. The AI's sensitivity was close to those of pediatric radiologists (98.35%) and senior residents (95.05%) and above those of emergency physicians (81.87%) and junior residents (90.1%). The algorithm identified 3 (1.6%) fractures not initially seen by pediatric radiologists. CONCLUSION This study suggests that deep learning algorithms can be useful in improving the detection of fractures in children.
Affiliation(s)
- Idriss Gasmi, Arvin Calinghen, Frederique Belloy: Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France
- Jean-Jacques Parienti: GRAM 2.0 EA2656 UNICAEN Normandie, University Hospital, Caen, France; Department of Clinical Research, Caen University Hospital, Caen, France
- Audrey Fohlen, Jean-Pierre Pelage: Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France; UNICAEN CEA CNRS ISTCT-CERVOxy, Normandie University, 14000, Caen, France
25. Agrawal A, Khatri GD, Khurana B, Sodickson AD, Liang Y, Dreizin D. A survey of ASER members on artificial intelligence in emergency radiology: trends, perceptions, and expectations. Emerg Radiol 2023; 30:267-277. [PMID: 36913061] [PMCID: PMC10362990] [DOI: 10.1007/s10140-023-02121-0]
Abstract
PURPOSE There is a growing body of diagnostic performance studies for emergency radiology-related artificial intelligence/machine learning (AI/ML) tools; however, little is known about user preferences, concerns, experiences, expectations, and the degree of penetration of AI tools in emergency radiology. Our aim is to conduct a survey of the current trends, perceptions, and expectations regarding AI among American Society of Emergency Radiology (ASER) members. METHODS An anonymous and voluntary online survey questionnaire was e-mailed to all ASER members, followed by two reminder e-mails. A descriptive analysis of the data was conducted, and results summarized. RESULTS A total of 113 members responded (response rate 12%). The majority were attending radiologists (90%) with greater than 10 years' experience (80%) and from an academic practice (65%). Most (55%) reported use of commercial AI CAD tools in their practice. Workflow prioritization based on pathology detection, injury or disease severity grading and classification, quantitative visualization, and auto-population of structured reports were identified as high-value tasks. Respondents overwhelmingly indicated a need for explainable and verifiable tools (87%) and the need for transparency in the development process (80%). Most respondents did not feel that AI would reduce the need for emergency radiologists in the next two decades (72%) or diminish interest in fellowship programs (58%). Negative perceptions pertained to potential for automation bias (23%), over-diagnosis (16%), poor generalizability (15%), negative impact on training (11%), and impediments to workflow (10%). CONCLUSION ASER member respondents are in general optimistic about the impact of AI in the practice of emergency radiology and its impact on the popularity of emergency radiology as a subspecialty. The majority expect to see transparent and explainable AI models with the radiologist as the decision-maker.
Affiliation(s)
- Anjali Agrawal: New Delhi operations, Teleradiology Solutions, Delhi, India
- Garvit D Khatri: Nuclear Medicine, Department of Radiology, University of Washington School of Medicine, Seattle, WA, USA
- Bharti Khurana, Aaron D Sodickson: Emergency Radiology, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Yuanyuan Liang: Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
- David Dreizin: Trauma and Emergency Radiology, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA
26. Dreizin D. The American Society of Emergency Radiology (ASER) AI/ML expert panel: inception, mandate, work products, and goals. Emerg Radiol 2023; 30:279-283. [PMID: 37071272] [DOI: 10.1007/s10140-023-02135-8]
Affiliation(s)
- David Dreizin: Emergency and Trauma Imaging, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA
27. Rozwag C, Valentini F, Cotten A, Demondion X, Preux P, Jacques T. Elbow trauma in children: development and evaluation of radiological artificial intelligence models. Research in Diagnostic and Interventional Imaging 2023; 6:100029. [PMID: 39077546] [PMCID: PMC11265386] [DOI: 10.1016/j.redii.2023.100029]
Abstract
Rationale and Objectives To develop artificial intelligence (AI) models able to detect post-traumatic injuries on pediatric elbow X-rays, then to evaluate their performance in silico and their impact on radiologists' interpretation in clinical practice. Material and Methods A total of 1956 pediatric elbow radiographs performed following a trauma were retrospectively collected from 935 patients aged between 0 and 18 years. Deep convolutional neural networks were trained on these X-rays. The two best models were selected and then evaluated on an external test set of 120 patients, whose X-rays had been performed on different radiological equipment in another time period. Eight radiologists interpreted this external test set without, then with, the help of the AI models. Results Two models stood out: model 1 had an accuracy of 95.8% and an AUROC of 0.983, and model 2 had an accuracy of 90.5% and an AUROC of 0.975. On the external test set, model 1 retained a good accuracy of 82.5% and an AUROC of 0.916, while model 2's accuracy fell to 69.2% and its AUROC to 0.793. Model 1 significantly improved radiologists' sensitivity (0.82 to 0.88, P = 0.016) and accuracy (0.86 to 0.88, P = 0.047), while model 2 significantly decreased readers' specificity (0.86 to 0.83, P = 0.031). Conclusion End-to-end development of a deep learning model to assess post-traumatic injuries on elbow X-rays in children was feasible and showed that models with similar metrics in silico can unpredictably lead radiologists to either improve or lower their performance in clinical settings.
Affiliation(s)
- Clémence Rozwag, Anne Cotten, Xavier Demondion, Thibaut Jacques: Université de Lille, Lille, France; Centre hospitalier universitaire de Lille, Lille, France
- Franck Valentini, Philippe Preux: Université de Lille, Lille, France; Inria Lille – Nord Europe, équipe Scool, Lille, France; CNRS UMR 9189 – CRIStAL, Lille, France; École Centrale de Lille, Lille, France
28. Dreizin D, Staziaki PV, Khatri GD, Beckmann NM, Feng Z, Liang Y, Delproposto ZS, Klug M, Spann JS, Sarkar N, Fu Y. Artificial intelligence CAD tools in trauma imaging: a scoping review from the American Society of Emergency Radiology (ASER) AI/ML Expert Panel. Emerg Radiol 2023; 30:251-265. [PMID: 36917287] [PMCID: PMC10640925] [DOI: 10.1007/s10140-023-02120-1]
Abstract
BACKGROUND AI/ML CAD tools can potentially improve outcomes in the high-stakes, high-volume model of trauma radiology. No prior scoping review has been undertaken to comprehensively assess tools in this subspecialty. PURPOSE To map the evolution and current state of trauma radiology CAD tools along key dimensions of technology readiness. METHODS Following a search of databases, abstract screening, and full-text document review, CAD tool maturity was charted using elements of data curation, performance validation, outcomes research, explainability, user acceptance, and funding patterns. Descriptive statistics were used to illustrate key trends. RESULTS A total of 4052 records were screened, and 233 full-text articles were selected for content analysis. Twenty-one papers described FDA-approved commercial tools, and 212 reported algorithm prototypes. Works ranged from foundational research to multi-reader multi-case trials with heterogeneous external data. Scalable convolutional neural network-based implementations increased steeply after 2016 and were used in all commercial products; however, options for explainability were narrow. Of FDA-approved tools, 9/10 performed detection tasks. Dataset sizes ranged from < 100 to > 500,000 patients, and commercialization coincided with public dataset availability. Cross-sectional torso datasets were uniformly small. Data curation methods with ground truth labeling by independent readers were uncommon. No papers assessed user acceptance, and no method included human-computer interaction. The USA and China had the highest research output and frequency of research funding. CONCLUSIONS Trauma imaging CAD tools are likely to improve patient care but are currently in an early stage of maturity, with few FDA-approved products for a limited number of uses. The scarcity of high-quality annotated data remains a major barrier.
Affiliation(s)
- David Dreizin: Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA
- Pedro V Staziaki: Cardiothoracic Imaging, Department of Radiology, Larner College of Medicine, University of Vermont, Burlington, VT, USA
- Garvit D Khatri: Department of Radiology, University of Washington School of Medicine, Seattle, WA, USA
- Nicholas M Beckmann: Memorial Hermann Orthopedic & Spine Hospital, McGovern Medical School at UTHealth, Houston, TX, USA
- Zhaoyong Feng, Yuanyuan Liang: Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
- Zachary S Delproposto: Division of Emergency Radiology, Department of Radiology, University of Michigan, Ann Arbor, MI, USA
- J Stephen Spann: Department of Radiology, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL, USA
- Nathan Sarkar: University of Maryland School of Medicine, Baltimore, MD, USA
- Yunting Fu: Health Sciences and Human Services Library, University of Maryland, Baltimore, Baltimore, MD, USA
29. Simon S, Fischer B, Rinner A, Hummer A, Frank BJH, Mitterer JA, Huber S, Aichmair A, Schwarz GM, Hofstaetter JG. Body height estimation from automated length measurements on standing long leg radiographs using artificial intelligence. Sci Rep 2023; 13:8504. [PMID: 37231033] [DOI: 10.1038/s41598-023-34670-2]
Abstract
Artificial intelligence (AI) allows large-scale analyses of long-leg radiographs (LLRs). We used this technology to derive an update of the classical regression formulae of Trotter and Gleser, which are frequently used to infer stature from long-bone measurements. We analyzed calibrated, standing LLRs from 4200 participants taken between 2015 and 2020. Automated landmark placement was conducted using the AI algorithm LAMA™, and the measurements were used to determine femoral, tibial and total leg length. Linear regression equations were subsequently derived for stature estimation. The estimated regression equations have a shallower slope and a larger intercept in both males and females (femur, male: slope = 2.08, intercept = 77.49; femur, female: slope = 1.9, intercept = 79.81) compared with the formulae previously derived by Trotter and Gleser in 1952 (femur, male: slope = 2.38, intercept = 61.41; femur, female: slope = 2.47, intercept = 54.13) and in 1958 (femur, male: slope = 2.32, intercept = 65.53). All long-bone measurements showed a high correlation (r ≥ 0.76) with stature. The linear equations we derived tended to overestimate stature in short persons and underestimate it in tall persons. The differences in slopes and intercepts from those published by Trotter and Gleser (1952, 1958) may result from an ongoing secular increase in stature. Our study illustrates that AI algorithms are a promising new tool enabling large-scale measurements.
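The reported femur equations are plain linear regressions of stature on bone length. A small sketch comparing them, with coefficients taken from the abstract; units are assumed to be centimetres, as in the classical Trotter and Gleser formulae:

```python
# Femur-based regression coefficients (slope, intercept) as reported in the abstract.
# Units are assumed to be centimetres, matching the classical Trotter and Gleser work.
FEMUR_EQUATIONS = {
    "study_2023_male": (2.08, 77.49),
    "study_2023_female": (1.90, 79.81),
    "trotter_gleser_1952_male": (2.38, 61.41),
    "trotter_gleser_1952_female": (2.47, 54.13),
    "trotter_gleser_1958_male": (2.32, 65.53),
}

def estimate_stature(femur_length, equation):
    """Linear stature estimate: slope * femur_length + intercept."""
    slope, intercept = FEMUR_EQUATIONS[equation]
    return slope * femur_length + intercept
```

For a hypothetical 45 cm femur, the updated male equation yields about 171.1 cm versus about 168.5 cm for Trotter and Gleser 1952, illustrating the shallower slope and larger intercept.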
Affiliation(s)
- Sebastian Simon, Alexander Aichmair, Jochen G Hofstaetter: Michael Ogon Laboratory for Orthopaedic Research, Orthopaedic Hospital Vienna-Speising, Speisinger Straße 109, 1130, Vienna, Austria; 2nd Department, Orthopaedic Hospital Vienna-Speising, Speisinger Straße 109, 1130, Vienna, Austria
- Barbara Fischer: Unit for Theoretical Biology, Department of Evolutionary Biology, University of Vienna, Djerassiplatz 1, 1030, Vienna, Austria
- Alexandra Rinner, Bernhard J H Frank, Jennyfer A Mitterer: Michael Ogon Laboratory for Orthopaedic Research, Orthopaedic Hospital Vienna-Speising, Speisinger Straße 109, 1130, Vienna, Austria
- Allan Hummer: ImageBiopsy Lab GmbH, Zehetnergasse 6/2/2, 1140, Vienna, Austria
- Stephanie Huber, Gilbert M Schwarz: Michael Ogon Laboratory for Orthopaedic Research, Orthopaedic Hospital Vienna-Speising, Speisinger Straße 109, 1130, Vienna, Austria; Center for Anatomy and Cell Biology, Medical University of Vienna, Währingerstraße 13, 1090, Vienna, Austria
30. Anderson PG, Baum GL, Keathley N, Sicular S, Venkatesh S, Sharma A, Daluiski A, Potter H, Hotchkiss R, Lindsey RV, Jones RM. Deep Learning Assistance Closes the Accuracy Gap in Fracture Detection Across Clinician Types. Clin Orthop Relat Res 2023; 481:580-588. [PMID: 36083847] [PMCID: PMC9928835] [DOI: 10.1097/corr.0000000000002385]
Abstract
BACKGROUND Missed fractures are the most common diagnostic errors in musculoskeletal imaging and can result in treatment delays and preventable morbidity. Deep learning, a subfield of artificial intelligence, can be used to accurately detect fractures by training algorithms to emulate the judgments of expert clinicians. Deep learning systems that detect fractures are often limited to specific anatomic regions and require regulatory approval to be used in practice. Once these hurdles are overcome, deep learning systems have the potential to improve clinician diagnostic accuracy and patient care. QUESTIONS/PURPOSES This study aimed to evaluate whether a Food and Drug Administration-cleared deep learning system that identifies fractures in adult musculoskeletal radiographs would improve diagnostic accuracy for fracture detection across different types of clinicians. Specifically, this study asked: (1) What are the trends in musculoskeletal radiograph interpretation by different clinician types in the publicly available Medicare claims data? (2) Does the deep learning system improve clinician accuracy in diagnosing fractures on radiographs and, if so, is there a greater benefit for clinicians with limited training in musculoskeletal imaging? METHODS We used the publicly available Medicare Part B Physician/Supplier Procedure Summary data provided by the Centers for Medicare & Medicaid Services to determine the trends in musculoskeletal radiograph interpretation by clinician type. In addition, we conducted a multiple-reader, multiple-case study to assess whether clinician accuracy in diagnosing fractures on radiographs was superior when aided by the deep learning system compared with when unaided. 
Twenty-four clinicians (radiologists, orthopaedic surgeons, physician assistants, primary care physicians, and emergency medicine physicians) with a median (range) of 16 years (2 to 37) of experience postresidency each assessed 175 unique musculoskeletal radiographic cases under aided and unaided conditions (4200 total case-physician pairs per condition). These cases were comprised of radiographs from 12 different anatomic regions (ankle, clavicle, elbow, femur, forearm, hip, humerus, knee, pelvis, shoulder, tibia and fibula, and wrist) and were randomly selected from 12 hospitals and healthcare centers. The gold standard for fracture diagnosis was the majority opinion of three US board-certified orthopaedic surgeons or radiologists who independently interpreted the case. The clinicians' diagnostic accuracy was determined by the area under the curve (AUC) of the receiver operating characteristic (ROC) curve, sensitivity, and specificity. Secondary analyses evaluated the fracture miss rate (1-sensitivity) by clinicians with and without extensive training in musculoskeletal imaging. RESULTS Medicare claims data revealed that physician assistants showed the greatest increase in interpretation of musculoskeletal radiographs within the analyzed time period (2012 to 2018), although clinicians with extensive training in imaging (radiologists and orthopaedic surgeons) still interpreted the majority of the musculoskeletal radiographs. Clinicians aided by the deep learning system had higher accuracy diagnosing fractures in radiographs compared with when unaided (unaided AUC: 0.90 [95% CI 0.89 to 0.92]; aided AUC: 0.94 [95% CI 0.93 to 0.95]; difference in least square mean per the Dorfman, Berbaum, Metz model AUC: 0.04 [95% CI 0.01 to 0.07]; p < 0.01). 
Clinician sensitivity increased when aided compared with when unaided (aided: 90% [95% CI 88% to 92%]; unaided: 82% [95% CI 79% to 84%]), and specificity increased when aided compared with when unaided (aided: 92% [95% CI 91% to 93%]; unaided: 89% [95% CI 88% to 90%]). Clinicians with limited training in musculoskeletal imaging missed a higher percentage of fractures when unaided compared with radiologists (miss rate for clinicians with limited imaging training: 20% [95% CI 17% to 24%]; miss rate for radiologists: 14% [95% CI 9% to 19%]). However, when assisted by the deep learning system, clinicians with limited training in musculoskeletal imaging reduced their fracture miss rate, resulting in a similar miss rate to radiologists (miss rate for clinicians with limited imaging training: 9% [95% CI 7% to 12%]; miss rate for radiologists: 10% [95% CI 6% to 15%]). CONCLUSION Clinicians were more accurate at diagnosing fractures when aided by the deep learning system, particularly those clinicians with limited training in musculoskeletal image interpretation. Reducing the number of missed fractures may allow for improved patient care and increased patient mobility. LEVEL OF EVIDENCE Level III, diagnostic study.
Affiliation(s)
- Serge Sicular: Imagen Technologies, New York, NY, USA; The Mount Sinai Hospital, New York, NY, USA
31. Kim S, Rebmann P, Tran PH, Kellner E, Reisert M, Steybe D, Bayer J, Bamberg F, Kotter E, Russe M. Multiclass datasets expand neural network utility: an example on ankle radiographs. Int J Comput Assist Radiol Surg 2023; 18:819-826. [PMID: 36729290] [PMCID: PMC10113347] [DOI: 10.1007/s11548-023-02839-9]
Abstract
PURPOSE Artificial intelligence in computer vision has been increasingly adapted in clinical application since the implementation of neural networks, potentially providing incremental information beyond the mere detection of pathology. As its algorithmic approach propagates input variation, neural networks could be used to identify and evaluate relevant image features. In this study, we introduce a basic dataset structure and demonstrate a pertaining use case. METHODS A multidimensional classification of ankle x-rays (n = 1493) rating a variety of features including fracture certainty was used to confirm its usability for separating input variations. We trained a customized neural network on the task of fracture detection using a state-of-the-art preprocessing and training protocol. By grouping the radiographs into subsets according to their image features, the influence of selected features on model performance was evaluated via selective training. RESULTS The models trained on our dataset outperformed most comparable models in the current literature, with an ROC AUC of 0.943. Excluding ankle x-rays with signs of surgery improved fracture classification performance (AUC 0.955), while limiting the training set to otherwise healthy ankles with and without fracture had no consistent effect. CONCLUSION Using multiclass datasets and comparing model performance, we were able to demonstrate that signs of surgery are a confounding factor whose elimination improved our model. In contrast, eliminating pathologies other than fracture had no effect on model performance, suggesting that feature variability benefits robust model training. Thus, multiclass datasets allow for the evaluation of distinct image features, deepening our understanding of pathology imaging.
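The subset analysis described above, scoring the model separately on feature-defined groups to expose confounders such as signs of surgery, can be sketched as follows. The feature keys and the toy predictor are illustrative assumptions, not the authors' code:

```python
from collections import defaultdict

def score_by_feature(cases, feature, predict):
    """Score a model separately on each value of one image feature.

    cases: iterable of (features_dict, label); predict: features -> label.
    Diverging subset scores can flag a confounding feature.
    """
    groups = defaultdict(lambda: [0, 0])  # feature value -> [correct, total]
    for feats, label in cases:
        bucket = groups[feats[feature]]
        bucket[0] += int(predict(feats) == label)
        bucket[1] += 1
    return {value: correct / total for value, (correct, total) in groups.items()}

# Toy multiclass-labelled data: a predictor fooled by signs of prior surgery
cases = [
    ({"surgery": False, "fracture": True}, 1),
    ({"surgery": False, "fracture": False}, 0),
    ({"surgery": True, "fracture": False}, 0),
    ({"surgery": True, "fracture": True}, 1),
]
naive = lambda f: 1 if (f["fracture"] or f["surgery"]) else 0
scores = score_by_feature(cases, "surgery", naive)  # worse on surgery cases
```

The gap between the two subset accuracies is the kind of signal the authors use to decide which feature groups to exclude from training.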
Affiliation(s)
- Suam Kim
- Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Medical Center-University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Philipp Rebmann
- Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Medical Center-University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Phuong Hien Tran
- Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Medical Center-University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Elias Kellner
- Department of Medical Physics, Faculty of Medicine, Medical Center-University of Freiburg, University of Freiburg, Freiburg, Germany
- Marco Reisert
- Department of Medical Physics, Faculty of Medicine, Medical Center-University of Freiburg, University of Freiburg, Freiburg, Germany
- David Steybe
- Department of Oral and Maxillofacial Surgery, Faculty of Medicine, Medical Center-University of Freiburg, Freiburg, Germany
- Jörg Bayer
- Department of Trauma and Orthopaedic Surgery, Schwarzwald-Baar Hospital, Villingen-Schwenningen, Germany
- Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Medical Center-University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Medical Center-University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Maximilian Russe
- Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Medical Center-University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
32
A Prospective Approach to Integration of AI Fracture Detection Software in Radiographs into Clinical Workflow. Life (Basel, Switzerland) 2023; 13:life13010223. [PMID: 36676172 PMCID: PMC9864518 DOI: 10.3390/life13010223] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Revised: 01/07/2023] [Accepted: 01/11/2023] [Indexed: 01/15/2023]
Abstract
Gleamer BoneView© is a commercially available AI algorithm for fracture detection in radiographs. We aimed to test whether the algorithm, prospectively integrated into the clinical workflow, could improve residents' sensitivity and specificity for fracture detection. Radiographs with a clinical query for fracture that were initially reviewed by two residents were randomly assigned and included. A preliminary diagnosis of a possible fracture was made. Thereafter, the AI decision on the presence and location of possible fractures was shown, and changes to the diagnosis could be made. The final diagnosis of fracture was made by a board-certified radiologist with over eight years of experience or, if available, by cross-sectional imaging. Sensitivity and specificity of the human report, the AI diagnosis, and the assisted report were calculated in comparison with the final expert diagnosis. A total of 1163 exams in 735 patients were included, with a total of 367 fractures (31.56%). Pure human sensitivity was 84.74%, and AI sensitivity was 86.92%. Thirty-five changes were made after the AI results were shown, 33 of which resulted in the correct diagnosis, yielding 25 additionally found fractures. This resulted in a sensitivity of 91.28% for the assisted report. Specificity was 97.11%, 84.67%, and 97.36%, respectively. AI assistance showed an increase in sensitivity for both residents, without a loss of specificity.
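The workflow above reduces to scoring three prediction vectors (resident, AI, assisted) against the expert reference. A hedged sketch with made-up reads; the simple adopt-all-AI-positives revision rule is an assumption, since in the study residents could also reject AI findings:

```python
def sensitivity(preds, truth):
    """Fraction of reference-positive cases called positive."""
    pos = [p for p, t in zip(preds, truth) if t == 1]
    return sum(pos) / len(pos)

def specificity(preds, truth):
    """Fraction of reference-negative cases called negative."""
    neg = [p for p, t in zip(preds, truth) if t == 0]
    return sum(1 - p for p in neg) / len(neg)

# Made-up reads against the expert reference standard (1 = fracture called)
truth    = [1, 1, 1, 1, 0, 0, 0, 0]
resident = [1, 1, 0, 0, 0, 0, 0, 1]  # preliminary human read
ai       = [1, 1, 1, 0, 0, 1, 0, 0]  # stand-alone AI
# Simplified revision rule: the resident adopts every AI-positive call
assisted = [r or a for r, a in zip(resident, ai)]
```

With these toy vectors the assisted read recovers a fracture the resident missed, mirroring the 25 additionally found fractures reported above.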
33
Parpaleix A, Parsy C, Cordari M, Mejdoubi M. Assessment of a combined musculoskeletal and chest deep learning-based detection solution in an emergency setting. Eur J Radiol Open 2023; 10:100482. [PMID: 36941993 PMCID: PMC10023863 DOI: 10.1016/j.ejro.2023.100482] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2022] [Revised: 01/31/2023] [Accepted: 03/01/2023] [Indexed: 03/12/2023] Open
Abstract
Rationale and objectives Triage and diagnostic deep learning-based support solutions have started to take hold in everyday emergency radiology practice with the hope of alleviating workloads. Although previous works have shown that artificial intelligence (AI) may increase radiologist and/or emergency physician reading performance, they were restricted to specific finding, body-part, and/or age subgroups, without evaluating a routine emergency workflow composed of chest and musculoskeletal adult and pediatric cases. We aimed to evaluate a commercial deep learning solution covering multiple musculoskeletal and chest radiographic findings on an adult and pediatric emergency workflow, focusing on discrepancies between emergency and radiology physicians. Material and methods This retrospective, monocentric and observational study included 1772 patients who underwent an emergency radiograph between July and October 2020, excluding spine, skull and plain abdomen procedures. Emergency and radiology reports, obtained without AI as part of the clinical workflow, were collected, and discordant cases were reviewed to obtain the radiology reference standard. Case-level AI outputs and emergency reports were compared to the reference standard. DeLong and Wald tests were used to compare ROC-AUC and sensitivity/specificity, respectively. Results Results showed an overall AI ROC-AUC of 0.954, with no difference across age or body part subgroups. Real-life emergency physicians' sensitivity was 93.7 %, not significantly different from the AI model (P = 0.105); however, 172/1772 (9.7 %) cases were misdiagnosed by emergency physicians, and in this subset AI accuracy was 90.1 %. Conclusion This study highlighted that a multiple-findings AI solution for emergency radiographs is efficient and complementary to emergency physicians, and could help reduce misdiagnosis in the absence of immediate radiological expertise.
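The study compares ROC-AUCs with DeLong's test. As a simple stand-in (not DeLong's method), the AUC can be computed from the Mann-Whitney statistic and a percentile bootstrap can attach a confidence interval; both are sketched below on toy scores:

```python
import random

def auc(scores, labels):
    """ROC-AUC via the Mann-Whitney statistic (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(scores, labels, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUC (a stand-in for DeLong's test)."""
    rng = random.Random(seed)
    n, stats = len(scores), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < n:  # the resample must contain both classes
            stats.append(auc([scores[i] for i in idx], ys))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi
```

DeLong's test additionally accounts for the correlation between two AUCs measured on the same cases, which this sketch does not attempt.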
Affiliation(s)
- Alexandre Parpaleix
- Department of Radiology, Valenciennes General Hospital, Valenciennes, France
- Correspondence to: Département de radiologie, Centre Hospitalier de Valenciennes, 114 Av. Desandrouin, 59300 Valenciennes, France
- Clémence Parsy
- Department of Radiology, Valenciennes General Hospital, Valenciennes, France
- Mehdi Mejdoubi
- Department of Radiology, Valenciennes General Hospital, Valenciennes, France
34
Hendrix N, Hendrix W, van Dijke K, Maresch B, Maas M, Bollen S, Scholtens A, de Jonge M, Ong LLS, van Ginneken B, Rutten M. Musculoskeletal radiologist-level performance by using deep learning for detection of scaphoid fractures on conventional multi-view radiographs of hand and wrist. Eur Radiol 2023; 33:1575-1588. [PMID: 36380195 PMCID: PMC9935716 DOI: 10.1007/s00330-022-09205-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 09/19/2022] [Accepted: 09/25/2022] [Indexed: 11/16/2022]
Abstract
OBJECTIVES To assess how an artificial intelligence (AI) algorithm performs against five experienced musculoskeletal radiologists in diagnosing scaphoid fractures and whether it aids their diagnosis on conventional multi-view radiographs. METHODS Four datasets of conventional hand, wrist, and scaphoid radiographs were retrospectively acquired at two hospitals (hospitals A and B). Dataset 1 (12,990 radiographs from 3353 patients, hospital A) and dataset 2 (1117 radiographs from 394 patients, hospital B) were used for training and testing a scaphoid localization and laterality classification component. Dataset 3 (4316 radiographs from 840 patients, hospital A) and dataset 4 (688 radiographs from 209 patients, hospital B) were used for training and testing the fracture detector. The algorithm was compared with the radiologists in an observer study. Evaluation metrics included sensitivity, specificity, positive predictive value (PPV), area under the receiver operating characteristic curve (AUC), Cohen's kappa coefficient (κ), fracture localization precision, and reading time. RESULTS The algorithm detected scaphoid fractures with a sensitivity of 72%, specificity of 93%, PPV of 81%, and AUC of 0.88. The AUC of the algorithm did not differ from that of any radiologist (0.87 [radiologists' mean], p ≥ .05). AI assistance improved five out of ten pairs of inter-observer Cohen's κ agreements (p < .05) and reduced reading time for four radiologists (p < .001), but did not improve other metrics in the majority of radiologists (p ≥ .05). CONCLUSIONS The AI algorithm detects scaphoid fractures on conventional multi-view radiographs at the level of five experienced musculoskeletal radiologists and could significantly shorten their reading time. KEY POINTS • An artificial intelligence algorithm automatically detects scaphoid fractures on conventional multi-view radiographs at the same level as five experienced musculoskeletal radiologists.
• There is preliminary evidence that automated scaphoid fracture detection can significantly shorten the reading time of musculoskeletal radiologists.
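Inter-observer agreement above is measured with Cohen's κ, which discounts the raw agreement by the chance agreement implied by each rater's marginal frequencies. A minimal implementation (it assumes the two raters are not both constant with identical labels, which would make the denominator zero):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    marg_a, marg_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement under independence, from the raters' marginals
    expected = sum(marg_a[c] * marg_b[c] for c in marg_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

Perfect agreement yields κ = 1, and agreement no better than chance yields κ = 0, which is why κ is preferred over raw percent agreement in observer studies like the one above.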
Affiliation(s)
- Nils Hendrix
- Radiology Department, Jeroen Bosch Ziekenhuis, Henri Dunantstraat 1, 5223 GZ, 's-Hertogenbosch, the Netherlands
- Jheronimus Academy of Data Science, Sint Janssingel 92, 5211 DA, 's-Hertogenbosch, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
- Ward Hendrix
- Radiology Department, Jeroen Bosch Ziekenhuis, Henri Dunantstraat 1, 5223 GZ, 's-Hertogenbosch, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
- Kees van Dijke
- Radiology Department, Noordwest Ziekenhuisgroep, Wilhelminalaan 12, 1815 JD, Alkmaar, the Netherlands
- Bas Maresch
- Radiology Department, Ziekenhuis Gelderse Vallei, Willy Brandtlaan 10, 6717 RP, Ede, the Netherlands
- Mario Maas
- Radiology and Nuclear Medicine Department, Academic Medical Center, Meibergdreef 9, 1105 AZ, Amsterdam, the Netherlands
- Stijn Bollen
- Radiology Department, Groene Hart Ziekenhuis, Bleulandweg 10, 2803 HH, Gouda, the Netherlands
- Alexander Scholtens
- Radiology and Nuclear Medicine Department, Tergooi, Van Riebeeckweg 212, 1213 XZ, Hilversum, the Netherlands
- Milko de Jonge
- Radiology Department, St. Antonius Ziekenhuis, Soestwetering 1, 3543 AZ, Utrecht, the Netherlands
- Lee-Ling Sharon Ong
- Jheronimus Academy of Data Science, Sint Janssingel 92, 5211 DA, 's-Hertogenbosch, the Netherlands
- Cognitive Science and Artificial Intelligence Department, Tilburg University, Warandelaan 2, 5037 AB, Tilburg, the Netherlands
- Bram van Ginneken
- Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
- Matthieu Rutten
- Radiology Department, Jeroen Bosch Ziekenhuis, Henri Dunantstraat 1, 5223 GZ, 's-Hertogenbosch, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
35
Cohen M, Puntonet J, Sanchez J, Kierszbaum E, Crema M, Soyer P, Dion E. Artificial intelligence vs. radiologist: accuracy of wrist fracture detection on radiographs. Eur Radiol 2022; 33:3974-3983. [PMID: 36515712 DOI: 10.1007/s00330-022-09349-3] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Revised: 09/05/2022] [Accepted: 11/29/2022] [Indexed: 12/15/2022]
Abstract
OBJECTIVE To compare the performance of artificial intelligence (AI) with that of radiologists in wrist fracture detection on radiographs. METHODS This retrospective study included 637 patients (1917 radiographs) with wrist trauma between January 2017 and December 2019. The AI software used was a deep neural network algorithm. Ground truth was established by three senior musculoskeletal radiologists who compared the initial radiology reports (IRR) made by non-specialized radiologists, the results of AI, and the combination of AI and IRR (AI+IRR). RESULTS A total of 318 fractures were reported by the senior radiologists in 247 patients. Sensitivity of AI (83%; 95% CI: 78-87%) was significantly greater than that of IRR (76%; 95% CI: 70-81%) (p < 0.001). Specificities were similar for AI (96%; 95% CI: 93-97%) and for IRR (96%; 95% CI: 94-98%) (p = 0.80). The combination AI+IRR had a significantly greater sensitivity (88%; 95% CI: 84-92%) compared to AI and IRR alone (p < 0.001) and a lower specificity (92%; 95% CI: 89-95%) (p < 0.001). The sensitivity for scaphoid fracture detection was acceptable for AI (84%) and IRR (80%) but poor for the detection of other carpal bone fractures (41% for AI and 26% for IRR). CONCLUSIONS The performance of AI in wrist fracture detection on radiographs is better than that of non-specialized radiologists. The combination of AI and the radiologist's analysis yields the best performance. KEY POINTS • Artificial intelligence performs better at wrist fracture detection than non-expert radiologists in daily practice. • The performance of artificial intelligence differs greatly depending on the anatomical area. • The sensitivity of artificial intelligence for the detection of carpal bone fractures is 56%.
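The AI+IRR combination reported above behaves like an OR rule: a case is called positive if either the initial report or the AI flags it. Such a rule can only raise (or preserve) sensitivity and lower (or preserve) specificity, matching the direction of the numbers above. A toy demonstration with made-up reads:

```python
def sens_spec(preds, truth):
    """(sensitivity, specificity) for binary predictions vs a reference."""
    tp = sum(p and t for p, t in zip(preds, truth))
    tn = sum((not p) and (not t) for p, t in zip(preds, truth))
    pos = sum(truth)
    return tp / pos, tn / (len(truth) - pos)

# Toy reads (not the study's data); 1 = fracture called
truth = [1, 1, 1, 0, 0, 0]
irr   = [1, 0, 1, 0, 0, 1]  # initial radiology report
ai    = [1, 1, 0, 0, 1, 0]  # stand-alone AI
combo = [h or a for h, a in zip(irr, ai)]  # OR-rule combination (AI+IRR)
```

Because the combined read inherits every positive call from both sources, it misses nothing either source catches, but it also inherits both sources' false positives.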
Affiliation(s)
- Mathieu Cohen
- Department of Radiology - Hotel Dieu Hospital, Assistance Publique-Hopitaux de Paris, Paris, France
- Université Paris Cité, F-75006, Paris, France
- Julien Puntonet
- Department of Radiology - Hotel Dieu Hospital, Assistance Publique-Hopitaux de Paris, Paris, France
- Université Paris Cité, F-75006, Paris, France
- Julien Sanchez
- Université Paris Cité, F-75006, Paris, France
- Institute of Sports Imaging, French National Institute of Sports (INSEP), Paris, France
- Michel Crema
- Department of Radiology - Hotel Dieu Hospital, Assistance Publique-Hopitaux de Paris, Paris, France
- Institute of Sports Imaging, French National Institute of Sports (INSEP), Paris, France
- Philippe Soyer
- Université Paris Cité, F-75006, Paris, France
- Department of Radiology - Cochin Hospital, Assistance Publique-Hopitaux de Paris, 75014, Paris, France
- Elisabeth Dion
- Department of Radiology - Hotel Dieu Hospital, Assistance Publique-Hopitaux de Paris, Paris, France
- Université Paris Cité, F-75006, Paris, France
36
Yang C, Yang L, Gao GD, Zong HQ, Gao D. Assessment of artificial intelligence-aided reading in the detection of nasal bone fractures. Technol Health Care 2022; 31:1017-1025. [PMID: 36442167 DOI: 10.3233/thc-220501] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
BACKGROUND: Artificial intelligence (AI) technology is a promising diagnostic adjunct in fracture detection. However, few studies describe the improvement of clinicians' diagnostic accuracy for nasal bone fractures with the aid of AI technology. OBJECTIVE: This study aims to determine the value of the AI model in improving diagnostic accuracy for nasal bone fractures compared with manual reading. METHODS: A total of 252 consecutive patients who had undergone facial computed tomography (CT) between January 2020 and January 2021 were enrolled in this study. The presence or absence of a nasal bone fracture was determined by two experienced radiologists. An AI model based on a deep-learning algorithm was engineered, trained and validated to detect fractures on CT images. Twenty readers with varying levels of experience were invited to read the CT images with and without AI. Accuracy, sensitivity and specificity were calculated for the readers with and without the aid of the AI model. RESULTS: The deep-learning AI model had 84.78% sensitivity, 86.67% specificity, a 0.857 area under the curve (AUC) and a 0.714 Youden index in identifying nasal bone fractures. For all readers, regardless of experience, AI-aided reading had higher sensitivity ([94.00 ± 3.17]% vs [83.52 ± 10.16]%, P< 0.001), specificity ([89.75 ± 6.15]% vs [77.55 ± 11.38]%, P< 0.001) and AUC (0.92 ± 0.04 vs 0.81 ± 0.10, P< 0.001) compared with reading without AI. With the aid of AI, sensitivity, specificity and AUC were significantly improved in readers with 1–5 years or 6–10 years of experience (all P< 0.05, Table 4). For readers with 11–15 years of experience, no evidence suggested that AI could improve sensitivity and AUC (P= 0.124 and 0.152, respectively). CONCLUSION: The AI model might aid less experienced physicians and radiologists in improving their diagnostic performance for the localisation of nasal bone fractures on CT images.
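The Youden index quoted above is simply sensitivity + specificity - 1, and the study's stand-alone figures reproduce it:

```python
def youden_index(sensitivity, specificity):
    """Youden's J statistic: sensitivity + specificity - 1.

    J = 0 for a chance-level test; J = 1 for a perfect one.
    """
    return sensitivity + specificity - 1.0

# Stand-alone AI figures reported in the abstract above
j = youden_index(0.8478, 0.8667)  # close to the reported 0.714
```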
Affiliation(s)
- Cun Yang
- Department of Medical Equipment, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Lei Yang
- Department of Medical Imaging, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Guo-Dong Gao
- Department of Medical Imaging, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Hui-Qian Zong
- Department of Medical Equipment, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Duo Gao
- Department of Medical Imaging, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
37
Hayashi D, Kompel AJ, Ventre J, Ducarouge A, Nguyen T, Regnard NE, Guermazi A. Automated detection of acute appendicular skeletal fractures in pediatric patients using deep learning. Skeletal Radiol 2022; 51:2129-2139. [PMID: 35522332 DOI: 10.1007/s00256-022-04070-0] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Revised: 04/28/2022] [Accepted: 04/28/2022] [Indexed: 02/02/2023]
Abstract
OBJECTIVE We aimed to perform an external validation of an existing commercial AI software program (BoneView™) for the detection of acute appendicular fractures in pediatric patients. MATERIALS AND METHODS In our retrospective study, anonymized radiographic exams of extremities, with or without fractures, from pediatric patients (aged 2-21) were included. Three hundred exams (150 with fractures and 150 without fractures) were included, comprising 60 exams per body part (hand/wrist, elbow/upper arm, shoulder/clavicle, foot/ankle, leg/knee). The Ground Truth was defined by experienced radiologists. A deep learning algorithm interpreted the radiographs for fracture detection, and its diagnostic performance was compared against the Ground Truth, and receiver operating characteristic analysis was done. Statistical analyses included sensitivity per patient (the proportion of patients for whom all fractures were identified) and sensitivity per fracture (the proportion of fractures identified by the AI among all fractures), specificity per patient, and false-positive rate per patient. RESULTS There were 167 boys and 133 girls with a mean age of 10.8 years. For all fractures, sensitivity per patient (average [95% confidence interval]) was 91.3% [85.6, 95.3], specificity per patient was 90.0% [84.0,94.3], sensitivity per fracture was 92.5% [87.0, 96.2], and false-positive rate per patient in patients who had no fracture was 0.11. The patient-wise area under the curve was 0.93 for all fractures. AI diagnostic performance was consistently high across all anatomical locations and different types of fractures except for avulsion fractures (sensitivity per fracture 72.7% [39.0, 94.0]). CONCLUSION The BoneView™ deep learning algorithm provides high overall diagnostic performance for appendicular fracture detection in pediatric patients.
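The per-patient and per-fracture sensitivities distinguished above diverge whenever a patient has more than one fracture, since the per-patient figure credits a patient only when every fracture is found. A sketch with hypothetical counts, not the study's data:

```python
def per_patient_sensitivity(found_totals):
    """found_totals: (n_fractures_found, n_fractures_present) per
    fracture-positive patient; a patient counts as detected only if
    all of that patient's fractures were identified."""
    return sum(found == total for found, total in found_totals) / len(found_totals)

def per_fracture_sensitivity(found_totals):
    """Fraction of all fractures identified, pooled across patients."""
    return sum(f for f, _ in found_totals) / sum(t for _, t in found_totals)

# Hypothetical cohort: four fracture-positive patients, one with two fractures
counts = [(1, 1), (1, 1), (1, 2), (1, 1)]
```

With a single partially detected multi-fracture patient, the per-patient figure drops below the per-fracture one, which is why studies like the one above report both.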
Affiliation(s)
- Daichi Hayashi
- Department of Radiology, Boston University School of Medicine, 820 Harrison Avenue, FGH Building, 3rd Floor, Boston, MA, 02118, USA
- Department of Radiology, Stony Brook University Renaissance School of Medicine, HSc Level 4, Room 120, Stony Brook, NY, 11794, USA
- Andrew J Kompel
- Department of Radiology, Boston University School of Medicine, 820 Harrison Avenue, FGH Building, 3rd Floor, Boston, MA, 02118, USA
- Jeanne Ventre
- Gleamer, 117-119 Quai de Valmy, 75010, Paris, France
- Toan Nguyen
- Gleamer, 117-119 Quai de Valmy, 75010, Paris, France
- Service de Radiopédiatrie, Hôpital Armand-Trousseau, AP-HP, Médecine Sorbonne Université, 26 avenue du Docteur Arnold-Netter, 75012, Paris, France
- Nor-Eddine Regnard
- Gleamer, 117-119 Quai de Valmy, 75010, Paris, France
- Réseau d'Imagerie Sud Francilien, 2 avenue de Mousseau, 91000, Evry, France
- Ali Guermazi
- Department of Radiology, Boston University School of Medicine, 820 Harrison Avenue, FGH Building, 3rd Floor, Boston, MA, 02118, USA
- Department of Radiology, VA Boston Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA, 02132, USA
38
Nguyen T, Maarek R, Hermann AL, Kammoun A, Marchi A, Khelifi-Touhami MR, Collin M, Jaillard A, Kompel AJ, Hayashi D, Guermazi A, Le Pointe HD. Assessment of an artificial intelligence aid for the detection of appendicular skeletal fractures in children and young adults by senior and junior radiologists. Pediatr Radiol 2022; 52:2215-2226. [PMID: 36169667 DOI: 10.1007/s00247-022-05496-3] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 07/07/2022] [Accepted: 08/25/2022] [Indexed: 10/14/2022]
Abstract
BACKGROUND As the number of conventional radiographic examinations in pediatric emergency departments increases, so, too, does the number of reading errors by radiologists. OBJECTIVE The aim of this study is to investigate the ability of artificial intelligence (AI) to improve the detection of fractures by radiologists in children and young adults. MATERIALS AND METHODS A cohort of 300 anonymized radiographs performed for the detection of appendicular fractures in patients ages 2 to 21 years was collected retrospectively. The ground truth for each examination was established after an independent review by two radiologists with expertise in musculoskeletal imaging. Discrepancies were resolved by consensus with a third radiologist. Half of the 300 examinations showed at least 1 fracture. Radiographs were read by three senior pediatric radiologists and five radiology residents in the usual manner and then read again immediately after with the help of AI. RESULTS The mean sensitivity for all groups was 73.3% (110/150) without AI; it increased significantly by almost 10% (P<0.001) to 82.8% (125/150) with AI. For junior radiologists, it increased by 10.3% (P<0.001) and for senior radiologists by 8.2% (P=0.08). On average, there was no significant change in specificity (from 89.6% to 90.3% [+0.7%, P=0.28]); for junior radiologists, specificity increased from 86.2% to 87.6% (+1.4%, P=0.42) and for senior radiologists, it decreased from 95.1% to 94.9% (-0.2%, P=0.23). The stand-alone sensitivity and specificity of the AI were, respectively, 91% and 90%. CONCLUSION With the help of AI, sensitivity increased by an average of 10% without significantly decreasing specificity in fracture detection in a predominantly pediatric population.
Affiliation(s)
- Toan Nguyen
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
- Richard Maarek
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
- Anne-Laure Hermann
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
- Amina Kammoun
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
- Antoine Marchi
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
- Mohamed R Khelifi-Touhami
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
- Mégane Collin
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
- Aliénor Jaillard
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
- Andrew J Kompel
- Department of Radiology, Boston University School of Medicine, Boston, MA, USA
- Daichi Hayashi
- Department of Radiology, Boston University School of Medicine, Boston, MA, USA
- Department of Radiology, Stony Brook University Renaissance School of Medicine, Stony Brook, NY, USA
- Ali Guermazi
- Department of Radiology, Boston University School of Medicine, Boston, MA, USA
- Department of Radiology, VA Boston Healthcare System, West Roxbury, MA, USA
- Hubert Ducou Le Pointe
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
39
Nakagawa J, Fujima N, Hirata K, Tang M, Tsuneta S, Suzuki J, Harada T, Ikebe Y, Homma A, Kano S, Minowa K, Kudo K. Utility of the deep learning technique for the diagnosis of orbital invasion on CT in patients with a nasal or sinonasal tumor. Cancer Imaging 2022; 22:52. [PMID: 36138422 PMCID: PMC9502604 DOI: 10.1186/s40644-022-00492-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Accepted: 09/14/2022] [Indexed: 11/25/2022] Open
Abstract
Background In nasal or sinonasal tumors, orbital invasion beyond the periorbita is an important criterion in selecting the surgical procedure. We investigated the usefulness of a convolutional neural network (CNN)-based deep learning technique for the diagnosis of orbital invasion, using computed tomography (CT) images. Methods A total of 168 lesions with malignant nasal or sinonasal tumors were divided into a training dataset (n = 119) and a test dataset (n = 49). The final diagnosis (invasion-positive or -negative) was determined by experienced radiologists who carefully reviewed all of the CT images. In the CNN-based deep learning analysis, a slice of the square target region that included the orbital bone wall was extracted and fed into a deep-learning training session to create a diagnostic model using transfer learning with the Visual Geometry Group 16 (VGG16) model. The test dataset was subsequently evaluated by the CNN-based diagnostic model and by two other radiologists who were not specialized in head and neck radiology. Approximately 2 months after the first reading session, the two radiologists again reviewed all of the images in the test dataset, this time referring to the diagnoses provided by the trained CNN-based diagnostic model. Results The diagnostic accuracy of the CNN-based diagnostic model was 0.92, whereas the diagnostic accuracies of the two radiologists at the first reading session were 0.49 and 0.45, respectively. In the second reading session (diagnosing with the assistance of the CNN-based diagnostic model), the two radiologists' diagnostic accuracy improved markedly (to 0.94 and 1.00, respectively). Conclusion The CNN-based deep learning technique can be a useful support tool for assessing the presence of orbital invasion on CT images, especially for non-specialized radiologists.
Collapse
Affiliation(s)
- Junichi Nakagawa
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Department of Nuclear Medicine, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan; Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Minghui Tang
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Satonori Tsuneta
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan
- Jun Suzuki
- Department of Radiology, Teine Keijinkai Hospital, 1-40, Maeda 1-12, Teine-ku, Sapporo, Hokkaido, 006-8555, Japan
- Taisuke Harada
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan
- Yohei Ikebe
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan; Center for Cause of Death Investigation, Faculty of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Akihiro Homma
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15 W7, Kita-ku, Sapporo, 060-8638, Japan
- Satoshi Kano
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15 W7, Kita-ku, Sapporo, 060-8638, Japan
- Kazuyuki Minowa
- Department of Radiology, Faculty of Dental Medicine, Hokkaido University, N13 W7, Kita-ku, Sapporo, Hokkaido, 060-8586, Japan
- Kohsuke Kudo
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan; Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
40
Rohde S, Münnich N. [Artificial intelligence in orthopaedic and trauma surgery imaging]. ORTHOPADIE (HEIDELBERG, GERMANY) 2022; 51:748-756. [PMID: 35980460 DOI: 10.1007/s00132-022-04293-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 07/25/2022] [Indexed: 06/15/2023]
Abstract
Artificial intelligence (AI) is playing an increasing role in radiological imaging in orthopaedics and trauma surgery. The algorithms available to date are predominantly used in the detection of (occult) fractures and in length and angle measurements in conventional X‑ray images. However, current AI solutions also enable the analysis and pattern recognition of CT datasets, e.g. in the detection of rib or vertebral body fractures. A special application is EOS™ (ATEC Spine Group, Paris, France), which enables a 3‑D simulation of the axial skeleton and semi-automatic length and angle calculations based on a digital 2‑D X‑ray image. In this paper, the current spectrum of AI applications for orthopaedics and trauma surgery is presented and discussed.
Affiliation(s)
- Stefan Rohde
- Klinik für Radiologie und Neuroradiologie, Klinikum Dortmund gGmbH, Beurhausstr. 40, 44137, Dortmund, Deutschland.
- Fakultät für Gesundheit, Universität Witten-Herdecke, Witten, Deutschland.
- Nico Münnich
- Klinik für Radiologie und Neuroradiologie, Klinikum Dortmund gGmbH, Beurhausstr. 40, 44137, Dortmund, Deutschland
41
Assessment of performances of a deep learning algorithm for the detection of limbs and pelvic fractures, dislocations, focal bone lesions, and elbow effusions on trauma X-rays. Eur J Radiol 2022; 154:110447. [DOI: 10.1016/j.ejrad.2022.110447] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2022] [Revised: 04/29/2022] [Accepted: 07/19/2022] [Indexed: 11/23/2022]
42
Huhtanen JT, Nyman M, Doncenco D, Hamedian M, Kawalya D, Salminen L, Sequeiros RB, Koskinen SK, Pudas TK, Kajander S, Niemi P, Hirvonen J, Aronen HJ, Jafaritadi M. Deep learning accurately classifies elbow joint effusion in adult and pediatric radiographs. Sci Rep 2022; 12:11803. [PMID: 35821056 PMCID: PMC9276721 DOI: 10.1038/s41598-022-16154-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Accepted: 07/05/2022] [Indexed: 11/17/2022] Open
Abstract
Joint effusion due to elbow fractures is common among adults and children. Radiography is the most commonly used imaging procedure to diagnose elbow injuries. The purpose of the study was to investigate the diagnostic accuracy of deep convolutional neural network algorithms for joint effusion classification in pediatric and adult elbow radiographs. This retrospective study consisted of a total of 4423 radiographs acquired over a 3-year period from 2017 to 2020. Data were randomly separated into training (n = 2672), validation (n = 892) and test (n = 859) sets. Two models using VGG16 as the base architecture were trained with either the lateral projection only or all four projections (AP, lateral and obliques). Three radiologists evaluated joint effusion separately on the test set. Accuracy, precision, recall, specificity, F1 measure, Cohen’s kappa, and two-sided 95% confidence intervals were calculated. Mean patient age was 34.4 years (range 1–98) and 47% were male patients. The trained deep learning framework showed an AUC of 0.951 (95% CI 0.946–0.955) and 0.906 (95% CI 0.89–0.91) for the lateral and four-projection elbow joint images in the test set, respectively. The adult and pediatric patient groups separately showed an AUC of 0.966 and 0.924, respectively. Radiologists showed an average accuracy, sensitivity, specificity, precision, F1 score, and AUC of 92.8%, 91.7%, 93.6%, 91.07%, 91.4%, and 92.6%, respectively. There were no statistically significant differences between the AUCs of the deep learning model and the radiologists (p > 0.05). The model trained on the lateral dataset resulted in a higher AUC than the model trained on the four-projection dataset. Using deep learning, it is possible to achieve expert-level diagnostic accuracy in elbow joint effusion classification in pediatric and adult radiographs. The deep learning model used in this study can classify joint effusion in radiographs and can be used in image interpretation as an aid for radiologists.
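Several of the studies in this list report the same family of contingency-table metrics (accuracy, sensitivity/recall, specificity, precision, F1, Cohen's kappa). As a quick reference for how these are defined, a minimal sketch; the counts below are illustrative only, not any study's actual data:

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from a 2x2 contingency table."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)      # recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)        # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_obs = accuracy
    p_exp = (((tp + fp) / total) * ((tp + fn) / total)
             + ((fn + tn) / total) * ((fp + tn) / total))
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "f1": f1, "kappa": kappa}

# Illustrative counts (hypothetical reader-vs-reference table):
m = binary_metrics(tp=410, fp=45, fn=38, tn=407)
```

The same function applies whether the "rater" is an AI model or a human reader; only the contingency counts change.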
Affiliation(s)
- Jarno T Huhtanen
- Faculty of Health and Well-Being, Turku University of Applied Sciences, Turku, Finland; Department of Radiology, University of Turku, Turku, Finland
- Mikko Nyman
- Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland
- Dorin Doncenco
- Faculty of Engineering and Business, Turku University of Applied Sciences, Turku, Finland
- Maral Hamedian
- Faculty of Engineering and Business, Turku University of Applied Sciences, Turku, Finland
- Davis Kawalya
- Faculty of Engineering and Business, Turku University of Applied Sciences, Turku, Finland
- Leena Salminen
- Department of Nursing Science, University of Turku and Director of Nursing (Part-Time), Turku University Hospital, Turku, Finland
- Tomi K Pudas
- Terveystalo Inc, Jaakonkatu 3, Helsinki, Finland
- Sami Kajander
- Department of Radiology, University of Turku, Turku, Finland
- Pekka Niemi
- Department of Radiology, University of Turku, Turku, Finland
- Jussi Hirvonen
- Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland
- Hannu J Aronen
- Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland
- Mojtaba Jafaritadi
- Faculty of Engineering and Business, Turku University of Applied Sciences, Turku, Finland
43
Canoni-Meynet L, Verdot P, Danner A, Calame P, Aubry S. Added value of an artificial intelligence solution for fracture detection in the radiologist's daily trauma emergencies workflow. Diagn Interv Imaging 2022; 103:594-600. [PMID: 35780054 DOI: 10.1016/j.diii.2022.06.004] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 05/25/2022] [Accepted: 06/15/2022] [Indexed: 12/30/2022]
Abstract
PURPOSE The main objective of this study was to compare radiologists' performance without and with artificial intelligence (AI) assistance for the detection of bone fractures in trauma emergencies. MATERIALS AND METHODS Five hundred consecutive patients (232 women, 268 men) with a mean age of 37 ± 28 (SD) years (age range: 0.25-99 years) were retrospectively included. Three radiologists independently interpreted radiographs without, and then with, AI assistance after a minimum 1-month washout period. The ground truth was determined by consensus reading between musculoskeletal radiologists and AI results. Patient-wise sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) for fracture detection and reading time were compared between unassisted and AI-assisted readings. Performances were also assessed by receiver operating characteristic (ROC) curves. RESULTS AI improved the patient-wise sensitivity of radiologists for fracture detection by 20% (95% confidence interval [CI]: 14-26; P < 0.001) and their specificity by 0.6% (95% CI: -0.9 to 1.5; P = 0.47). It increased the PPV by 2.9% (95% CI: 0.4-5.4; P = 0.08) and the NPV by 10% (95% CI: 6.8-13.3; P < 0.001). With AI, the areas under the ROC curve for fracture detection of the three readers increased by 10.6%, 10.2% and 9.9%, respectively. Their mean reading times per patient decreased by 10, 16 and 12 s, respectively (P < 0.001). CONCLUSIONS AI-assisted radiologists performed better and faster than unassisted radiologists. AI is of great aid to radiologists in daily trauma emergencies and could reduce the cost of missed fractures.
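The patient-wise sensitivity estimates and confidence intervals reported in studies like this one are commonly obtained by bootstrapping (as in the head article above). A minimal percentile-bootstrap sketch; the detection flags and counts are hypothetical, chosen only to illustrate the mechanics:

```python
import random

def bootstrap_ci(outcomes, n_boot=10000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for a proportion such as per-patient
    sensitivity. `outcomes` holds one flag per positive patient:
    1 = fracture detected, 0 = fracture missed."""
    rng = random.Random(seed)
    n = len(outcomes)
    # Resample patients with replacement, recompute the proportion each time
    stats = sorted(sum(rng.choices(outcomes, k=n)) / n
                   for _ in range(n_boot))
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return sum(outcomes) / n, (lo, hi)

# Hypothetical example: 250 fracture-positive patients, 230 detected
flags = [1] * 230 + [0] * 20
point, (lo, hi) = bootstrap_ci(flags)
```

The same resampling applies to specificity (over negative patients) or accuracy (over all patients); only the outcome flags change.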
Affiliation(s)
- Pierre Verdot
- Department of Radiology, CHU de Besançon, Besançon 25030, France
- Alexis Danner
- Department of Radiology, CHU de Besançon, Besançon 25030, France
- Paul Calame
- Department of Radiology, CHU de Besançon, Besançon 25030, France
- Sébastien Aubry
- Department of Radiology, CHU de Besançon, Besançon 25030, France; Nanomedicine Laboratory EA4662, Université de Franche-Comté, Besançon 25030, France
44
Zhang X, Yang Y, Shen YW, Zhang KR, Jiang ZK, Ma LT, Ding C, Wang BY, Meng Y, Liu H. Diagnostic accuracy and potential covariates of artificial intelligence for diagnosing orthopedic fractures: a systematic literature review and meta-analysis. Eur Radiol 2022; 32:7196-7216. [PMID: 35754091 DOI: 10.1007/s00330-022-08956-4] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 05/07/2022] [Accepted: 06/08/2022] [Indexed: 02/05/2023]
Abstract
OBJECTIVES To systematically quantify the diagnostic accuracy and identify potential covariates affecting the performance of artificial intelligence (AI) in diagnosing orthopedic fractures. METHODS PubMed, Embase, Web of Science, and the Cochrane Library were systematically searched for studies on AI applications in diagnosing orthopedic fractures from inception to September 29, 2021. Pooled sensitivity and specificity and the area under the receiver operating characteristic curve (AUC) were obtained. This study was registered in the PROSPERO database prior to initiation (CRD42021254618). RESULTS Thirty-nine studies were eligible for quantitative analysis. The overall pooled AUC, sensitivity, and specificity were 0.96 (95% CI 0.94-0.98), 90% (95% CI 87-92%), and 92% (95% CI 90-94%), respectively. In subgroup analyses, multicenter studies yielded higher sensitivity (92% vs. 88%) and specificity (94% vs. 91%) than single-center studies. AI demonstrated higher sensitivity with transfer learning (with vs. without: 92% vs. 87%) and with data augmentation (with vs. without: 92% vs. 87%). Utilizing plain X-rays as input images for AI achieved results comparable to CT (AUC 0.96 vs. 0.96). Moreover, AI achieved results comparable to humans (AUC 0.97 vs. 0.97) and better results than non-expert human readers (AUC 0.98 vs. 0.96; sensitivity 95% vs. 88%). CONCLUSIONS AI demonstrated high accuracy in diagnosing orthopedic fractures from medical images. Larger-scale studies with higher design quality are needed to validate our findings. KEY POINTS • Multicenter study design, application of transfer learning, and data augmentation are closely related to improving the performance of artificial intelligence models in diagnosing orthopedic fractures. • Utilizing plain X-rays as input images for AI to diagnose fractures achieved results comparable to CT (AUC 0.96 vs. 0.96). • AI achieved results comparable to humans (AUC 0.97 vs. 0.97) but was superior to non-expert human readers (AUC 0.98 vs. 0.96; sensitivity 95% vs. 88%) in diagnosing fractures.
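The bivariate hierarchical models typically used for pooling in diagnostic meta-analyses are fairly involved; as a simplified illustration of the underlying idea, here is a univariate DerSimonian-Laird random-effects pooling of per-study sensitivities on the logit scale. The per-study counts are invented for the example, not taken from the review:

```python
import math

def pool_logit_dl(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions on the
    logit scale -- a simplified stand-in for the bivariate hierarchical
    models used in diagnostic test accuracy meta-analysis."""
    # 0.5 continuity correction guards against zero cells
    y, v = [], []
    for e, n in zip(events, totals):
        p = (e + 0.5) / (n + 1.0)
        y.append(math.log(p / (1 - p)))                  # logit
        v.append(1.0 / (e + 0.5) + 1.0 / (n - e + 0.5))  # within-study var
    w = [1.0 / vi for vi in v]
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # DL moment estimate of between-study variance tau^2
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    w_re = [1.0 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return 1.0 / (1.0 + math.exp(-mu))  # back-transform to a proportion

# Hypothetical per-study true positives / fracture-positive cases:
pooled_sens = pool_logit_dl([90, 180, 45], [100, 200, 50])
```

Real reviews model sensitivity and specificity jointly to respect their correlation across thresholds; this univariate sketch only shows why pooling is done on the logit scale with study-level weights.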
Affiliation(s)
- Xiang Zhang
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
- Yi Yang
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
- Yi-Wei Shen
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
- Ke-Rui Zhang
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
- Ze-Kun Jiang
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, 610000, China
- Li-Tai Ma
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
- Chen Ding
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
- Bei-Yu Wang
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
- Yang Meng
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
- Hao Liu
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
45
Shelmerdine SC, White RD, Liu H, Arthurs OJ, Sebire NJ. Artificial intelligence for radiological paediatric fracture assessment: a systematic review. Insights Imaging 2022; 13:94. [PMID: 35657439 PMCID: PMC9166920 DOI: 10.1186/s13244-022-01234-3] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 05/12/2022] [Indexed: 12/16/2022] Open
Abstract
BACKGROUND The majority of research and commercial efforts have focussed on the use of artificial intelligence (AI) for fracture detection in adults, despite the greater long-term clinical and medicolegal implications of missed fractures in children. The objective of this study was to assess the available literature regarding the diagnostic performance of AI tools for paediatric fracture assessment on imaging and, where available, how this compares with the performance of human readers. MATERIALS AND METHODS MEDLINE, Embase and Cochrane Library databases were queried for studies published between 1 January 2011 and 2021 using terms related to 'fracture', 'artificial intelligence', 'imaging' and 'children'. Risk of bias was assessed using a modified QUADAS-2 tool. Descriptive statistics for diagnostic accuracies were collated. RESULTS Nine eligible articles from 362 publications were included, with most (8/9) evaluating fracture detection on radiographs and the elbow being the most common body part. Nearly all articles used data derived from a single institution and used deep learning methodology, with only a few (2/9) performing external validation. Accuracy rates generated by AI ranged from 88.8% to 97.9%. In two of the three articles where AI performance was compared to human readers, sensitivity rates for AI were marginally higher, but this was not statistically significant. CONCLUSIONS Wide heterogeneity in the literature, with limited information on algorithm performance on external datasets, makes it difficult to understand how such tools may generalise to a wider paediatric population. Further research using a multicentric dataset with real-world evaluation would help to better understand the impact of these tools.
Affiliation(s)
- Susan C. Shelmerdine
- Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, UK; Great Ormond Street Hospital for Children, UCL Great Ormond Street Institute of Child Health, London, UK; Great Ormond Street Hospital NIHR Biomedical Research Centre, London, UK; Department of Clinical Radiology, St. George's Hospital, London, UK
- Richard D. White
- Department of Radiology, University Hospital of Wales, Cardiff, UK
- Hantao Liu
- School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Owen J. Arthurs
- Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, UK; Great Ormond Street Hospital for Children, UCL Great Ormond Street Institute of Child Health, London, UK; Great Ormond Street Hospital NIHR Biomedical Research Centre, London, UK
- Neil J. Sebire
- Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, UK; Great Ormond Street Hospital for Children, UCL Great Ormond Street Institute of Child Health, London, UK; Great Ormond Street Hospital NIHR Biomedical Research Centre, London, UK
46
Fully automated deep learning for knee alignment assessment in lower extremity radiographs: a cross-sectional diagnostic study. Skeletal Radiol 2022; 51:1249-1259. [PMID: 34773485 DOI: 10.1007/s00256-021-03948-9] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Revised: 10/27/2021] [Accepted: 10/27/2021] [Indexed: 02/02/2023]
Abstract
OBJECTIVES Knee alignment and leg length discrepancy are currently measured manually from standing long-leg radiographs (LLRs), a process that is both time consuming and poorly reproducible. The aim was to assess the performance of commercially available AI software by comparing its outputs with manually performed measurements. MATERIALS AND METHODS The AI was trained on over 15,000 radiographs to measure various clinical angles and lengths from LLRs. We performed a retrospective single-center analysis on 295 LLRs obtained between 2015 and 2020 from male and female patients over 18 years of age. AI and expert measurements were performed independently. The Kellgren-Lawrence score and reading time were assessed. All measurements were compared, and non-inferiority, mean absolute deviation (sMAD), and intra-class correlation (ICC) were calculated. RESULTS A total of 295 LLRs from 284 patients (mean age, 65 years (range 18-90); 97 (34.2%) men) were analyzed. The AI model produced outputs on 98.0% of the LLRs. Manual annotations were considered 100% accurate. For each measurement, its divergence was calculated, resulting in an overall accuracy of 89.2% when comparing the AI outputs with the manual measurements. AI vs. the mean observer revealed an sMAD between 0.39° and 2.19° for angles and 1.45-5.00 mm for lengths. AI showed good reliability in all lengths and angles (ICC ≥ 0.87). Non-inferiority of AI compared to the mean observer revealed an equivalence index (γ) between 0.54° and 3.03° for angles and -0.70 to 1.95 mm for lengths. On average, AI was 130 s faster than the clinicians. CONCLUSION Automated knee alignment and length measurements produced with an AI tool are reproducible and accurate, with time savings compared with manually acquired measurements.
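The intra-class correlation used for reader-agreement analyses like this one can be computed directly from a two-way ANOVA decomposition. A minimal ICC(2,1) (two-way random effects, absolute agreement, single rater) sketch; the AI-vs-reader angle pairs below are invented for illustration:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is a list of per-subject lists, one value per rater.
    """
    n = len(ratings)         # subjects
    k = len(ratings[0])      # raters
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    rater_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in subj_means)
    ss_cols = n * sum((m - grand) ** 2 for m in rater_means)
    ss_tot = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_err = ss_tot - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subject mean square
    msc = ss_cols / (k - 1)                 # between-rater mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical paired angle measurements (AI vs. human reader), degrees:
pairs = [[178.2, 178.0], [171.5, 172.1], [183.0, 182.4],
         [176.8, 177.0], [169.9, 170.6], [181.2, 180.9]]
icc = icc_2_1(pairs)
```

When the rater-to-rater offsets are small relative to between-subject spread, as in these invented pairs, the ICC approaches 1, which is the pattern the study reports (ICC ≥ 0.87).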
47
Kuo RYL, Harrison C, Curran TA, Jones B, Freethy A, Cussons D, Stewart M, Collins GS, Furniss D. Artificial Intelligence in Fracture Detection: A Systematic Review and Meta-Analysis. Radiology 2022; 304:50-62. [PMID: 35348381 DOI: 10.1148/radiol.211785] [Citation(s) in RCA: 67] [Impact Index Per Article: 33.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Background Patients with fractures are a common emergency presentation and may be misdiagnosed at radiologic imaging. An increasing number of studies apply artificial intelligence (AI) techniques to fracture detection as an adjunct to clinician diagnosis. Purpose To perform a systematic review and meta-analysis comparing the diagnostic performance in fracture detection between AI and clinicians in peer-reviewed publications and the gray literature (ie, articles published on preprint repositories). Materials and Methods A search of multiple electronic databases between January 2018 and July 2020 (updated June 2021) was performed that included any primary research studies that developed and/or validated AI for the purposes of fracture detection at any imaging modality and excluded studies that evaluated image segmentation algorithms. Meta-analysis with a hierarchical model to calculate pooled sensitivity and specificity was used. Risk of bias was assessed by using a modified Prediction Model Study Risk of Bias Assessment Tool, or PROBAST, checklist. Results Included for analysis were 42 studies, with 115 contingency tables extracted from 32 studies (55 061 images). Thirty-seven studies identified fractures on radiographs and five studies identified fractures on CT images. For internal validation test sets, the pooled sensitivity was 92% (95% CI: 88, 93) for AI and 91% (95% CI: 85, 95) for clinicians, and the pooled specificity was 91% (95% CI: 88, 93) for AI and 92% (95% CI: 89, 92) for clinicians. For external validation test sets, the pooled sensitivity was 91% (95% CI: 84, 95) for AI and 94% (95% CI: 90, 96) for clinicians, and the pooled specificity was 91% (95% CI: 81, 95) for AI and 94% (95% CI: 91, 95) for clinicians. There were no statistically significant differences between clinician and AI performance. There were 22 of 42 (52%) studies that were judged to have high risk of bias. 
Meta-regression identified multiple sources of heterogeneity in the data, including risk of bias and fracture type. Conclusion Artificial intelligence (AI) and clinicians had comparable reported diagnostic performance in fracture detection, suggesting that AI technology holds promise as a diagnostic adjunct in future clinical practice. Clinical trial registration no. CRD42020186641 © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Cohen and McInnes in this issue.
Affiliation(s)
- Rachel Y L Kuo
- From the Nuffield Department of Orthopedics, Rheumatology and Musculoskeletal Sciences, Botnar Research Centre, Old Road Headington, Oxford OX3 7LD, UK (R.Y.L.K., C.H., M.S., G.S.C., D.F.); Department of Plastic Surgery, John Radcliffe Hospital, Oxford, UK (T.A.C., A.F.); Department of Vascular Surgery, Royal Berkshire Hospital, Reading, UK (B.J.); Department of Plastic Surgery, Stoke Mandeville Hospital, Aylesbury, Buckinghamshire UK (D.C.); and UK EQUATOR Center, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford Centre for Statistics in Medicine, Oxford UK (G.S.C.)
- Conrad Harrison
- Terry-Ann Curran
- Benjamin Jones
- Alexander Freethy
- David Cussons
- Max Stewart
- Gary S Collins
- Dominic Furniss
48
Guermazi A, Tannoury C, Kompel AJ, Murakami AM, Ducarouge A, Gillibert A, Li X, Tournier A, Lahoud Y, Jarraya M, Lacave E, Rahimi H, Pourchot A, Parisien RL, Merritt AC, Comeau D, Regnard NE, Hayashi D. Improving Radiographic Fracture Recognition Performance and Efficiency Using Artificial Intelligence. Radiology 2021; 302:627-636. [PMID: 34931859 DOI: 10.1148/radiol.210937] [Citation(s) in RCA: 69] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Background Missed fractures are a common cause of diagnostic discrepancy between the initial radiographic interpretation and the final read by board-certified radiologists. Purpose To assess the effect of artificial intelligence (AI) assistance on the diagnostic performance of physicians for fractures on radiographs. Materials and Methods This retrospective diagnostic study used a multi-reader, multi-case methodology based on an external multicenter data set of 480 examinations with at least 60 examinations per body region (foot and ankle, knee and leg, hip and pelvis, hand and wrist, elbow and arm, shoulder and clavicle, rib cage, and thoracolumbar spine) obtained between July 2020 and January 2021. Fracture prevalence was set at 50%. The ground truth was determined by two musculoskeletal radiologists, with discrepancies resolved by a third. Twenty-four readers (radiologists, orthopedists, emergency physicians, physician assistants, rheumatologists, and family physicians) were presented with the whole validation data set (n = 480), with and without AI assistance, with a minimum 1-month washout period. The primary analysis had to demonstrate the superiority of sensitivity per patient and the noninferiority of specificity per patient at a -3% margin with AI aid. Stand-alone AI performance was also assessed using receiver operating characteristic curves. Results A total of 480 patients were included (mean age, 59 years ± 16 [standard deviation]; 327 women). The sensitivity per patient was 10.4% higher (95% CI: 6.9, 13.9; P < .001 for superiority) with AI aid (4331 of 5760 readings, 75.2%) than without AI (3732 of 5760 readings, 64.8%). The specificity per patient with AI aid (5504 of 5760 readings, 95.6%) was noninferior to that without AI aid (5217 of 5760 readings, 90.6%), with a difference of +5.0% (95% CI: +2.0, +8.0; P = .001 for noninferiority). AI shortened the average reading time by 6.3 seconds per examination (95% CI: -12.5, -0.1; P = .046). The per-patient sensitivity gain was significant in all regions (+8.0% to +16.2%; P < .05) except the shoulder and clavicle and the spine (+4.2% and +2.6%; P = .12 and P = .52, respectively). Conclusion AI assistance improved the sensitivity and may even improve the specificity of fracture detection by radiologists and nonradiologists, without lengthening reading time. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Link and Pedoia in this issue.
Affiliation(s)
- Ali Guermazi
- From the Departments of Radiology (A. Guermazi, A.J.K., A.M.M., H.R., A.C.M., D.H.), Orthopaedic Surgery (C.T., X.L.), and Family Medicine (D.C.), Boston University School of Medicine, Boston, Mass; Department of Radiology, VA Boston Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A. Guermazi); Gleamer, Paris, France (A.D., A.T., E.L., A.P., N.E.); Department of Biostatistics, CHU Rouen, Rouen, France (A. Gillibert); Department of Rheumatology, Harvard Vanguard Medical Associates, Braintree, Mass (Y.L.); Department of Radiology, Musculoskeletal Division, Massachusetts General Hospital, Harvard Medical School, Boston, Mass 02114 (M.J.); Sorbonne Université, CNRS, Institut des Systèmes Intelligents et de Robotique, Paris, France (A.P.); Department of Orthopaedic Surgery, The Mount Sinai Hospital, New York, NY (R.L.P.); University Health Services and Primary Care Sports Medicine, Boston College, Chestnut Hill, Mass (D.C.); and Department of Radiology, Stony Brook University Renaissance School of Medicine, Stony Brook, NY (D.H.)
- Chadi Tannoury
- Andrew J Kompel
- Akira M Murakami
- Alexis Ducarouge
- André Gillibert
- Xinning Li
- Antoine Tournier
- Youmna Lahoud
- Mohamed Jarraya
- Elise Lacave
- Hamza Rahimi
- Aloïs Pourchot
- Robert L Parisien
- Alexander C Merritt
- Douglas Comeau
- Nor-Eddine Regnard
- Daichi Hayashi
49
Link TM, Pedoia V. Using AI to Improve Radiographic Fracture Detection. Radiology 2021; 302:637-638. [PMID: 34931864 PMCID: PMC8893176 DOI: 10.1148/radiol.212364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Affiliation(s)
- Thomas M. Link
- From the Department of Radiology and Biomedical Imaging, University of California, San Francisco, 400 Parnassus Ave, A-367, San Francisco, CA 94143
- Valentina Pedoia
50
Prager G, Oliver G, Darbyshire D, Jafar AJN, Body R, Carley SD, Reynard C. Journal Update. Emerg Med J 2021; 38:734-736. [PMID: 34413136 PMCID: PMC8380876 DOI: 10.1136/emermed-2021-211890] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Accepted: 07/29/2021] [Indexed: 11/20/2022]
Affiliation(s)
- Govind Oliver
- Ysbyty Gwynedd Emergency Department, Betsi Cadwaladr University Health Board, Bangor, UK
- Daniel Darbyshire
- Lancaster Medical School, Lancaster University, Lancaster, Lancashire, UK
- Emergency Department, Salford Royal Hospitals NHS Trust, Salford, UK
- Richard Body
- Division of Cardiovascular Sciences, The University of Manchester, Manchester, UK
- Emergency Department, Manchester University NHS Foundation Trust, Manchester, UK
- Simon David Carley
- Emergency Department, Manchester University NHS Foundation Trust, Manchester, UK
- Postgraduate Medicine, Manchester Metropolitan University, Manchester, UK
- Charles Reynard
- Division of Cardiovascular Sciences, The University of Manchester, Manchester, UK
- Emergency Department, Manchester University NHS Foundation Trust, Manchester, UK