101
D'Angelo T, Caudo D, Blandino A, Albrecht MH, Vogl TJ, Gruenewald LD, Gaeta M, Yel I, Koch V, Martin SS, Lenga L, Muscogiuri G, Sironi S, Mazziotti S, Booz C. Artificial intelligence, machine learning and deep learning in musculoskeletal imaging: Current applications. J Clin Ultrasound 2022; 50:1414-1431. [PMID: 36069404] [DOI: 10.1002/jcu.23321]
Abstract
Artificial intelligence is rapidly expanding across all technological fields, and the medical field, especially diagnostic imaging, has shown some of the greatest developmental potential. Artificial intelligence aims to simulate human intelligence in the management of complex problems. This review describes the technical background of artificial intelligence, machine learning, and deep learning. The first section illustrates the general potential of artificial intelligence applications in the context of request management, data acquisition, image reconstruction, archiving, and communication systems. The second section discusses the prospects of dedicated tools for segmentation, lesion detection, automatic diagnosis, and classification of musculoskeletal disorders.
102
Hayashi D, Kompel AJ, Ventre J, Ducarouge A, Nguyen T, Regnard NE, Guermazi A. Automated detection of acute appendicular skeletal fractures in pediatric patients using deep learning. Skeletal Radiol 2022; 51:2129-2139. [PMID: 35522332] [DOI: 10.1007/s00256-022-04070-0]
Abstract
OBJECTIVE We aimed to perform an external validation of an existing commercial AI software program (BoneView™) for the detection of acute appendicular fractures in pediatric patients. MATERIALS AND METHODS In our retrospective study, anonymized radiographic exams of extremities, with or without fractures, from pediatric patients (aged 2-21) were included. Three hundred exams (150 with fractures and 150 without fractures) were included, comprising 60 exams per body part (hand/wrist, elbow/upper arm, shoulder/clavicle, foot/ankle, leg/knee). The ground truth was defined by experienced radiologists. A deep learning algorithm interpreted the radiographs for fracture detection, and its diagnostic performance was compared against the ground truth using receiver operating characteristic analysis. Statistical analyses included sensitivity per patient (the proportion of patients for whom all fractures were identified), sensitivity per fracture (the proportion of fractures identified by the AI among all fractures), specificity per patient, and false-positive rate per patient. RESULTS There were 167 boys and 133 girls with a mean age of 10.8 years. For all fractures, sensitivity per patient (average [95% confidence interval]) was 91.3% [85.6, 95.3], specificity per patient was 90.0% [84.0, 94.3], sensitivity per fracture was 92.5% [87.0, 96.2], and the false-positive rate per patient among patients without fractures was 0.11. The patient-wise area under the curve was 0.93 for all fractures. AI diagnostic performance was consistently high across all anatomical locations and fracture types except for avulsion fractures (sensitivity per fracture 72.7% [39.0, 94.0]). CONCLUSION The BoneView™ deep learning algorithm provides high overall diagnostic performance for appendicular fracture detection in pediatric patients.
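The per-patient and per-fracture statistics defined in this abstract can be made concrete with a short sketch; the exams, labels, and detection sets below are purely hypothetical and are not the study's data:

```python
# Toy illustration of the evaluation statistics (hypothetical data).
# Each exam lists the set of true fractures and the set of AI detections.
exams = [
    {"truth": {"radius"}, "detected": {"radius"}},        # all fractures found
    {"truth": {"ulna", "radius"}, "detected": {"ulna"}},  # one fracture missed
    {"truth": set(), "detected": set()},                  # true negative
    {"truth": set(), "detected": {"tibia"}},              # false positive
]

with_fx = [e for e in exams if e["truth"]]
without_fx = [e for e in exams if not e["truth"]]

# Sensitivity per patient: fracture patients for whom ALL fractures were found.
sens_patient = sum(e["truth"] <= e["detected"] for e in with_fx) / len(with_fx)

# Sensitivity per fracture: individual fractures detected among all fractures.
all_fx = sum(len(e["truth"]) for e in with_fx)
found_fx = sum(len(e["truth"] & e["detected"]) for e in with_fx)
sens_fracture = found_fx / all_fx

# Specificity per patient: fracture-free patients with no detections at all.
spec_patient = sum(not e["detected"] for e in without_fx) / len(without_fx)

# False-positive rate per patient, among patients without fractures.
fp_rate = sum(len(e["detected"] - e["truth"]) for e in without_fx) / len(without_fx)

print(sens_patient, sens_fracture, spec_patient, fp_rate)
```

On this toy cohort the per-patient sensitivity (0.5) is lower than the per-fracture sensitivity (2/3), which is exactly why the study reports both.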
103
Dankelman LHM, Schilstra S, IJpma FFA, Doornberg JN, Colaris JW, Verhofstad MHJ, Wijffels MME, Prijs J. Artificial intelligence fracture recognition on computed tomography: review of literature and recommendations. Eur J Trauma Emerg Surg 2022; 49:681-691. [PMID: 36284017] [PMCID: PMC10175338] [DOI: 10.1007/s00068-022-02128-1]
Abstract
Purpose
The use of computed tomography (CT) for fracture assessment is time-consuming and challenging, and suffers from poor inter-surgeon reliability. Convolutional neural networks (CNNs), a subset of artificial intelligence (AI), may overcome these shortcomings and reduce the clinical burden of detecting and classifying fractures. The aim of this review was to summarize the literature on CNNs for the detection and classification of fractures on CT scans, focusing on their accuracy, and to evaluate their beneficial role in daily practice.
Methods
A literature search was performed according to the PRISMA statement; the Embase, Medline ALL, Web of Science Core Collection, Cochrane Central Register of Controlled Trials, and Google Scholar databases were searched. Studies were eligible when they described the use of AI for the detection of fractures on CT scans. Quality assessment was performed with a modified version of the methodologic index for nonrandomized studies (MINORS), using a seven-item checklist. Performance of AI was defined as accuracy, F1-score, and area under the curve (AUC).
Results
Of the 1140 identified studies, 17 were included. Accuracy ranged from 69% to 99%, the F1-score from 0.35 to 0.94, and the AUC from 0.77 to 0.95. Based on ten studies, CNNs showed similar or improved diagnostic accuracy compared with clinical evaluation alone.
Conclusions
CNNs are applicable for the detection and classification of fractures on CT scans and can improve automated and clinician-aided diagnostics. Further research should focus on the additional value of CNNs applied to CT scans in daily clinical practice.
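Several of the studies summarized here report the area under the curve (AUC). It may help to recall that AUC equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one; a minimal sketch with made-up scores:

```python
# AUC via the Mann-Whitney formulation: the fraction of positive/negative
# score pairs ranked correctly, counting ties as half. Scores are hypothetical.
def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

positives = [0.9, 0.8, 0.75, 0.4]  # model scores for fracture cases
negatives = [0.6, 0.3, 0.2, 0.1]   # model scores for non-fracture cases
print(auc(positives, negatives))   # 0.9375
```

This pairwise definition is threshold-free, which is why AUC is comparable across studies that chose different operating points.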
104
Meena T, Roy S. Bone Fracture Detection Using Deep Supervised Learning from Radiological Images: A Paradigm Shift. Diagnostics (Basel) 2022; 12:2420. [PMID: 36292109] [PMCID: PMC9600559] [DOI: 10.3390/diagnostics12102420]
Abstract
Bone diseases are common and can result in various musculoskeletal conditions. An estimated 1.71 billion people suffer from musculoskeletal problems worldwide. Femoral neck injuries, knee osteoarthritis, and other fractures are very common bone conditions, and their rate is expected to double in the next 30 years. Proper and timely diagnosis and treatment of patients with fractures are therefore crucial. In contrast, missed fractures are a common diagnostic failure in accident and emergency settings, causing complications and delays in patients' treatment and care. Artificial intelligence (AI) and, more specifically, deep learning (DL) are now receiving significant attention for assisting radiologists in bone fracture detection, and DL can be widely applied in medical image analysis. Studies in traumatology and orthopaedics have shown the use and potential of DL in diagnosing fractures and diseases from radiographs. In this systematic review, we provide an overview of the use of DL in bone imaging to help radiologists detect various abnormalities, particularly fractures. We also discuss the challenges and problems faced by DL-based methods and the future of DL in bone imaging.
105
Nguyen T, Maarek R, Hermann AL, Kammoun A, Marchi A, Khelifi-Touhami MR, Collin M, Jaillard A, Kompel AJ, Hayashi D, Guermazi A, Le Pointe HD. Assessment of an artificial intelligence aid for the detection of appendicular skeletal fractures in children and young adults by senior and junior radiologists. Pediatr Radiol 2022; 52:2215-2226. [PMID: 36169667] [DOI: 10.1007/s00247-022-05496-3]
Abstract
BACKGROUND As the number of conventional radiographic examinations in pediatric emergency departments increases, so, too, does the number of reading errors by radiologists. OBJECTIVE The aim of this study is to investigate the ability of artificial intelligence (AI) to improve the detection of fractures by radiologists in children and young adults. MATERIALS AND METHODS A cohort of 300 anonymized radiographs performed for the detection of appendicular fractures in patients ages 2 to 21 years was collected retrospectively. The ground truth for each examination was established after an independent review by two radiologists with expertise in musculoskeletal imaging. Discrepancies were resolved by consensus with a third radiologist. Half of the 300 examinations showed at least 1 fracture. Radiographs were read by three senior pediatric radiologists and five radiology residents in the usual manner and then read again immediately after with the help of AI. RESULTS The mean sensitivity for all groups was 73.3% (110/150) without AI; it increased significantly by almost 10% (P<0.001) to 82.8% (125/150) with AI. For junior radiologists, it increased by 10.3% (P<0.001) and for senior radiologists by 8.2% (P=0.08). On average, there was no significant change in specificity (from 89.6% to 90.3% [+0.7%, P=0.28]); for junior radiologists, specificity increased from 86.2% to 87.6% (+1.4%, P=0.42) and for senior radiologists, it decreased from 95.1% to 94.9% (-0.2%, P=0.23). The stand-alone sensitivity and specificity of the AI were, respectively, 91% and 90%. CONCLUSION With the help of AI, sensitivity increased by an average of 10% without significantly decreasing specificity in fracture detection in a predominantly pediatric population.
106
Hill BG, Krogue JD, Jevsevar DS, Schilling PL. Deep Learning and Imaging for the Orthopaedic Surgeon: How Machines "Read" Radiographs. J Bone Joint Surg Am 2022; 104:1675-1686. [PMID: 35867718] [DOI: 10.2106/jbjs.21.01387]
Abstract
➤ In the not-so-distant future, orthopaedic surgeons will be exposed to machines that begin to automatically "read" medical imaging studies using a technology called deep learning. ➤ Deep learning has demonstrated remarkable progress in the analysis of medical imaging across a range of modalities that are commonly used in orthopaedics, including radiographs, computed tomographic scans, and magnetic resonance imaging scans. ➤ There is a growing body of evidence showing clinical utility for deep learning in musculoskeletal radiography, as evidenced by studies that use deep learning to achieve an expert or near-expert level of performance for the identification and localization of fractures on radiographs. ➤ Deep learning is currently in the very early stages of entering the clinical setting, involving validation and proof-of-concept studies for automated medical image interpretation. ➤ The success of deep learning in the analysis of medical imaging has been propelling the field forward so rapidly that now is the time for surgeons to pause and understand how this technology works at a conceptual level, before (not after) the technology ends up in front of us and our patients. That is the purpose of this article.
107
Artificial Intelligence in Orthopedic Radiography Analysis: A Narrative Review. Diagnostics (Basel) 2022; 12:2235. [PMID: 36140636] [PMCID: PMC9498096] [DOI: 10.3390/diagnostics12092235]
Abstract
Artificial intelligence (AI) in medicine is a rapidly growing field. In orthopedics, the clinical implementations of AI have not yet reached their full potential. Deep learning algorithms have shown promising results on computed radiographs for fracture detection, classification of osteoarthritis, bone age assessment, and automated measurements of the lower extremities. Studies investigating the performance of AI compared with trained human readers often show equal or better results, although human validation remains indispensable at current standards. The objective of this narrative review is to give an overview of AI in medicine and to summarize the current applications of AI in orthopedic radiography imaging. Because AI software and study designs differ widely, it is difficult to find a clear structure in this field. To produce more homogeneous studies, open-source access to AI software code and a consensus on study design should be aimed for.
108
Koska OI, Çilengir AH, Uluç ME, Yücel A, Tosun Ö. All-star approach to a small medical imaging dataset: combined deep, transfer, and classical machine learning approaches for the determination of radial head fractures. Acta Radiol 2022; 64:1476-1483. [DOI: 10.1177/02841851221122424]
Abstract
BACKGROUND Radial head fractures are often evaluated in emergency departments and can easily be missed. Given the high miss rate, automated or semi-automated detection methods that help physicians may be valuable. PURPOSE To evaluate the accuracy of combined deep, transfer, and classical machine learning approaches on a small dataset for the determination of radial head fractures. MATERIAL AND METHODS A total of 48 patients with radial head fractures and 56 patients without fractures on elbow radiographs were retrospectively evaluated. The input images were obtained by cropping anteroposterior elbow radiographs around a center point on the radial head. For fracture determination, an algorithm was developed based on feature extraction using distinct prototypes of pretrained networks (VGG16, ResNet50, InceptionV3, MobileNetV2) representing four different approaches. Reduction of the feature-space dimensions, feeding of the most relevant features, and an ensemble of classifiers were utilized. RESULTS The best-performing algorithm consisted of preprocessing the input, computing the global-maximum and global-mean outputs of four distinct pretrained networks, reducing dimensionality with univariate and ensemble feature selectors, and applying Support Vector Machine and Random Forest classifiers to the transformed and reduced dataset. A maximum accuracy of 90% was reached with MobileNetV2 pretrained features for fracture determination despite the small sample size. CONCLUSION Radial head fractures can be determined with a combined approach, and the limitations of a small sample size can be overcome by combining pretrained deep networks with classical machine learning methods.
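The pipeline this abstract describes (pretrained-CNN features, univariate feature selection, classical classifiers) can be sketched schematically. Everything below is hypothetical: a random stand-in replaces the pretrained feature extractor, and a simple nearest-centroid rule stands in for the SVM/Random Forest ensemble.

```python
# Schematic stand-in for the pipeline: CNN-style features ->
# univariate feature selection -> simple classical classifier.
import random

random.seed(0)

def fake_cnn_features(label, dim=32):
    # Stand-in for global-max/mean outputs of a pretrained network:
    # "fractured" samples shift the first four feature dimensions.
    base = [random.gauss(0.0, 1.0) for _ in range(dim)]
    if label == 1:
        for i in range(4):
            base[i] += 2.0
    return base

y = [0] * 30 + [1] * 30
X = [fake_cnn_features(t) for t in y]

def select_top_k(X, y, k=4):
    # Univariate selection: rank features by the between-class mean gap.
    def gap(j):
        a = [x[j] for x, t in zip(X, y) if t == 0]
        b = [x[j] for x, t in zip(X, y) if t == 1]
        return abs(sum(b) / len(b) - sum(a) / len(a))
    return sorted(range(len(X[0])), key=gap, reverse=True)[:k]

def nearest_centroid(X, y, idx):
    # Tiny classifier: predict the class whose centroid is closer.
    def centroid(t):
        pts = [[x[j] for j in idx] for x, lab in zip(X, y) if lab == t]
        return [sum(col) / len(col) for col in zip(*pts)]
    c0, c1 = centroid(0), centroid(1)
    def predict(x):
        v = [x[j] for j in idx]
        d0 = sum((a - b) ** 2 for a, b in zip(v, c0))
        d1 = sum((a - b) ** 2 for a, b in zip(v, c1))
        return int(d1 < d0)
    return predict

idx = select_top_k(X, y)
clf = nearest_centroid(X, y, idx)
acc = sum(clf(x) == t for x, t in zip(X, y)) / len(X)
print(f"training accuracy on toy data: {acc:.2f}")
```

The point of the sketch is the structure, not the numbers: a frozen feature extractor plus feature selection keeps the number of fitted parameters small, which is what makes such pipelines workable on small medical datasets.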
109
Assessment of performances of a deep learning algorithm for the detection of limbs and pelvic fractures, dislocations, focal bone lesions, and elbow effusions on trauma X-rays. Eur J Radiol 2022; 154:110447. [DOI: 10.1016/j.ejrad.2022.110447]
110
Tseng T, Chen Y, Yeh Y, Kuo C, Fan T, Lin Y. Automatic prosthetic-parameter estimation from anteroposterior pelvic radiographs after total hip arthroplasty using deep learning-based keypoint detection. Int J Med Robot 2022; 18:e2394. [DOI: 10.1002/rcs.2394]
111
Artificial Intelligence Accurately Detects Traumatic Thoracolumbar Fractures on Sagittal Radiographs. Medicina (Kaunas) 2022; 58:998. [PMID: 35893113] [PMCID: PMC9330443] [DOI: 10.3390/medicina58080998]
Abstract
Background and Objectives: Although plain radiography is commonly the first step in trauma imaging, up to 67% of fractures of the thoracolumbar (TL) spine are missed on plain radiographs. The aim of this study was to develop a deep learning model that detects traumatic fractures on sagittal radiographs of the TL spine. Identifying vertebral fractures on simple radiographic projections would have a significant clinical and financial impact, especially in low- and middle-income countries where computed tomography (CT) and magnetic resonance imaging (MRI) are not readily available, and could help select patients who need second-level imaging, thus improving cost-effectiveness. Materials and Methods: Imaging studies (radiographs, CT, and/or MRI) of 151 patients were used. An expert group of three spinal surgeons reviewed all available images to confirm the presence and type of fractures. In total, 630 single-vertebra images were extracted from the sagittal radiographs of the 151 patients: 302 exhibiting a vertebral body fracture and 328 exhibiting no fracture. Following augmentation, these single-vertebra images were used to train, validate, and comparatively test two deep learning convolutional neural network models, namely ResNet18 and VGG16. A heatmap analysis was then conducted to better understand the predictions of each model. Results: ResNet18 demonstrated better performance, achieving higher sensitivity (91%), specificity (89%), and accuracy (88%) than VGG16 (90%, 83%, 86%). In 81% of the cases, the “warm zone” in the heatmaps correlated with the findings suggestive of fracture within the vertebral body seen in the imaging studies. Vertebrae T12 to L2 were the most frequently involved, accounting for 48% of the fractures. A4, A3, and A1 were the most frequent fracture types according to the AO Spine Classification. Conclusions: ResNet18 could accurately identify traumatic vertebral fractures on TL sagittal radiographs. In most cases, the model based its prediction on the same areas that human expert classifiers used to determine the presence of a fracture.
112
Karanam SR, Srinivas Y, Chakravarty S. A systematic approach to diagnosis and categorization of bone fractures in X-Ray imagery. Int J Healthc Manag 2022. [DOI: 10.1080/20479700.2022.2097765]
113
Huhtanen JT, Nyman M, Doncenco D, Hamedian M, Kawalya D, Salminen L, Sequeiros RB, Koskinen SK, Pudas TK, Kajander S, Niemi P, Hirvonen J, Aronen HJ, Jafaritadi M. Deep learning accurately classifies elbow joint effusion in adult and pediatric radiographs. Sci Rep 2022; 12:11803. [PMID: 35821056] [PMCID: PMC9276721] [DOI: 10.1038/s41598-022-16154-x]
Abstract
Joint effusion due to elbow fractures is common among adults and children, and radiography is the most commonly used imaging procedure to diagnose elbow injuries. The purpose of this study was to investigate the diagnostic accuracy of deep convolutional neural network algorithms for joint effusion classification in pediatric and adult elbow radiographs. This retrospective study comprised a total of 4423 radiographs acquired over a 3-year period from 2017 to 2020. Data were randomly separated into training (n = 2672), validation (n = 892), and test (n = 859) sets. Two models using VGG16 as the base architecture were trained with either only the lateral projection or with four projections (AP, lateral, and obliques). Three radiologists evaluated joint effusion separately on the test set. Accuracy, precision, recall, specificity, F1 measure, Cohen's kappa, and two-sided 95% confidence intervals were calculated. Mean patient age was 34.4 years (range 1-98) and 47% of patients were male. The trained deep learning framework showed an AUC of 0.951 (95% CI 0.946-0.955) for the lateral-projection model and 0.906 (95% CI 0.89-0.91) for the four-projection model on the test set. The adult and pediatric patient groups separately showed AUCs of 0.966 and 0.924, respectively. Radiologists showed an average accuracy, sensitivity, specificity, precision, F1 score, and AUC of 92.8%, 91.7%, 93.6%, 91.07%, 91.4%, and 92.6%. There were no statistically significant differences between the AUCs of the deep learning model and the radiologists (p > 0.05). The model trained on the lateral dataset achieved a higher AUC than the model trained on the four-projection dataset. With deep learning it is possible to achieve expert-level diagnostic accuracy in elbow joint effusion classification in pediatric and adult radiographs. The deep learning model used in this study can classify joint effusion in radiographs and can serve as an aid for radiologists in image interpretation.
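The reader metrics listed in this abstract (accuracy, precision, recall, specificity, F1, Cohen's kappa) all follow from a 2x2 confusion matrix; a brief sketch with illustrative counts that are not the study's data:

```python
# Standard binary-classification metrics from a 2x2 confusion matrix
# (hypothetical effusion / no-effusion counts).
tp, fp, fn, tn = 110, 12, 10, 118
n = tp + fp + fn + tn

accuracy = (tp + tn) / n
precision = tp / (tp + fp)
recall = tp / (tp + fn)          # sensitivity
specificity = tn / (tn + fp)
f1 = 2 * precision * recall / (precision + recall)

# Cohen's kappa: observed agreement corrected for chance agreement,
# where chance agreement comes from the marginal class frequencies.
p_obs = accuracy
p_yes = ((tp + fp) / n) * ((tp + fn) / n)
p_no = ((fn + tn) / n) * ((fp + tn) / n)
p_chance = p_yes + p_no
kappa = (p_obs - p_chance) / (1 - p_chance)

print(f"acc={accuracy:.3f} prec={precision:.3f} rec={recall:.3f} "
      f"spec={specificity:.3f} f1={f1:.3f} kappa={kappa:.3f}")
```

Kappa is the one metric here that is not a simple ratio of counts: it discounts the agreement a reader would reach by guessing with the same class frequencies, which is why it is reported alongside raw accuracy.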
Collapse
Affiliation(s)
- Jarno T Huhtanen
- Faculty of Health and Well-Being, Turku University of Applied Sciences, Turku, Finland. .,Department of Radiology, University of Turku, Turku, Finland.
| | - Mikko Nyman
- Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland
| | - Dorin Doncenco
- Faculty of Engineering and Business, Turku University of Applied Sciences, Turku, Finland
| | - Maral Hamedian
- Faculty of Engineering and Business, Turku University of Applied Sciences, Turku, Finland
| | - Davis Kawalya
- Faculty of Engineering and Business, Turku University of Applied Sciences, Turku, Finland
| | - Leena Salminen
- Department of Nursing Science, University of Turku, and Director of Nursing (Part-Time), Turku University Hospital, Turku, Finland
| | | | | | - Tomi K Pudas
- Terveystalo Inc, Jaakonkatu 3, Helsinki, Finland
| | - Sami Kajander
- Department of Radiology, University of Turku, Turku, Finland
| | - Pekka Niemi
- Department of Radiology, University of Turku, Turku, Finland
| | - Jussi Hirvonen
- Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland
| | - Hannu J Aronen
- Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland
| | - Mojtaba Jafaritadi
- Faculty of Engineering and Business, Turku University of Applied Sciences, Turku, Finland
| |
Collapse
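The entry above reports accuracy, precision, recall, specificity, F1, and Cohen's kappa for effusion classification. As a minimal sketch (not the authors' code), all of these metrics can be derived from a single binary confusion matrix; the label vectors in the usage line are hypothetical:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Patient-wise classification metrics from binary labels,
    where 1 = joint effusion present, 0 = absent."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    n = tp + tn + fp + fn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_o = accuracy
    p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
    kappa = (p_o - p_e) / (1 - p_e)
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                specificity=specificity, f1=f1, kappa=kappa)

# Hypothetical labels for six patients
m = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

Libraries such as scikit-learn provide equivalent functions; the point here is only that every reported number comes from the same four confusion-matrix counts.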
|
114
|
Tanzi L, Audisio A, Cirrincione G, Aprato A, Vezzetti E. Vision Transformer for femur fracture classification. Injury 2022; 53:2625-2634. [PMID: 35469638 DOI: 10.1016/j.injury.2022.04.013] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 04/01/2022] [Accepted: 04/15/2022] [Indexed: 02/02/2023]
Abstract
INTRODUCTION In recent years, the scientific community has focused on developing Computer-Aided Diagnosis (CAD) tools that could improve clinicians' diagnosis of bone fractures, primarily based on Convolutional Neural Networks (CNNs). However, accuracy in discerning fracture subtypes was far from optimal. The aims of the study were (1) to evaluate a new CAD system based on Vision Transformers (ViT), a very recent and powerful deep learning technique, and (2) to assess whether clinicians' diagnostic accuracy could be improved using this system. MATERIALS AND METHODS 4207 manually annotated images were used and distributed into different fracture types following the AO/OTA classification. The ViT architecture was used and compared with a classic CNN and a multistage architecture composed of successive CNNs. To demonstrate the reliability of this approach, (1) attention maps were used to visualize the most relevant areas of the images, (2) the performance of a generic CNN and the ViT was compared through unsupervised learning techniques, and (3) 11 clinicians were asked to evaluate and classify 150 proximal femur fracture images with and without the help of the ViT, and the results were compared for potential improvement. RESULTS The ViT correctly predicted 83% of the test images. Precision, recall, and F1-score were 0.77 (CI 0.64-0.90), 0.76 (CI 0.62-0.91), and 0.77 (CI 0.64-0.89), respectively. The clinicians' diagnostic improvement was 29% (accuracy 97%; p = 0.003) when supported by the ViT's predictions, outperforming the algorithm alone. CONCLUSIONS This paper showed the potential of Vision Transformers in bone fracture classification. For the first time, good results were obtained in sub-fracture classification, outperforming the state of the art. Accordingly, the assisted diagnosis yielded the best results, proving the effectiveness of collaborative work between neural networks and clinicians.
Collapse
Affiliation(s)
- Leonardo Tanzi
- DIGEP, Polytechnic University of Turin, Corso Duca degli Abruzzi 24, Torino 10129, Italy.
| | - Andrea Audisio
- School of Medicine, University of Turin, Torino 10133, Italy
| | | | | | - Enrico Vezzetti
- DIGEP, Polytechnic University of Turin, Corso Duca degli Abruzzi 24, Torino 10129, Italy
| |
Collapse
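The core tokenization step of a Vision Transformer, as used in the study above, splits an image into fixed-size patches that are flattened and linearly projected before entering the transformer encoder. Below is a rough NumPy sketch of that step only, under assumed sizes (224-pixel grayscale input, 16-pixel patches, 64-dimensional embeddings, random untrained projection); it is not the authors' implementation:

```python
import numpy as np

def patch_embed(image, patch=16, dim=64, seed=0):
    """Split a square grayscale image into non-overlapping patches,
    flatten each patch, project it to `dim` dimensions, and prepend
    a learnable class token (here initialized to zeros)."""
    h, w = image.shape
    assert h % patch == 0 and w % patch == 0
    n = (h // patch) * (w // patch)
    # (n_patches, patch*patch): each row is one flattened patch
    patches = (image.reshape(h // patch, patch, w // patch, patch)
                    .transpose(0, 2, 1, 3)
                    .reshape(n, patch * patch))
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((patch * patch, dim)) * 0.02  # projection
    cls = np.zeros((1, dim))                              # class token
    return np.vstack([cls, patches @ W])  # (1 + n_patches, dim)

# 224/16 = 14 patches per side, so 196 patches plus one class token
tokens = patch_embed(np.zeros((224, 224)))
```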
|
115
|
Joshi D, Singh TP, Joshi AK. Deep learning-based localization and segmentation of wrist fractures on X-ray radiographs. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07510-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
|
116
|
Canoni-Meynet L, Verdot P, Danner A, Calame P, Aubry S. Added value of an artificial intelligence solution for fracture detection in the radiologist's daily trauma emergencies workflow. Diagn Interv Imaging 2022; 103:594-600. [PMID: 35780054 DOI: 10.1016/j.diii.2022.06.004] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 05/25/2022] [Accepted: 06/15/2022] [Indexed: 12/30/2022]
Abstract
PURPOSE The main objective of this study was to compare radiologists' performance without and with artificial intelligence (AI) assistance for the detection of bone fractures in trauma emergencies. MATERIALS AND METHODS Five hundred consecutive patients (232 women, 268 men) with a mean age of 37 ± 28 (SD) years (age range: 0.25-99 years) were retrospectively included. Three radiologists independently interpreted radiographs without and then with AI assistance after a minimum 1-month washout period. The ground truth was determined by consensus reading between musculoskeletal radiologists and AI results. Patient-wise sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for fracture detection and reading time were compared between unassisted and AI-assisted readings. Performances were also assessed with receiver operating characteristic (ROC) curves. RESULTS AI improved the patient-wise sensitivity of radiologists for fracture detection by 20% (95% confidence interval [CI]: 14-26; P < 0.001) and their specificity by 0.6% (95% CI: -0.9 to 1.5; P = 0.47). It increased the PPV by 2.9% (95% CI: 0.4-5.4; P = 0.08) and the NPV by 10% (95% CI: 6.8-13.3; P < 0.001). With AI, the areas under the ROC curve for fracture detection of the three readers increased by 10.6%, 10.2%, and 9.9%, respectively. Their mean reading times per patient decreased by 10, 16, and 12 s, respectively (P < 0.001). CONCLUSIONS AI-assisted radiologists work better and faster than unassisted radiologists. AI is of great aid to radiologists in daily trauma emergencies and could reduce the cost of missed fractures.
Collapse
Affiliation(s)
| | - Pierre Verdot
- Department of Radiology, CHU de Besancon, Besançon 25030, France
| | - Alexis Danner
- Department of Radiology, CHU de Besancon, Besançon 25030, France
| | - Paul Calame
- Department of Radiology, CHU de Besancon, Besançon 25030, France
| | - Sébastien Aubry
- Department of Radiology, CHU de Besancon, Besançon 25030, France; Nanomedicine Laboratory EA4662, Université de Franche-Comté, Besançon 25030, France.
| |
Collapse
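Reader performance in the study above is summarized with areas under ROC curves. One way to compute an AUC without plotting a curve is the Mann-Whitney formulation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A small illustrative sketch with made-up confidence scores (not the study's data):

```python
def auc_from_scores(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: fraction of
    (positive, negative) pairs ranked correctly, with ties
    counted as half-correct."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical reader confidence scores for fractured vs. normal cases
auc = auc_from_scores([0.9, 0.8, 0.4], [0.3, 0.5])
```

The O(n·m) double loop is fine for illustration; production code would use a rank-based O(n log n) computation such as scikit-learn's `roc_auc_score`.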
|
117
|
Wang Y, Li Y, Lin G, Zhang Q, Zhong J, Zhang Y, Ma K, Zheng Y, Lu G, Zhang Z. Lower-extremity fatigue fracture detection and grading based on deep learning models of radiographs. Eur Radiol 2022; 33:555-565. [PMID: 35748901 DOI: 10.1007/s00330-022-08950-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2022] [Revised: 05/18/2022] [Accepted: 06/08/2022] [Indexed: 11/04/2022]
Abstract
OBJECTIVES To assess the feasibility of deep learning-based diagnostic models for detecting lower-extremity fatigue fractures and grading their severity on plain radiographs. METHODS This retrospective study enrolled 1151 X-ray images (tibiofibula/foot: 682/469) of fatigue fractures and 2842 X-ray images (tibiofibula/foot: 2000/842) without abnormal presentations from two clinical centers. After the lesions were labeled, images from one center (tibiofibula/foot: 2539/1180) were allocated at a 7:1:2 ratio for model construction, and the remaining images from the other center (tibiofibula/foot: 143/131) were used for external validation. A ResNet-50 and a triplet branch network were adopted to construct the detection and grading models. The performance of the detection models was evaluated with sensitivity, specificity, and area under the receiver operating characteristic curve (AUC), while the grading models were evaluated with accuracy derived from the confusion matrix. Visual estimations by radiologists were performed for comparison with the models. RESULTS For the detection model on tibiofibula, a sensitivity of 95.4%/85.5%, a specificity of 80.1%/77.0%, and an AUC of 0.965/0.877 were achieved in the internal testing/external validation set. The detection model on foot reached a sensitivity of 96.4%/90.8%, a specificity of 76.0%/66.7%, and an AUC of 0.947/0.911. The detection models showed performance superior to the junior radiologist and comparable to the intermediate or senior radiologist. The overall accuracy of the grading model was 78.5%/62.9% for tibiofibula and 74.7%/61.1% for foot in the internal testing/external validation set. CONCLUSIONS Deep learning-based models could be applied to the radiological reading of plain radiographs to assist in the detection and grading of fatigue fractures of the tibiofibula and foot. KEY POINTS • Fatigue fractures on radiographs are relatively difficult to detect and apt to be misdiagnosed.
• Detection and grading models based on deep learning were constructed on a large cohort of radiographs with lower-extremity fatigue fractures. • A detection model with high sensitivity would help to reduce the misdiagnosis of lower-extremity fatigue fractures.
Collapse
Affiliation(s)
- Yanping Wang
- Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, 305 East Zhongshan Rd, Nanjing, 210002, China
| | | | - Guang Lin
- Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, 305 East Zhongshan Rd, Nanjing, 210002, China
| | - Qirui Zhang
- Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, 305 East Zhongshan Rd, Nanjing, 210002, China
| | - Jing Zhong
- Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, 305 East Zhongshan Rd, Nanjing, 210002, China
| | - Yan Zhang
- Department of Radiology, Nanjing Qinhuai Medical Area, Jinling Hospital, 210002, Nanjing, China
| | - Kai Ma
- Tencent Jarvis Lab, Shenzhen, 518000, China
| | | | - Guangming Lu
- Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, 305 East Zhongshan Rd, Nanjing, 210002, China; State Key Laboratory of Analytical Chemistry for Life Science, Nanjing University, Nanjing, 210093, China
| | - Zhiqiang Zhang
- Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, 305 East Zhongshan Rd, Nanjing, 210002, China; State Key Laboratory of Analytical Chemistry for Life Science, Nanjing University, Nanjing, 210093, China.
| |
Collapse
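The grading models in the study above are evaluated with accuracy derived from a confusion matrix. A minimal sketch of that evaluation for a hypothetical three-grade severity task (the grade count and labels below are placeholders, not the study's data):

```python
import numpy as np

def grading_accuracy(y_true, y_pred, n_grades=3):
    """Build a confusion matrix (rows = true grade, columns =
    predicted grade) and return overall accuracy plus the matrix.
    Correct predictions sit on the diagonal."""
    cm = np.zeros((n_grades, n_grades), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return np.trace(cm) / cm.sum(), cm

# Hypothetical grades for six radiographs (0 = mild ... 2 = severe)
acc, cm = grading_accuracy([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0])
```

The off-diagonal cells show *which* grades get confused with which, which is why grading studies typically report the full matrix rather than accuracy alone.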
|
118
|
Agrawal A. Emergency Teleradiology-Past, Present, and, Is There a Future? FRONTIERS IN RADIOLOGY 2022; 2:866643. [PMID: 37492686 PMCID: PMC10365018 DOI: 10.3389/fradi.2022.866643] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Accepted: 05/16/2022] [Indexed: 07/27/2023]
Abstract
Emergency radiology has evolved into a distinct radiology subspecialty requiring a specialized skillset to make a timely and accurate diagnosis of acutely and critically ill or traumatized patients. The need for emergency and odd-hour radiology coverage fuelled the growth of internal and external teleradiology and of "nighthawk" services to meet the increasing demands of all stakeholders and to support the changing trends in emergency medicine and trauma surgery, which increasingly rely on imaging. However, the basic issues of increased imaging workload, radiologist demand-supply mismatch, and complex imaging protocols are only partially addressed by teleradiology, with its promise of workload balancing through operations at scale. Incorporating artificial intelligence tools helps scale manifold through the promise of streamlined workflow and improved detection, quantification, and prediction. The future of emergency teleradiologists and teleradiology groups is entwined with their ability to incorporate such tools at scale and to adapt to newer workflows and different roles. This agility to adopt and adapt will determine their future.
Collapse
|
119
|
Lin KY, Li YT, Han JY, Wu CC, Chu CM, Peng SY, Yeh TT. Deep Learning to Detect Triangular Fibrocartilage Complex Injury in Wrist MRI: Retrospective Study with Internal and External Validation. J Pers Med 2022; 12:jpm12071029. [PMID: 35887524 PMCID: PMC9322609 DOI: 10.3390/jpm12071029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Revised: 06/15/2022] [Accepted: 06/21/2022] [Indexed: 11/16/2022] Open
Abstract
Objective: To use deep learning to predict the probability of triangular fibrocartilage complex (TFCC) injury in patients’ MRI scans. Methods: We retrospectively studied medical records over 11 years and 2 months (1 January 2009–29 February 2019), collecting 332 contrast-enhanced hand MRI scans showing TFCC injury (143 scans) or not (189 scans) from a general hospital. We employed two convolutional neural networks based on the MRNet (Algorithm 1) and ResNet50 (Algorithm 2) frameworks for deep learning. Explainable artificial intelligence was used for heatmap analysis. We tested the models on an external dataset containing the MRI scans of 12 patients with TFCC injuries and 38 healthy subjects. Results: On the internal dataset, Algorithm 1 had an AUC of 0.809 (95% confidence interval [CI]: 0.670–0.947) for TFCC injury detection, with an accuracy, sensitivity, and specificity of 75.6% (95% CI: 0.613–0.858), 66.7% (95% CI: 0.438–0.837), and 81.5% (95% CI: 0.633–0.918), respectively, and an F1 score of 0.686. Algorithm 2 had an AUC of 0.871 (95% CI: 0.747–0.995) for TFCC injury detection, with an accuracy, sensitivity, and specificity of 90.7% (95% CI: 0.787–0.962), 88.2% (95% CI: 0.664–0.966), and 92.3% (95% CI: 0.763–0.978), respectively, and an F1 score of 0.882. The accuracy, sensitivity, and specificity for radiologist 1 were 88.9%, 94.4%, and 85.2%, respectively, and for radiologist 2 they were 71.1%, 100%, and 51.9%, respectively. Conclusions: A modified MRNet framework enables the detection of TFCC injury and guides accurate diagnosis.
Collapse
Affiliation(s)
- Kun-Yi Lin
- Department of Orthopedics, Tri-Service General Hospital, National Defense Medical Center, No. 325, Sec. 2, Chenggong Rd., Neihu District, Taipei 11490, Taiwan; (K.-Y.L.); (C.-C.W.)
| | - Yuan-Ta Li
- Department of Surgery, Tri-Service General Hospital Penghu Branch, National Defense Medical Center, Penghu 88056, Taiwan;
| | - Juin-Yi Han
- Graduate Institute of Technology, Innovation and Intellectual Property Management, National Cheng Chi University, Taipei 11605, Taiwan;
| | - Chia-Chun Wu
- Department of Orthopedics, Tri-Service General Hospital, National Defense Medical Center, No. 325, Sec. 2, Chenggong Rd., Neihu District, Taipei 11490, Taiwan; (K.-Y.L.); (C.-C.W.)
| | - Chi-Min Chu
- School of Public Health, National Defense Medical Center, Taipei 11490, Taiwan;
| | - Shao-Yu Peng
- Department of Animal Science, National Pingtung University of Science and Technology, Pingtung 91201, Taiwan;
| | - Tsu-Te Yeh
- Department of Orthopedics, Tri-Service General Hospital, National Defense Medical Center, No. 325, Sec. 2, Chenggong Rd., Neihu District, Taipei 11490, Taiwan; (K.-Y.L.); (C.-C.W.)
- Correspondence: ; Tel.: +886-2-87923311 or +886-2-87927185; Fax: +886-2-87927186
| |
Collapse
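The study above reports 95% confidence intervals around its accuracy, sensitivity, and specificity estimates. One standard way to obtain such an interval for a proportion is the Wilson score interval, sketched below; the authors' exact CI method is not stated here, and the counts in the usage line are hypothetical:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion,
    e.g. sensitivity = detected injuries / total injuries.
    z = 1.96 gives a two-sided 95% interval."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Hypothetical: 5 of 10 cases classified correctly
lo, hi = wilson_ci(5, 10)
```

Unlike the simple Wald interval, the Wilson interval behaves sensibly for small samples and for proportions near 0 or 1, which matters for the small external test sets common in this literature.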
|
120
|
Tiwari A, Poduval M, Bagaria V. Evaluation of artificial intelligence models for osteoarthritis of the knee using deep learning algorithms for orthopedic radiographs. World J Orthop 2022; 13:603-614. [PMID: 35949704 PMCID: PMC9244962 DOI: 10.5312/wjo.v13.i6.603] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Revised: 01/20/2022] [Accepted: 05/14/2022] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Deep learning, a form of artificial intelligence, has shown promising results for interpreting radiographs. To develop a niche machine learning (ML) program for interpreting orthopedic radiographs accurately, a project named deep learning algorithm for orthopedic radiographs was conceived. In the first phase, the diagnosis of knee osteoarthritis (KOA) as per the standard Kellgren-Lawrence (KL) scale in medical images was conducted using the deep learning algorithm for orthopedic radiographs.
AIM To compare the efficacy and accuracy of eight different transfer learning deep learning models for detecting the grade of KOA from a radiograph, and to identify the most appropriate ML-based model for detecting the grade of KOA.
METHODS The study was performed on 2068 radiographic examinations conducted at the Department of Orthopedic Surgery, Sir HN Reliance Hospital and Research Centre (Mumbai, India) during 2019-2021. Three orthopedic surgeons reviewed these independently, graded them for the severity of KOA as per the KL scale, and settled disagreements through a consensus session. Eight models, namely ResNet50, VGG-16, InceptionV3, MobileNetV2, EfficientNetB7, DenseNet201, Xception, and NASNetMobile, were used to evaluate the efficacy of ML in accurately classifying radiographs for KOA as per the KL scale. Of the 2068 images, 70% were used initially to train each model, 10% were used subsequently to test it, and 20% were used finally to determine its accuracy and validate it. The idea behind transfer learning for KOA grade image classification is that models already trained on a large and general dataset can effectively serve as generic models to fulfill the study’s objectives. Finally, to benchmark efficacy, the results of the models were also compared with those of a first-year orthopedic trainee who independently classified the radiographs according to the KL scale.
RESULTS The models yielded overall accuracies for detecting KOA ranging from 54% to 93%. The most successful was the DenseNet model, with accuracy up to 93%; interestingly, it even outperformed the human first-year trainee, who had an accuracy of 74%.
CONCLUSION The study paves the way for extrapolating this learning to develop an automated KOA classification tool, enabling better decision-making by healthcare professionals.
Collapse
Affiliation(s)
- Anjali Tiwari
- Department of Orthopedics, Sir H. N. Reliance Foundation Hospital and Research Centre, Mumbai 400004, India
| | - Murali Poduval
- Lifesciences Engineering, Tata Consultancy Services, Mumbai 400096, India
| | - Vaibhav Bagaria
- Department of Orthopedics, Sir H. N. Reliance Foundation Hospital and Research Centre, Mumbai 400004, India
- Department of Orthopedics, Columbia Asia Hospital, Mumbai 400004, India
| |
Collapse
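The 70/10/20 partition described in the entry above can be sketched as a simple shuffled split. The seed and the item list are placeholders, and the study's actual splitting code is not shown here:

```python
import random

def split_70_10_20(items, seed=42):
    """Shuffle a dataset deterministically and partition it into
    70% train, 10% test, and the remaining 20% for validation,
    mirroring the split proportions reported in the study above."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(0.7 * n)
    n_test = int(0.1 * n)
    train = items[:n_train]
    test = items[n_train:n_train + n_test]
    val = items[n_train + n_test:]
    return train, test, val

# 2068 image IDs, as in the study (IDs here are just integers)
train, test, val = split_70_10_20(range(2068))
```

Shuffling before slicing matters: radiographs are often stored grouped by date or patient, and an unshuffled split would leak that ordering into the evaluation. (A per-patient split would be stricter still.)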
|
121
|
Klontzas ME, Karantanas AH. Research in Musculoskeletal Radiology: Setting Goals and Strategic Directions. Semin Musculoskelet Radiol 2022; 26:354-358. [PMID: 35654100 DOI: 10.1055/s-0042-1748319] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
The future of musculoskeletal (MSK) radiology is being built on research developments in the field. Over the past decade, MSK imaging research has been dominated by advancements in molecular imaging biomarkers, artificial intelligence, radiomics, and novel high-resolution equipment. Adequate preparation of trainees and specialists will ensure that current and future leaders are prepared to embrace and critically appraise technological developments, stay up to date on clinical developments such as the use of artificial tissues, define research directions, and actively participate in and lead multidisciplinary research. This review presents an overview of the current MSK research landscape and proposes tangible future goals and strategic directions that will fortify the future of MSK radiology.
Collapse
Affiliation(s)
- Michail E Klontzas
- Department of Medical Imaging, University Hospital of Heraklion, Crete, Greece; Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Radiology, School of Medicine, University of Crete, Heraklion, Greece
| | - Apostolos H Karantanas
- Department of Medical Imaging, University Hospital of Heraklion, Crete, Greece; Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Radiology, School of Medicine, University of Crete, Heraklion, Greece
| |
Collapse
|
122
|
Li T, Wang Y, Qu Y, Dong R, Kang M, Zhao J. Feasibility study of hallux valgus measurement with a deep convolutional neural network based on landmark detection. Skeletal Radiol 2022; 51:1235-1247. [PMID: 34748073 DOI: 10.1007/s00256-021-03939-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Revised: 10/03/2021] [Accepted: 10/08/2021] [Indexed: 02/02/2023]
Abstract
OBJECTIVE To develop a deep learning algorithm based on automatic landmark detection that can automatically calculate forefoot imaging parameters from radiographs, and to test its performance. MATERIALS AND METHODS A total of 1023 weight-bearing dorsoplantar (DP) radiographs were included. Of these, 776 radiographs were used for training and verification of the model, and 247 radiographs were used for testing its performance. Radiologists manually marked 18 landmarks on each image. By training our model to automatically place these landmarks, 4 imaging parameters commonly used for the diagnosis of hallux valgus could be measured: the first-second intermetatarsal angle (IMA), hallux valgus angle (HVA), hallux interphalangeal angle (HIA), and distal metatarsal articular angle (DMAA). The reference standard was determined by the radiologists' measurements. The percentage of correct keypoints (PCK), intraclass correlation coefficient (ICC), Pearson correlation coefficient (r), root mean square error (RMSE), and mean absolute error (MAE) between the values predicted by the model and the reference standard were calculated. Bland-Altman plots were used to show the mean difference and 95% limits of agreement (LoA). RESULTS The PCK was 84-99% at the 3-mm threshold. The correlation between the observed and predicted values of the four angles was high (ICC: 0.89-0.96, r: 0.81-0.97, RMSE: 3.76-6.77, MAE: 3.22-5.52). However, there was a systematic error between the model's predicted values and the reference standard (the mean difference ranged from -3.00° to -5.08°, and the standard deviation ranged from 2.25° to 4.47°). CONCLUSION Our model can accurately identify landmarks, but there is a certain amount of error in the angle measurements, which needs further improvement.
Collapse
Affiliation(s)
- Tong Li
- The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China
| | - Yuzhao Wang
- College of Computer Science and Technology, Jilin University, Changchun, 130000, China
| | - Yang Qu
- The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China
| | - Rongpeng Dong
- The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China
| | - Mingyang Kang
- The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China
| | - Jianwu Zhao
- The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China.
| |
Collapse
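Each forefoot parameter in the study above (IMA, HVA, HIA, DMAA) is an angle between two bone axes, each axis defined by a pair of landmarks. A geometric sketch of that final measurement step, with hypothetical coordinates rather than the authors' code:

```python
import math

def angle_between(p1, p2, q1, q2):
    """Angle in degrees between line p1-p2 and line q1-q2, each
    defined by two (x, y) landmarks, e.g. the axes of the first
    metatarsal and the proximal phalanx for the HVA. Returns the
    acute angle, since bone axes have no inherent direction."""
    v = (p2[0] - p1[0], p2[1] - p1[1])
    w = (q2[0] - q1[0], q2[1] - q1[1])
    dot = v[0] * w[0] + v[1] * w[1]
    cos_t = dot / (math.hypot(*v) * math.hypot(*w))
    ang = math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
    return min(ang, 180.0 - ang)

# Hypothetical landmarks: a vertical axis vs. a 45-degree axis
hva_like = angle_between((0, 0), (0, 1), (0, 0), (1, 1))
```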
|
123
|
Goyal M, McDonough R. Eudaimonia and the Future Radiologist. Acad Radiol 2022; 29:909-913. [PMID: 34193370 DOI: 10.1016/j.acra.2021.05.023] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 05/28/2021] [Accepted: 05/29/2021] [Indexed: 11/01/2022]
|
124
|
A pediatric wrist trauma X-ray dataset (GRAZPEDWRI-DX) for machine learning. Sci Data 2022; 9:222. [PMID: 35595759 PMCID: PMC9122976 DOI: 10.1038/s41597-022-01328-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Accepted: 04/19/2022] [Indexed: 01/06/2023] Open
Abstract
Digital radiography is widely available and the standard modality in trauma imaging, often enabling the diagnosis of pediatric wrist fractures. However, image interpretation requires time-consuming specialized training. Owing to astonishing progress in computer vision algorithms, automated fracture detection has become a topic of research interest. This paper presents the GRAZPEDWRI-DX dataset containing annotated pediatric trauma wrist radiographs of 6,091 patients treated at the Department for Pediatric Surgery of the University Hospital Graz between 2008 and 2018. A total of 10,643 studies (20,327 images) are made available, typically covering posteroanterior and lateral projections. The dataset is annotated with 74,459 image tags and features 67,771 labeled objects. We de-identified all radiographs and converted the DICOM pixel data to 16-bit grayscale PNG images. The filenames and the accompanying text files provide basic patient information (age, sex). Several pediatric radiologists annotated dataset images by placing lines, bounding boxes, or polygons to mark pathologies such as fractures or periosteal reactions. They also tagged general image characteristics. This dataset is publicly available to encourage computer vision research. Measurement(s): wrist fracture • pronator quadratus sign • AO classification • soft tissue swelling • metal implant • osteopenia • plaster cast • bone lesion. Technology Type(s): bone radiography.
Collapse
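The dataset description above mentions converting DICOM pixel data to 16-bit grayscale PNG images. Below is a sketch of the rescaling step only, under the assumption of simple min-max normalization to the full uint16 range; the dataset's actual conversion pipeline (windowing, LUTs, etc.) may differ:

```python
import numpy as np

def to_uint16_gray(pixel_array):
    """Rescale a raw 2D pixel array (e.g. a DICOM frame) to the full
    16-bit grayscale range [0, 65535], ready for saving as a 16-bit
    PNG with an image library."""
    arr = np.asarray(pixel_array, dtype=np.float64)
    lo, hi = arr.min(), arr.max()
    if hi == lo:  # constant image: map everything to zero
        return np.zeros_like(arr, dtype=np.uint16)
    scaled = (arr - lo) / (hi - lo) * 65535.0
    return np.round(scaled).astype(np.uint16)

# Hypothetical 2x2 raw frame with values 0..8
out = to_uint16_gray(np.array([[0, 2], [4, 8]]))
```

16-bit output preserves far more of the dynamic range of raw radiographs than the common 8-bit conversion, which is why datasets intended for model training favor it.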
|
125
|
Niiya A, Murakami K, Kobayashi R, Sekimoto A, Saeki M, Toyofuku K, Kato M, Shinjo H, Ito Y, Takei M, Murata C, Ohgiya Y. Development of an artificial intelligence-assisted computed tomography diagnosis technology for rib fracture and evaluation of its clinical usefulness. Sci Rep 2022; 12:8363. [PMID: 35589847 PMCID: PMC9119970 DOI: 10.1038/s41598-022-12453-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Accepted: 05/03/2022] [Indexed: 11/20/2022] Open
Abstract
Artificial intelligence algorithms utilizing deep learning are helpful tools for diagnostic imaging. A deep learning-based automatic detection algorithm was developed for rib fractures on computed tomography (CT) images of high-energy trauma patients. In this study, the clinical effectiveness of this algorithm was evaluated. A total of 56 cases were retrospectively examined, including 46 rib fracture cases and 10 control cases from our hospital, between January and June 2019. Two radiologists annotated the fracture lesions (complete or incomplete) for each CT image, which was considered the “ground truth.” Thereafter, the algorithm’s diagnostic results for all cases were compared with the ground truth, and the sensitivity and the number of false positive (FP) results per case were assessed. The radiologists identified 199 images with a fracture. The sensitivity of the algorithm was 89.8%, and the number of FPs per case was 2.5. After additional learning, the sensitivity increased to 93.5%, and the number of FPs decreased to 1.9 per case. FP results occurred in trabecular bone with the appearance of fracture, in vascular grooves, and in artifacts. The sensitivity of the algorithm used in this study was sufficient to aid the rapid detection of rib fractures within the evaluated validation set of CT images.
Collapse
Affiliation(s)
- Akifumi Niiya
- Department of Radiology, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan.
| | - Kouzou Murakami
- Department of Radiology, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
| | - Rei Kobayashi
- Department of Radiology, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
| | - Atsuhito Sekimoto
- Department of Radiology, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
| | - Miho Saeki
- Department of Radiology, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
| | - Kosuke Toyofuku
- Department of Radiology, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
| | - Masako Kato
- Department of Radiology, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
| | - Hidenori Shinjo
- Department of Radiology, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
| | - Yoshinori Ito
- Department of Radiology, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
| | - Mizuki Takei
- Fujifilm Corporation, Nishiazabu 2-Chome 26-30, Minato-ku, Tokyo, Japan
| | - Chiori Murata
- Fujifilm Corporation, Nishiazabu 2-Chome 26-30, Minato-ku, Tokyo, Japan
| | - Yoshimitsu Ohgiya
- Department of Radiology, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
| |
Collapse
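The two headline numbers in the study above, lesion-level sensitivity and false positives per case, can be aggregated from per-case counts as follows. The case records here are hypothetical, not the study's data:

```python
def detection_summary(cases):
    """Per-lesion sensitivity and mean false positives per case for a
    detection algorithm. Each case is a dict with:
      gt  - number of ground-truth lesions in the case,
      hit - how many of those the algorithm detected,
      fp  - how many false-positive marks it produced."""
    total_gt = sum(c["gt"] for c in cases)
    total_hit = sum(c["hit"] for c in cases)
    total_fp = sum(c["fp"] for c in cases)
    sensitivity = total_hit / total_gt if total_gt else 0.0
    fp_per_case = total_fp / len(cases)
    return sensitivity, fp_per_case

# Hypothetical three-case evaluation (the last is a control case)
cases = [
    {"gt": 4, "hit": 4, "fp": 3},
    {"gt": 2, "hit": 1, "fp": 2},
    {"gt": 0, "hit": 0, "fp": 1},
]
sens, fpc = detection_summary(cases)
```

Sweeping the detector's confidence threshold and re-running this summary at each point yields the FROC curve commonly used to characterize lesion detectors.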
|
126
|
Ramlakhan S, Saatchi R, Sabir L, Singh Y, Hughes R, Shobayo O, Ventour D. Understanding and interpreting artificial intelligence, machine learning and deep learning in Emergency Medicine. Emerg Med J 2022; 39:380-385. [PMID: 35241440 DOI: 10.1136/emermed-2021-212068] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2021] [Accepted: 01/29/2022] [Indexed: 02/06/2023]
Affiliation(s)
- Shammi Ramlakhan
- Emergency Department, Sheffield Children's Hospital, Sheffield, UK
| | - Reza Saatchi
- Electronics and Computer Engineering Research Institute, Sheffield Hallam University, Sheffield, UK
| | - Lisa Sabir
- Emergency Department, Sheffield Children's Hospital, Sheffield, UK
| | - Yardesh Singh
- Department of Clinical Surgical Sciences, Faculty of Medical Sciences, The University of the West Indies, St Augustine, Trinidad and Tobago
| | - Ruby Hughes
- Simulation and Modelling Unit, Advanced Forming Research Centre, University of Strathclyde, Sheffield, UK
| | - Olamilekan Shobayo
- Electronics and Computer Engineering Research Institute, Sheffield Hallam University, Sheffield, UK
| | - Dale Ventour
- Department of Clinical Surgical Sciences, Faculty of Medical Sciences, The University of the West Indies, St Augustine, Trinidad and Tobago
| |
Collapse
|
127
|
CCE-Net: A rib fracture diagnosis network based on contralateral, contextual, and edge enhanced modules. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103620] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
128
|
Chen Y, He K, Hao B, Weng Y, Chen Z. FractureNet: A 3D Convolutional Neural Network Based on the Architecture of m-Ary Tree for Fracture Type Identification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1196-1207. [PMID: 34890325 DOI: 10.1109/tmi.2021.3134650] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
To address the problem of automatic identification of fine-grained fracture types, in this paper, we propose a novel framework using a 3D convolutional neural network (CNN) to learn fracture features from voxelized bone models, which are obtained by establishing an isomorphic mapping from fractured bones to a voxelized template. The network, which is named FractureNet, consists of four discriminators forming a multi-stage hierarchy. Each discriminator includes multiple sub-classifiers. These sub-classifiers are chained by two kinds of feature chains (feature map chain and classification feature chain) in the form of a full m-ary tree to perform multi-stage classification tasks. The features learned and classification results obtained at previous stages serve as prior knowledge for current learning and classification. All sub-classifiers are jointly learned in an end-to-end network via a multi-stage loss function integrating the losses of the four discriminators. To make FractureNet more robust and accurate, a data augmentation strategy termed r-combination with constraints is further proposed on the basis of an adjacency relation and a continuity relation between voxels to create a large-scale fracture dataset of voxel models. Extensive experiments show that the proposed method can recognize various fracture types in patients accurately and effectively, and enables significant improvements over state-of-the-art methods on a variety of fracture recognition tasks. Moreover, ancillary experiments on the CIFAR-10 and PadChest datasets at large scales further support the superior performance of the proposed FractureNet.
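As a hedged illustration of the multi-stage hierarchy the abstract describes — each stage's decision routes a sample to one child sub-classifier at the next stage, forming an m-ary tree of classifiers — here is a toy sketch in plain Python. The two-level label set, the score functions, and all names are invented for illustration; this is not the paper's architecture.

```python
# Toy sketch of tree-structured multi-stage classification in the spirit of
# FractureNet: each node's prediction selects which child sub-classifier
# handles the sample at the next stage. Labels and scores are invented.

def classify_hierarchical(features, tree):
    """Walk the classifier tree: at each node, pick the child whose
    score function rates the feature vector highest."""
    node = tree
    path = []
    while node.get("children"):
        scores = {name: child["score"](features)
                  for name, child in node["children"].items()}
        best = max(scores, key=scores.get)
        path.append(best)
        node = node["children"][best]
    return path

# A 2-ary, 2-stage toy hierarchy: coarse fracture site, then fine type.
tree = {
    "children": {
        "proximal": {
            "score": lambda f: f[0],
            "children": {
                "simple":     {"score": lambda f: 1 - f[1], "children": None},
                "comminuted": {"score": lambda f: f[1],     "children": None},
            },
        },
        "distal": {
            "score": lambda f: 1 - f[0],
            "children": {
                "simple":     {"score": lambda f: 1 - f[1], "children": None},
                "comminuted": {"score": lambda f: f[1],     "children": None},
            },
        },
    },
}

print(classify_hierarchical([0.9, 0.8], tree))  # ['proximal', 'comminuted']
```

In the actual network the "score functions" are learned jointly end-to-end and earlier-stage features are fed forward as prior knowledge; the sketch only shows the routing logic.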
|
129
|
Makino T, Jastrzębski S, Oleszkiewicz W, Chacko C, Ehrenpreis R, Samreen N, Chhor C, Kim E, Lee J, Pysarenko K, Reig B, Toth H, Awal D, Du L, Kim A, Park J, Sodickson DK, Heacock L, Moy L, Cho K, Geras KJ. Differences between human and machine perception in medical diagnosis. Sci Rep 2022; 12:6877. [PMID: 35477730 PMCID: PMC9046399 DOI: 10.1038/s41598-022-10526-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Accepted: 04/06/2022] [Indexed: 02/07/2023] Open
Abstract
Deep neural networks (DNNs) show promise in image-based medical diagnosis, but cannot be fully trusted since they can fail for reasons unrelated to underlying pathology. Humans are less likely to make such superficial mistakes, since they use features that are grounded in medical science. It is therefore important to know whether DNNs use different features than humans. Towards this end, we propose a framework for comparing human and machine perception in medical diagnosis. We frame the comparison in terms of perturbation robustness, and mitigate Simpson's paradox by performing a subgroup analysis. The framework is demonstrated with a case study in breast cancer screening, where we separately analyze microcalcifications and soft tissue lesions. While it is inconclusive whether humans and DNNs use different features to detect microcalcifications, we find that for soft tissue lesions, DNNs rely on high frequency components ignored by radiologists. Moreover, these features are located outside of the region of the images found most suspicious by radiologists. This difference between humans and machines was only visible through subgroup analysis, which highlights the importance of incorporating medical domain knowledge into the comparison.
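A short numeric sketch of why subgroup analysis matters here: under Simpson's paradox, one reader can be better within every lesion subgroup yet look worse on the pooled data when case mix differs. The counts below are invented purely to exhibit the reversal; they are not from the study.

```python
# Invented counts showing Simpson's paradox: reader A beats reader B in
# each subgroup, but B's pooled accuracy is higher because B saw mostly
# easy (calcification) cases while A saw mostly hard (soft-tissue) ones.

def accuracy(correct, total):
    return correct / total

# (correct, total) per reader, per lesion subgroup -- illustrative only
data = {
    "A": {"calcifications": (9, 10),   "soft_tissue": (60, 100)},
    "B": {"calcifications": (85, 100), "soft_tissue": (5, 10)},
}

for reader, groups in data.items():
    pooled = accuracy(sum(c for c, _ in groups.values()),
                      sum(t for _, t in groups.values()))
    print(reader,
          {g: round(accuracy(c, t), 2) for g, (c, t) in groups.items()},
          "pooled:", round(pooled, 2))
```

Comparing only the pooled numbers would rank the readers backwards, which is the trap the paper's subgroup analysis avoids.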
Affiliation(s)
- Taro Makino
- Center for Data Science, New York University, New York, NY, USA.
- Department of Radiology, NYU Langone Health, New York, NY, USA.
| | - Stanisław Jastrzębski
- Center for Data Science, New York University, New York, NY, USA
- Department of Radiology, NYU Langone Health, New York, NY, USA
- Center for Advanced Imaging Innovation and Research, NYU Langone Health, New York, NY, USA
| | - Witold Oleszkiewicz
- Faculty of Electronics and Information Technology, Warsaw University of Technology, Warszawa, Poland
| | - Celin Chacko
- Department of Radiology, NYU Langone Health, New York, NY, USA
| | | | - Naziya Samreen
- Department of Radiology, NYU Langone Health, New York, NY, USA
| | - Chloe Chhor
- Department of Radiology, NYU Langone Health, New York, NY, USA
| | - Eric Kim
- Department of Radiology, NYU Langone Health, New York, NY, USA
| | - Jiyon Lee
- Department of Radiology, NYU Langone Health, New York, NY, USA
| | | | - Beatriu Reig
- Department of Radiology, NYU Langone Health, New York, NY, USA
- Perlmutter Cancer Center, NYU Langone Health, New York, NY, USA
| | - Hildegard Toth
- Department of Radiology, NYU Langone Health, New York, NY, USA
- Perlmutter Cancer Center, NYU Langone Health, New York, NY, USA
| | - Divya Awal
- Department of Radiology, NYU Langone Health, New York, NY, USA
| | - Linda Du
- Department of Radiology, NYU Langone Health, New York, NY, USA
| | - Alice Kim
- Department of Radiology, NYU Langone Health, New York, NY, USA
| | - James Park
- Department of Radiology, NYU Langone Health, New York, NY, USA
| | - Daniel K Sodickson
- Department of Radiology, NYU Langone Health, New York, NY, USA
- Center for Advanced Imaging Innovation and Research, NYU Langone Health, New York, NY, USA
- Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, USA
- Perlmutter Cancer Center, NYU Langone Health, New York, NY, USA
| | - Laura Heacock
- Department of Radiology, NYU Langone Health, New York, NY, USA
- Perlmutter Cancer Center, NYU Langone Health, New York, NY, USA
| | - Linda Moy
- Department of Radiology, NYU Langone Health, New York, NY, USA
- Center for Advanced Imaging Innovation and Research, NYU Langone Health, New York, NY, USA
- Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, USA
- Perlmutter Cancer Center, NYU Langone Health, New York, NY, USA
| | - Kyunghyun Cho
- Center for Data Science, New York University, New York, NY, USA
- Department of Computer Science, Courant Institute, New York University, New York, NY, USA
| | - Krzysztof J Geras
- Center for Data Science, New York University, New York, NY, USA.
- Department of Radiology, NYU Langone Health, New York, NY, USA.
- Center for Advanced Imaging Innovation and Research, NYU Langone Health, New York, NY, USA.
- Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, USA.
| |
|
130
|
Nam S, Kim D, Jung W, Zhu Y. Understanding the Research Landscape of Deep Learning in Biomedical Science: Scientometric Analysis. J Med Internet Res 2022; 24:e28114. [PMID: 35451980 PMCID: PMC9077503 DOI: 10.2196/28114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Revised: 05/30/2021] [Accepted: 02/20/2022] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Advances in biomedical research using deep learning techniques have generated a large volume of related literature. However, there is a lack of scientometric studies that provide a bird's-eye view of them. This absence has led to a partial and fragmented understanding of the field and its progress. OBJECTIVE This study aimed to gain a quantitative and qualitative understanding of the scientific domain by analyzing diverse bibliographic entities that represent the research landscape from multiple perspectives and levels of granularity. METHODS We searched and retrieved 978 deep learning studies in biomedicine from the PubMed database. A scientometric analysis was performed by analyzing the metadata, content of influential works, and cited references. RESULTS In the process, we identified the current leading fields, major research topics and techniques, knowledge diffusion, and research collaboration. There was a predominant focus on applying deep learning, especially convolutional neural networks, to radiology and medical imaging, whereas a few studies focused on protein or genome analysis. Radiology and medical imaging also appeared to be the most significant knowledge sources and an important field in knowledge diffusion, followed by computer science and electrical engineering. A coauthorship analysis revealed various collaborations among engineering-oriented and biomedicine-oriented clusters of disciplines. CONCLUSIONS This study investigated the landscape of deep learning research in biomedicine and confirmed its interdisciplinary nature. Although it has been successful, we believe that there is a need for diverse applications in certain areas to further boost the contributions of deep learning in addressing biomedical research problems. We expect the results of this study to help researchers and communities better align their present and future work.
Affiliation(s)
- Seojin Nam
- Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
| | - Donghun Kim
- Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
| | - Woojin Jung
- Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
| | - Yongjun Zhu
- Department of Library and Information Science, Yonsei University, Seoul, Republic of Korea
| |
|
131
|
Vearrier L, Derse AR, Basford JB, Larkin GL, Moskop JC. Artificial Intelligence in Emergency Medicine: Benefits, Risks, and Recommendations. J Emerg Med 2022; 62:492-499. [PMID: 35164977 DOI: 10.1016/j.jemermed.2022.01.001] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2021] [Revised: 12/12/2021] [Accepted: 01/16/2022] [Indexed: 01/04/2023]
Abstract
BACKGROUND Artificial intelligence (AI) can be described as the use of computers to perform tasks that formerly required human cognition. The American Medical Association prefers the term 'augmented intelligence' over 'artificial intelligence' to emphasize the assistive role of computers in enhancing physician skills as opposed to replacing them. The integration of AI into emergency medicine, and clinical practice at large, has increased in recent years, and that trend is likely to continue. DISCUSSION AI has demonstrated substantial potential benefit for physicians and patients. These benefits are transforming the therapeutic relationship from the traditional physician-patient dyad into a triadic doctor-patient-machine relationship. New AI technologies, however, require careful vetting, legal standards, patient safeguards, and provider education. Emergency physicians (EPs) should recognize the limits and risks of AI as well as its potential benefits. CONCLUSIONS EPs must learn to partner with, not capitulate to, AI. AI has proven to be superior to, or on a par with, certain physician skills, such as interpreting radiographs and making diagnoses based on visual cues, such as skin cancer. AI can provide cognitive assistance, but EPs must interpret AI results within the clinical context of individual patients. They must also advocate for patient confidentiality, professional liability coverage, and the essential role of specialty-trained EPs.
Affiliation(s)
- Laura Vearrier
- Department of Emergency Medicine, University of Mississippi Medical Center, Jackson, Mississippi
| | - Arthur R Derse
- Center for Bioethics and Medical Humanities, and Department of Emergency Medicine, Medical College of Wisconsin, Wauwatosa, Wisconsin
| | - Jesse B Basford
- Departments of Family and Emergency Medicine, Alabama College of Osteopathic Medicine, Dothan, Alabama
| | - Gregory Luke Larkin
- Department of Emergency Medicine, Northeast Ohio Medical University, Rootstown, Ohio
| | - John C Moskop
- Department of Internal Medicine, Wake Forest School of Medicine, Winston-Salem, North Carolina
| |
|
132
|
Kuo RYL, Harrison C, Curran TA, Jones B, Freethy A, Cussons D, Stewart M, Collins GS, Furniss D. Artificial Intelligence in Fracture Detection: A Systematic Review and Meta-Analysis. Radiology 2022; 304:50-62. [PMID: 35348381 DOI: 10.1148/radiol.211785] [Citation(s) in RCA: 86] [Impact Index Per Article: 43.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Background Patients with fractures are a common emergency presentation and may be misdiagnosed at radiologic imaging. An increasing number of studies apply artificial intelligence (AI) techniques to fracture detection as an adjunct to clinician diagnosis. Purpose To perform a systematic review and meta-analysis comparing the diagnostic performance in fracture detection between AI and clinicians in peer-reviewed publications and the gray literature (ie, articles published on preprint repositories). Materials and Methods A search of multiple electronic databases between January 2018 and July 2020 (updated June 2021) was performed that included any primary research studies that developed and/or validated AI for the purposes of fracture detection at any imaging modality and excluded studies that evaluated image segmentation algorithms. Meta-analysis with a hierarchical model to calculate pooled sensitivity and specificity was used. Risk of bias was assessed by using a modified Prediction Model Study Risk of Bias Assessment Tool, or PROBAST, checklist. Results Included for analysis were 42 studies, with 115 contingency tables extracted from 32 studies (55 061 images). Thirty-seven studies identified fractures on radiographs and five studies identified fractures on CT images. For internal validation test sets, the pooled sensitivity was 92% (95% CI: 88, 93) for AI and 91% (95% CI: 85, 95) for clinicians, and the pooled specificity was 91% (95% CI: 88, 93) for AI and 92% (95% CI: 89, 92) for clinicians. For external validation test sets, the pooled sensitivity was 91% (95% CI: 84, 95) for AI and 94% (95% CI: 90, 96) for clinicians, and the pooled specificity was 91% (95% CI: 81, 95) for AI and 94% (95% CI: 91, 95) for clinicians. There were no statistically significant differences between clinician and AI performance. There were 22 of 42 (52%) studies that were judged to have high risk of bias. Meta-regression identified multiple sources of heterogeneity in the data, including risk of bias and fracture type. Conclusion Artificial intelligence (AI) and clinicians had comparable reported diagnostic performance in fracture detection, suggesting that AI technology holds promise as a diagnostic adjunct in future clinical practice. Clinical trial registration no. CRD42020186641 © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Cohen and McInnes in this issue.
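To make the per-study quantities concrete: each contingency table yields a sensitivity (TP/(TP+FN)) and specificity (TN/(TN+FP)). The review pools these with a hierarchical bivariate model; the sketch below shows only the per-table arithmetic and a naive fixed pooling over invented tables, not the paper's actual model.

```python
# Per-study sensitivity/specificity from 2x2 contingency tables, plus a
# naive pooled estimate. The (TP, FP, FN, TN) tables are invented; a
# faithful meta-analysis would use a hierarchical bivariate model.

def sens_spec(tp, fp, fn, tn):
    return tp / (tp + fn), tn / (tn + fp)

tables = [(90, 8, 10, 92), (45, 5, 5, 45), (180, 20, 20, 180)]

# Naive pooling: sum cells across studies, then compute once.
tp, fp, fn, tn = (sum(col) for col in zip(*tables))
pooled_sens, pooled_spec = sens_spec(tp, fp, fn, tn)
print(f"sensitivity={pooled_sens:.2f} specificity={pooled_spec:.2f}")
```

Cell-summing like this ignores between-study heterogeneity, which is exactly why the review fits a hierarchical model and runs meta-regression on sources of heterogeneity instead.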
Affiliation(s)
- Rachel Y L Kuo
- From the Nuffield Department of Orthopedics, Rheumatology and Musculoskeletal Sciences, Botnar Research Centre, Old Road Headington, Oxford OX3 7LD, UK (R.Y.L.K., C.H., M.S., G.S.C., D.F.); Department of Plastic Surgery, John Radcliffe Hospital, Oxford, UK (T.A.C., A.F.); Department of Vascular Surgery, Royal Berkshire Hospital, Reading, UK (B.J.); Department of Plastic Surgery, Stoke Mandeville Hospital, Aylesbury, Buckinghamshire UK (D.C.); and UK EQUATOR Center, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford Centre for Statistics in Medicine, Oxford UK (G.S.C.)
| | - Conrad Harrison
- From the Nuffield Department of Orthopedics, Rheumatology and Musculoskeletal Sciences, Botnar Research Centre, Old Road Headington, Oxford OX3 7LD, UK (R.Y.L.K., C.H., M.S., G.S.C., D.F.); Department of Plastic Surgery, John Radcliffe Hospital, Oxford, UK (T.A.C., A.F.); Department of Vascular Surgery, Royal Berkshire Hospital, Reading, UK (B.J.); Department of Plastic Surgery, Stoke Mandeville Hospital, Aylesbury, Buckinghamshire UK (D.C.); and UK EQUATOR Center, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford Centre for Statistics in Medicine, Oxford UK (G.S.C.)
| | - Terry-Ann Curran
- From the Nuffield Department of Orthopedics, Rheumatology and Musculoskeletal Sciences, Botnar Research Centre, Old Road Headington, Oxford OX3 7LD, UK (R.Y.L.K., C.H., M.S., G.S.C., D.F.); Department of Plastic Surgery, John Radcliffe Hospital, Oxford, UK (T.A.C., A.F.); Department of Vascular Surgery, Royal Berkshire Hospital, Reading, UK (B.J.); Department of Plastic Surgery, Stoke Mandeville Hospital, Aylesbury, Buckinghamshire UK (D.C.); and UK EQUATOR Center, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford Centre for Statistics in Medicine, Oxford UK (G.S.C.)
| | - Benjamin Jones
- From the Nuffield Department of Orthopedics, Rheumatology and Musculoskeletal Sciences, Botnar Research Centre, Old Road Headington, Oxford OX3 7LD, UK (R.Y.L.K., C.H., M.S., G.S.C., D.F.); Department of Plastic Surgery, John Radcliffe Hospital, Oxford, UK (T.A.C., A.F.); Department of Vascular Surgery, Royal Berkshire Hospital, Reading, UK (B.J.); Department of Plastic Surgery, Stoke Mandeville Hospital, Aylesbury, Buckinghamshire UK (D.C.); and UK EQUATOR Center, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford Centre for Statistics in Medicine, Oxford UK (G.S.C.)
| | - Alexander Freethy
- From the Nuffield Department of Orthopedics, Rheumatology and Musculoskeletal Sciences, Botnar Research Centre, Old Road Headington, Oxford OX3 7LD, UK (R.Y.L.K., C.H., M.S., G.S.C., D.F.); Department of Plastic Surgery, John Radcliffe Hospital, Oxford, UK (T.A.C., A.F.); Department of Vascular Surgery, Royal Berkshire Hospital, Reading, UK (B.J.); Department of Plastic Surgery, Stoke Mandeville Hospital, Aylesbury, Buckinghamshire UK (D.C.); and UK EQUATOR Center, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford Centre for Statistics in Medicine, Oxford UK (G.S.C.)
| | - David Cussons
- From the Nuffield Department of Orthopedics, Rheumatology and Musculoskeletal Sciences, Botnar Research Centre, Old Road Headington, Oxford OX3 7LD, UK (R.Y.L.K., C.H., M.S., G.S.C., D.F.); Department of Plastic Surgery, John Radcliffe Hospital, Oxford, UK (T.A.C., A.F.); Department of Vascular Surgery, Royal Berkshire Hospital, Reading, UK (B.J.); Department of Plastic Surgery, Stoke Mandeville Hospital, Aylesbury, Buckinghamshire UK (D.C.); and UK EQUATOR Center, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford Centre for Statistics in Medicine, Oxford UK (G.S.C.)
| | - Max Stewart
- From the Nuffield Department of Orthopedics, Rheumatology and Musculoskeletal Sciences, Botnar Research Centre, Old Road Headington, Oxford OX3 7LD, UK (R.Y.L.K., C.H., M.S., G.S.C., D.F.); Department of Plastic Surgery, John Radcliffe Hospital, Oxford, UK (T.A.C., A.F.); Department of Vascular Surgery, Royal Berkshire Hospital, Reading, UK (B.J.); Department of Plastic Surgery, Stoke Mandeville Hospital, Aylesbury, Buckinghamshire UK (D.C.); and UK EQUATOR Center, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford Centre for Statistics in Medicine, Oxford UK (G.S.C.)
| | - Gary S Collins
- From the Nuffield Department of Orthopedics, Rheumatology and Musculoskeletal Sciences, Botnar Research Centre, Old Road Headington, Oxford OX3 7LD, UK (R.Y.L.K., C.H., M.S., G.S.C., D.F.); Department of Plastic Surgery, John Radcliffe Hospital, Oxford, UK (T.A.C., A.F.); Department of Vascular Surgery, Royal Berkshire Hospital, Reading, UK (B.J.); Department of Plastic Surgery, Stoke Mandeville Hospital, Aylesbury, Buckinghamshire UK (D.C.); and UK EQUATOR Center, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford Centre for Statistics in Medicine, Oxford UK (G.S.C.)
| | - Dominic Furniss
- From the Nuffield Department of Orthopedics, Rheumatology and Musculoskeletal Sciences, Botnar Research Centre, Old Road Headington, Oxford OX3 7LD, UK (R.Y.L.K., C.H., M.S., G.S.C., D.F.); Department of Plastic Surgery, John Radcliffe Hospital, Oxford, UK (T.A.C., A.F.); Department of Vascular Surgery, Royal Berkshire Hospital, Reading, UK (B.J.); Department of Plastic Surgery, Stoke Mandeville Hospital, Aylesbury, Buckinghamshire UK (D.C.); and UK EQUATOR Center, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford Centre for Statistics in Medicine, Oxford UK (G.S.C.)
| |
|
133
|
Kang Y, Ren Z, Zhang Y, Zhang A, Xu W, Zhang G, Dong Q. Deep Scale-Variant Network for Femur Trochanteric Fracture Classification with HP Loss. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:1560438. [PMID: 35388324 PMCID: PMC8977323 DOI: 10.1155/2022/1560438] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 01/22/2022] [Accepted: 02/17/2022] [Indexed: 11/18/2022]
Abstract
Automatic classification of femoral trochanteric fractures on edge computing devices is of great value for remote diagnosis and treatment. Nevertheless, building a highly accurate classifier for 31A1/31A2/31A3 fractures from X-ray images remains difficult because existing models fail to capture scale-variant and contextual information. This paper therefore proposes a deep scale-variant (DSV) network with a hybrid and progressive (HP) loss function to aggregate more informative representations of the fracture regions. More specifically, the DSV network is based on ResNet and integrates a purpose-built scale-variant (SV) layer with the HP loss, where the SV layer enhances the network's ability to extract scale-variant features and the HP loss pushes the network to exploit more contextual clues. To evaluate the proposed DSV network, we carry out a series of comparative experiments on real X-ray images; the results demonstrate that the DSV network outperforms other classification methods on this task.
Affiliation(s)
- Yuxiang Kang
- Department of Orthopaedics, Tianjin Hospital, Tianjin 300211, China
| | - Zhipeng Ren
- Department of Orthopaedics, Tianjin Hospital, Tianjin 300211, China
| | - Yinguang Zhang
- Department of Orthopaedics, Tianjin Hospital, Tianjin 300211, China
| | - Aiming Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
| | - Weizhe Xu
- School of Computer Science, The University of Manchester, M14 5TA, Manchester, UK
| | - Guokai Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
| | - Qiang Dong
- Department of Orthopaedics, Tianjin Hospital, Tianjin 300211, China
| |
|
134
|
Ramkumar PN, Luu BC, Haeberle HS, Karnuta JM, Nwachukwu BU, Williams RJ. Sports Medicine and Artificial Intelligence: A Primer. Am J Sports Med 2022; 50:1166-1174. [PMID: 33900125 DOI: 10.1177/03635465211008648] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/31/2023]
Abstract
Artificial intelligence (AI) represents the fourth industrial revolution and the next frontier in medicine poised to transform the field of orthopaedics and sports medicine, though widespread understanding of the fundamental principles and adoption of applications remain nascent. Recent research efforts into implementation of AI in the field of orthopaedic surgery and sports medicine have demonstrated great promise in predicting athlete injury risk, interpreting advanced imaging, evaluating patient-reported outcomes, reporting value-based metrics, and augmenting the patient experience. Not unlike the recent emphasis thrust upon physicians to understand the business of medicine, the future practice of sports medicine specialists will require a fundamental working knowledge of the strengths, limitations, and applications of AI-based tools. With appreciation, caution, and experience applying AI to sports medicine, the potential to automate tasks and improve data-driven insights may be realized to fundamentally improve patient care. In this Current Concepts review, we discuss the definitions, strengths, limitations, and applications of AI from the current literature as it relates to orthopaedic sports medicine.
Affiliation(s)
- Prem N Ramkumar
- Orthopaedic Machine Learning Laboratory, Cleveland Clinic, Cleveland, Ohio, USA
- Department of Orthopaedic Surgery, Brigham and Women's Hospital, Boston, Massachusetts, USA
| | - Bryan C Luu
- Orthopaedic Machine Learning Laboratory, Cleveland Clinic, Cleveland, Ohio, USA
- Department of Orthopaedic Surgery, Baylor College of Medicine, Houston, Texas, USA
| | - Heather S Haeberle
- Orthopaedic Machine Learning Laboratory, Cleveland Clinic, Cleveland, Ohio, USA
- Department of Orthopedic Surgery, Hospital for Special Surgery, New York, New York, USA
| | - Jaret M Karnuta
- Orthopaedic Machine Learning Laboratory, Cleveland Clinic, Cleveland, Ohio, USA
| | - Benedict U Nwachukwu
- Department of Orthopedic Surgery, Hospital for Special Surgery, New York, New York, USA
| | - Riley J Williams
- Department of Orthopedic Surgery, Hospital for Special Surgery, New York, New York, USA
| |
|
135
|
Wang X, Xu Z, Tong Y, Xia L, Jie B, Ding P, Bai H, Zhang Y, He Y. Detection and classification of mandibular fracture on CT scan using deep convolutional neural network. Clin Oral Investig 2022; 26:4593-4601. [PMID: 35218428 DOI: 10.1007/s00784-022-04427-8] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Accepted: 02/19/2022] [Indexed: 12/15/2022]
Abstract
OBJECTIVES This study aimed to evaluate the accuracy and reliability of convolutional neural networks (CNNs) for the detection and classification of mandibular fracture on spiral computed tomography (CT). MATERIALS AND METHODS Between January 2013 and July 2020, 686 patients with mandibular fractures who underwent CT scanning were classified and annotated by three experienced maxillofacial surgeons serving as the ground truth. An algorithm including two convolutional neural networks (U-Net and ResNet) was trained, validated, and tested using 222, 56, and 408 CT scans, respectively. The diagnostic performance of the algorithm was compared with the ground truth and evaluated by DICE, accuracy, sensitivity, specificity, and area under the ROC curve (AUC). RESULTS One thousand five hundred six mandibular fractures in nine subregions of 686 patients were diagnosed. The DICE of mandible segmentation using U-Net was 0.943. The accuracies of nine subregions were all above 90%, with a mean AUC of 0.956. CONCLUSIONS CNNs showed comparable reliability and accuracy in detecting and classifying mandibular fractures on CT. CLINICAL RELEVANCE The algorithm for automatic detection and classification of mandibular fractures will help improve diagnostic efficiency and extend specialist expertise to areas with limited medical resources.
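The segmentation stage above is scored with the DICE coefficient, 2|A∩B| / (|A|+|B|). A minimal, network-independent implementation over flat binary masks (the masks here are invented examples):

```python
# DICE coefficient between two binary masks, flattened to 1-D lists.
# 2 * |intersection| / (|pred| + |truth|); empty-vs-empty scores 1.0.

def dice(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(round(dice(pred, truth), 3))  # 2*2/(3+3) -> 0.667
```

A reported DICE of 0.943 for mandible segmentation corresponds to near-complete overlap between the predicted and ground-truth masks under this formula.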
Affiliation(s)
- Xuebing Wang
- Department of Oral and Maxillofacial Surgery, National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, National Clinical Research Center for Oral Diseases, Peking University School and Hospital of Stomatology, No 22 Zhongguancun South Road, Beijing, 100081, People's Republic of China
| | | | - Yanhang Tong
- Department of Oral and Maxillofacial Surgery, National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, National Clinical Research Center for Oral Diseases, Peking University School and Hospital of Stomatology, No 22 Zhongguancun South Road, Beijing, 100081, People's Republic of China
| | - Long Xia
- Plastic Surgery Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
| | - Bimeng Jie
- Department of Oral and Maxillofacial Surgery, National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, National Clinical Research Center for Oral Diseases, Peking University School and Hospital of Stomatology, No 22 Zhongguancun South Road, Beijing, 100081, People's Republic of China
| | | | | | - Yi Zhang
- Department of Oral and Maxillofacial Surgery, National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, National Clinical Research Center for Oral Diseases, Peking University School and Hospital of Stomatology, No 22 Zhongguancun South Road, Beijing, 100081, People's Republic of China
| | - Yang He
- Department of Oral and Maxillofacial Surgery, National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, National Clinical Research Center for Oral Diseases, Peking University School and Hospital of Stomatology, No 22 Zhongguancun South Road, Beijing, 100081, People's Republic of China.
| |
|
136
|
An Algorithm for Automatic Rib Fracture Recognition Combined with nnU-Net and DenseNet. EVIDENCE-BASED COMPLEMENTARY AND ALTERNATIVE MEDICINE 2022; 2022:5841451. [PMID: 35251210 PMCID: PMC8896936 DOI: 10.1155/2022/5841451] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Accepted: 01/31/2022] [Indexed: 11/29/2022]
Abstract
Rib fracture is the most common thoracic trauma seen in clinical practice. Most patients present with multiple fractures of different types, so accurate and rapid identification of all injured regions is crucial for treatment. This study proposes a two-stage rib fracture recognition model based on nnU-Net. In the first stage, a deep learning segmentation model is trained to generate candidate rib fracture regions; in the second stage, a deep learning classification model classifies each candidate region as fracture or non-fracture. The results show that the proposed two-stage model improves the accuracy of rib fracture recognition and reduces both the false-positive and false-negative rates of detection, better assisting doctors in identifying fracture regions.
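The two-stage control flow described in the abstract (segmentation proposes candidates, a classifier accepts or rejects each one) can be sketched with trivial stand-ins for both networks. Everything below — the region names, scores, and threshold — is an invented assumption; only the propose-then-filter structure mirrors the paper.

```python
# Sketch of a two-stage detect-then-classify pipeline: stage 1 proposes
# candidate regions with confidence scores, stage 2 filters them. Both
# stages are trivial stand-ins for the segmentation and classification
# networks; only the control flow is the point.

def propose_regions(scan):
    # stand-in for the segmentation stage: (region, score) candidates
    return [(region, score) for region, score in scan]

def is_fracture(region, score, threshold=0.5):
    # stand-in for the second-stage classifier
    return score >= threshold

def detect_fractures(scan):
    return [region for region, score in propose_regions(scan)
            if is_fracture(region, score)]

scan = [("rib4-left", 0.91), ("rib7-right", 0.32), ("rib9-left", 0.78)]
print(detect_fractures(scan))  # ['rib4-left', 'rib9-left']
```

The second stage exists precisely to cut the false positives a sensitive segmenter produces, which is the trade-off the abstract reports improving.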
|
137
|
A Progressive and Cross-Domain Deep Transfer Learning Framework for Wrist Fracture Detection. JOURNAL OF ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING RESEARCH 2022. [DOI: 10.2478/jaiscr-2022-0007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
There has been an amplified focus on and benefit from the adoption of artificial intelligence (AI) in medical imaging applications. However, deep learning approaches involve training with massive amounts of annotated data in order to guarantee generalization and achieve high accuracies. Gathering and annotating large sets of training images require expertise which is both expensive and time-consuming, especially in the medical field. Furthermore, in health care systems where mistakes can have catastrophic consequences, there is a general mistrust in the black-box aspect of AI models. In this work, we focus on improving the performance of medical imaging applications when limited data is available while focusing on the interpretability aspect of the proposed AI model. This is achieved by employing a novel transfer learning framework, progressive transfer learning, an automated annotation technique and a correlation analysis experiment on the learned representations.
Progressive transfer learning helps jump-start the training of deep neural networks while improving the performance by gradually transferring knowledge from two source tasks into the target task. It is empirically tested on the wrist fracture detection application by first training a general radiology network, RadiNet, and using its weights to initialize RadiNet_wrist, which is trained on wrist images to detect fractures. Experiments show that RadiNet_wrist achieves an accuracy of 87% and an AUC ROC of 94%, as opposed to 83% and 92% when it is pre-trained on the ImageNet dataset.
This improvement in performance is investigated within an explainable AI framework. More concretely, the learned deep representations of RadiNet_wrist are compared to those learned by the baseline model by conducting a correlation analysis experiment. The results show that, when transfer learning is gradually applied, some features are learned earlier in the network. Moreover, the deep layers in the progressive transfer learning framework are shown to encode features that are not encountered when traditional transfer learning techniques are applied.
In addition to the empirical results, a clinical study is conducted and the performance of RadiNet_wrist is compared to that of an expert radiologist. We found that RadiNet_wrist exhibited similar performance to that of radiologists with more than 20 years of experience.
This motivates follow-up research to train on more data to feasibly surpass radiologists' performance, and to investigate the interpretability of AI models in the healthcare domain, where the decision-making process needs to be credible and transparent.
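The staged weight-transfer idea can be sketched abstractly by treating networks as layer-name to weight mappings. The layer names and stage schedule below are illustrative assumptions, not the authors' code:

```python
# Sketch of progressive transfer learning: source-network weights are copied
# into the target network in stages rather than all at once, so earlier
# stages of target training already benefit from transferred layers.

def progressive_transfer(source_weights, target_weights, schedule):
    """Copy source layers into the target according to a staged schedule.

    `schedule` maps stage index -> list of layer names transferred at that
    stage. Returns a snapshot of the target weights after each stage; in
    practice the target network would be fine-tuned between stages.
    """
    snapshots = []
    current = dict(target_weights)
    for stage in sorted(schedule):
        for layer in schedule[stage]:
            current[layer] = source_weights[layer]  # transfer knowledge
        snapshots.append(dict(current))             # state after this stage
    return snapshots
```

Compared with one-shot transfer (initializing every layer at once, as in ImageNet pre-training), the gradual schedule is what the abstract credits for features being learned earlier in the network.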
Collapse
|
138
|
Liao Z, Liao K, Shen H, van Boxel MF, Prijs J, Jaarsma RL, Doornberg JN, Hengel AVD, Verjans JW. CNN Attention Guidance for Improved Orthopedics Radiographic Fracture Classification. IEEE J Biomed Health Inform 2022; 26:3139-3150. [PMID: 35192467 DOI: 10.1109/jbhi.2022.3152267] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Convolutional neural networks (CNNs) have gained significant popularity in orthopedic imaging in recent years due to their ability to solve fracture classification problems. A common criticism of CNNs is their opaque learning and reasoning process, making it difficult to trust machine diagnosis and hindering the subsequent adoption of such algorithms in clinical settings. This is especially true when the CNN is trained with a limited amount of medical data, which is a common issue, as curating a sufficiently large amount of annotated medical image data is a long and costly process. While interest has been devoted to explaining CNN-learnt knowledge by visualizing network attention, the use of the visualized attention to improve network learning has rarely been investigated. This paper explores the effectiveness of regularizing a CNN with human-provided attention guidance on where in the image the network should look for answering clues. On two orthopedic radiographic fracture classification datasets, we demonstrate through extensive experiments that explicit human-guided attention can indeed direct correct network attention and consequently significantly improve classification performance. The development code for the proposed attention guidance is publicly available on GitHub.
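One way to regularize a network with human attention guidance, as the abstract describes, is to penalize attention mass falling outside a human-annotated region of interest. The specific penalty form below is an illustrative assumption, not the exact loss from the paper:

```python
import numpy as np

# Sketch of an attention-guidance regularizer: the usual classification loss
# is augmented with a penalty on (normalized) network attention that lies
# outside the human-provided binary mask of where the answer should be found.

def attention_guided_loss(cls_loss, attention_map, human_mask, lam=0.1):
    """Total loss = classification loss + lam * off-target attention mass."""
    attn = attention_map / (attention_map.sum() + 1e-8)  # normalize to sum ~1
    off_target = attn * (1.0 - human_mask)               # attention off-mask
    return cls_loss + lam * off_target.sum()
```

During training, the gradient of the penalty pushes the network to concentrate its attention inside the annotated region, which is the mechanism by which human guidance can "direct correct network attention."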
Collapse
|
139
|
Applications of artificial intelligence and machine learning for the hip and knee surgeon: current state and implications for the future. INTERNATIONAL ORTHOPAEDICS 2022; 46:937-944. [PMID: 35171335 DOI: 10.1007/s00264-022-05346-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Accepted: 02/05/2022] [Indexed: 12/17/2022]
Abstract
BACKGROUND Artificial Intelligence (AI)/Machine Learning (ML) applications have proven effective in improving diagnosis, stratifying risk, and predicting outcomes in many medical specialties, including orthopaedics. CHALLENGES AND DISCUSSION In hip and knee reconstruction surgery, AI/ML have not yet made it to clinical practice. In this review, we present sound AI/ML applications in the field of hip and knee degenerative disease and reconstruction. From osteoarthritis (OA) diagnosis and prediction of its progression, clinical decision-making, and identification of hip and knee implants to prediction of clinical outcomes and complications following a reconstruction procedure of these joints, we report how AI/ML systems could facilitate data-driven personalized care for our patients.
Collapse
|
140
|
Alzaid A, Wignall A, Dogramadzi S, Pandit H, Xie SQ. Automatic detection and classification of peri-prosthetic femur fracture. Int J Comput Assist Radiol Surg 2022; 17:649-660. [PMID: 35157227 PMCID: PMC8948116 DOI: 10.1007/s11548-021-02552-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Accepted: 12/21/2021] [Indexed: 12/02/2022]
Abstract
Purpose Object classification and localization is a key task of computer-aided diagnosis (CAD) tools. Although numerous generic deep learning (DL) models have been developed for CAD, no work in the literature has evaluated their effectiveness when utilized to diagnose fractures in proximity of joint implants. In this work, we aim to assess the performance of existing classification systems on binary and multi-class problems (fracture types) using plain radiographs. In addition, we evaluated the performance of object detection systems using one- and two-stage DL architectures. Methods A data set of 1272 X-ray images of peri-prosthetic femur fractures (PFFs) was collected. The fractures were annotated with bounding boxes and classified according to the Vancouver Classification System (type A, B, C) by two clinical specialists. Four classification models (Densenet161, Resnet50, Inception, and VGG) and two object detection models (Faster RCNN and RetinaNet) were evaluated, and their performance compared. Six confusion-matrix-based measures were reported to evaluate fracture classification. For localization of the fracture, Average Precision and localization accuracy were reported. Results The Resnet50 showed the best performance, with 95% accuracy and 94% F1-score in the binary classification (fracture/normal). In addition, the Resnet50 showed 90% accuracy in multi-class classification (normal, Vancouver type A, B, and C). Conclusions A large data set of PFF images, with fracture features annotated by two independent assessments, was created to implement a DL-based approach for detecting, classifying, and localizing PFFs. This approach was shown to be a promising diagnostic tool for fractures in proximity of joint implants.
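The accuracy and F1-score reported above are among the standard confusion-matrix-based measures; a minimal sketch of how they are derived from the four cell counts:

```python
# Accuracy and F1 computed from the binary confusion matrix:
# tp/fp/fn/tn = true positives, false positives, false negatives, true negatives.

def binary_metrics(tp, fp, fn, tn):
    """Return accuracy and F1-score from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # also called sensitivity
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "f1": 2 * precision * recall / (precision + recall),
    }
```

F1 is the harmonic mean of precision and recall, which is why it can sit slightly below accuracy (94% vs. 95% here) when false positives and false negatives are not perfectly balanced.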
Collapse
Affiliation(s)
- Asma Alzaid
- School of Electrical and Electronic Engineering, University of Leeds, Leeds, LS2 9JT, UK.
| | | | - Sanja Dogramadzi
- Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, UK
| | - Hemant Pandit
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Leeds Institute of Rheumatic and Musculoskeletal Medicine, Leeds, UK
| | - Sheng Quan Xie
- School of Electrical and Electronic Engineering, University of Leeds, Leeds, LS2 9JT, UK; collaborates with the Institute of Rehabilitation Engineering, Binzhou Medical University, Yantai, China.
| |
Collapse
|
141
|
A Surgeon's Guide to Understanding Artificial Intelligence and Machine Learning Studies in Orthopaedic Surgery. Curr Rev Musculoskelet Med 2022; 15:121-132. [PMID: 35141847 DOI: 10.1007/s12178-022-09738-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 01/17/2022] [Indexed: 10/19/2022]
Abstract
PURPOSE OF REVIEW In recent years, machine learning techniques have been increasingly utilized across medicine, impacting the practice and delivery of healthcare. The data-driven nature of orthopaedic surgery presents many targets for improvement through the use of artificial intelligence, which is reflected in the increasing number of publications in the medical literature. However, the unique methodologies utilized in AI studies can present a barrier to its widespread acceptance and use in orthopaedics. The purpose of our review is to provide a tool that can be used by practitioners to better understand and ultimately leverage AI studies. RECENT FINDINGS The increasing interest in machine learning across medicine is reflected in a greater utilization of AI in recent medical literature. The process of designing machine learning studies includes study design, model choice, data collection/handling, model development, training, testing, and interpretation. Recent studies leveraging ML in orthopaedics provide useful examples for future research endeavors. This manuscript intends to create a guide discussing the use of machine learning and artificial intelligence in orthopaedic surgery research. Our review outlines the process of creating a machine learning algorithm and discusses the different model types, utilizing examples from recent orthopaedic literature to illustrate the techniques involved.
Collapse
|
142
|
Thomas LB, Mastorides SM, Viswanadhan NA, Jakey CE, Borkowski AA. Artificial Intelligence: Review of Current and Future Applications in Medicine. Fed Pract 2022; 38:527-538. [PMID: 35136337 DOI: 10.12788/fp.0174] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Background The role of artificial intelligence (AI) in health care is expanding rapidly. Currently, there are at least 29 US Food and Drug Administration-approved AI health care devices that apply to numerous medical specialties, and many more are in development. Observations With increasing expectations for all health care sectors to deliver timely, fiscally responsible, high-quality health care, AI has potential utility in numerous areas, such as image analysis, improved workflow and efficiency, public health, and epidemiology, and can aid in processing large volumes of patient and medical data. In this review, we describe basic terminology, principles, and general AI applications relating to health care. We then discuss current and future applications for a variety of medical specialties. Finally, we discuss the future potential of AI along with the potential risks and limitations of current AI technology. Conclusions AI can improve diagnostic accuracy, increase patient safety, assist with patient triage, monitor disease progression, and assist with treatment decisions.
Collapse
Affiliation(s)
- L Brannon Thomas
- James A. Haley Veterans' Hospital, Tampa, Florida
- University of South Florida, Morsani College of Medicine, Tampa
| | - Stephen M Mastorides
- James A. Haley Veterans' Hospital, Tampa, Florida
- University of South Florida, Morsani College of Medicine, Tampa
| | | | - Colleen E Jakey
- James A. Haley Veterans' Hospital, Tampa, Florida
- University of South Florida, Morsani College of Medicine, Tampa
| | - Andrew A Borkowski
- James A. Haley Veterans' Hospital, Tampa, Florida
- University of South Florida, Morsani College of Medicine, Tampa
| |
Collapse
|
143
|
Your mileage may vary: impact of data input method for a deep learning bone age app's predictions. Skeletal Radiol 2022; 51:423-429. [PMID: 34476558 DOI: 10.1007/s00256-021-03897-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Revised: 08/26/2021] [Accepted: 08/26/2021] [Indexed: 02/02/2023]
Abstract
OBJECTIVE The purpose of this study was to evaluate agreement in predictions made by a bone age prediction application ("app") among three data input methods. METHODS The 16Bit Bone Age app is a browser-based deep learning application for predicting bone age on pediatric hand radiographs; the recommended data input methods are direct image file upload or smartphone capture of the image. We collected 50 hand radiographs, split equally among 5 bone age groups. Three observers used the 16Bit Bone Age app to assess these images using 3 different data input methods: (1) direct image upload, (2) smartphone photo of the image in a radiology reading room, and (3) smartphone photo of the image in a clinic. RESULTS Interobserver agreement was excellent for direct upload (ICC = 1.00) and for photos in the reading room (ICC = 0.96), and good for photos in the clinic (ICC = 0.82). Intraobserver agreement for the entire test set across the 3 data input methods was variable, with ICCs of 0.95, 0.96, and 0.57 for the 3 observers, respectively. DISCUSSION Our findings indicate that different data input methods can result in discordant bone age predictions from the 16Bit Bone Age app. Further study is needed to determine the impact of data input methods, such as smartphone image capture, on deep learning app performance and accuracy.
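The agreement statistics above are intraclass correlation coefficients. As a rough sketch, here is the two-way random-effects, absolute-agreement, single-rater form, ICC(2,1); the abstract does not state which ICC variant the study used, so this choice is an assumption:

```python
import numpy as np

# ICC(2,1) from the two-way ANOVA decomposition (Shrout-Fleiss convention):
# subjects and raters are both random effects, absolute agreement, single rater.

def icc2_1(ratings):
    """`ratings` is an (n_subjects, k_raters) array; returns ICC(2,1)."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    row_means = r.mean(axis=1)                    # per-subject means
    col_means = r.mean(axis=0)                    # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
    sse = ((r - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

A constant offset between raters (a systematic bias, such as one input method shifting all predictions) lowers ICC(2,1) even when rankings agree perfectly, which is why absolute-agreement forms suit method-comparison studies like this one.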
Collapse
|
144
|
Ren M, Yi PH. Deep learning detection of subtle fractures using staged algorithms to mimic radiologist search pattern. Skeletal Radiol 2022; 51:345-353. [PMID: 33576861 DOI: 10.1007/s00256-021-03739-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Revised: 01/25/2021] [Accepted: 02/07/2021] [Indexed: 02/02/2023]
Abstract
OBJECTIVE To develop and evaluate a two-stage deep convolutional neural network system that mimics a radiologist's search pattern for detecting two small fractures: triquetral avulsion fractures and Segond fractures. MATERIALS AND METHODS We obtained 231 lateral wrist radiographs and 173 anteroposterior knee radiographs from the Stanford MURA and LERA datasets and the public domain to train and validate a two-stage deep convolutional neural network system: (1) object detectors that crop the dorsal triquetrum or lateral tibial condyle, trained on control images, followed by (2) classifiers for triquetral and Segond fractures, trained on a 1:1 case:control split. A second set of classifiers was trained on uncropped images for comparison. External test sets of 50 lateral wrist radiographs and 24 anteroposterior knee radiographs were used to evaluate generalizability. Gradient-class activation mapping was used to inspect image regions of greater importance in deciding the final classification. RESULTS The object detectors accurately cropped the regions of interest in all validation and test images. The two-stage system achieved cross-validated area under the receiver operating characteristic curve values of 0.959 and 0.989 on triquetral and Segond fractures, compared with 0.860 (p = 0.0086) and 0.909 (p = 0.0074), respectively, for a one-stage classifier. Two-stage cross-validation accuracies were 90.8% and 92.5% for triquetral and Segond fractures, respectively. CONCLUSION A two-stage pipeline increases accuracy in the detection of subtle fractures on radiographs compared with a one-stage classifier and generalized well to external test data. Focusing attention on specific image regions appears to improve detection of subtle findings that may otherwise be missed.
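The AUC values reported above have a useful rank interpretation: the area under the ROC curve equals the probability that a randomly chosen fracture case receives a higher score than a randomly chosen control. A minimal sketch of that computation (illustrative, not the authors' code):

```python
# AUC via the rank (Mann-Whitney) interpretation: the fraction of
# (positive, negative) score pairs ordered correctly, counting ties as half.

def auc_roc(pos_scores, neg_scores):
    """Area under the ROC curve from raw classifier scores."""
    pairs = [(p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores]
    return sum(pairs) / len(pairs)
```

This pairwise form makes clear why an AUC of 0.959 vs. 0.860 is a substantial gap: it is the probability of correctly ranking a case above a control, compared directly between the two-stage and one-stage systems.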
Collapse
Affiliation(s)
- Mark Ren
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Paul H Yi
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- University of Maryland Intelligent Imaging Center, Department of Radiology, University of Maryland School of Medicine, Baltimore, MD, USA
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, USA
| |
Collapse
|
145
|
Laur O, Wang B. Musculoskeletal trauma and artificial intelligence: current trends and projections. Skeletal Radiol 2022; 51:257-269. [PMID: 34089338 DOI: 10.1007/s00256-021-03824-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Revised: 05/13/2021] [Accepted: 05/18/2021] [Indexed: 02/02/2023]
Abstract
Musculoskeletal trauma accounts for a significant fraction of emergency department visits and patients seeking urgent care, with a high financial cost to society. Diagnostic imaging is indispensable in the workup and management of trauma patients. However, diagnostic imaging represents a complex multifaceted system, with many aspects of its workflow prone to inefficiencies or human error. Recent technological innovations in artificial intelligence and machine learning have shown promise to revolutionize our systems for providing medical care to patients. This review will provide a general overview of the current state of artificial intelligence and machine learning applications in different aspects of trauma imaging and provide a vision for how such applications could be leveraged to enhance our diagnostic imaging systems and optimize patient outcomes.
Collapse
Affiliation(s)
- Olga Laur
- Division of Musculoskeletal Radiology, Department of Radiology, NYU Langone Health, 301 East 17th Street, 6th Floor, New York, NY, 10003, USA
| | - Benjamin Wang
- Division of Musculoskeletal Radiology, Department of Radiology, NYU Langone Health, 301 East 17th Street, 6th Floor, New York, NY, 10003, USA.
| |
Collapse
|
146
|
AI MSK clinical applications: orthopedic implants. Skeletal Radiol 2022; 51:305-313. [PMID: 34350476 DOI: 10.1007/s00256-021-03879-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 07/15/2021] [Accepted: 07/22/2021] [Indexed: 02/02/2023]
Abstract
Artificial intelligence (AI) and deep learning have multiple potential uses in aiding the musculoskeletal radiologist in the radiological evaluation of orthopedic implants. These include identification of implants, characterization of implants according to anatomic type, identification of specific implant models, and evaluation of implants for positioning and complications. In addition, natural language processing (NLP) can aid in the acquisition of clinical information from the medical record that can help with tasks like prepopulating radiology reports. Several proof-of-concept works have been published in the literature describing the application of deep learning toward these various tasks, with performance comparable to that of expert musculoskeletal radiologists. Although much work remains to bring these proof-of-concept algorithms into clinical deployment, AI has tremendous potential toward automating these tasks, thereby augmenting the musculoskeletal radiologist.
Collapse
|
147
|
Artificial Intelligence in Diagnostic Radiology: Where Do We Stand, Challenges, and Opportunities. J Comput Assist Tomogr 2022; 46:78-90. [PMID: 35027520 DOI: 10.1097/rct.0000000000001247] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
ABSTRACT Artificial intelligence (AI) is the most revolutionary development in the health care industry in the current decade, with diagnostic imaging having the greatest share in such development. Machine learning and deep learning (DL) are subclasses of AI that show breakthrough performance in image analysis. They have become the state of the art in the field of image classification and recognition. Machine learning deals with the extraction of the important characteristic features from images, whereas DL uses neural networks to solve such problems with better performance. In this review, we discuss the current applications of machine learning and DL in the field of diagnostic radiology. Deep learning applications can be divided into medical imaging analysis and applications beyond analysis. In the field of medical imaging analysis, deep convolutional neural networks are used for image classification, lesion detection, and segmentation. Recurrent neural networks are also used when extracting information from electronic medical records and to augment the use of convolutional neural networks in the field of image classification. Generative adversarial networks have been used explicitly to generate high-resolution computed tomography and magnetic resonance images and to map computed tomography images to the corresponding magnetic resonance imaging. Beyond image analysis, DL can be used for quality control, workflow organization, and reporting. In this article, we review the most current AI models used in medical imaging research, providing a brief explanation of the various models described in the literature within the past 5 years. Emphasis is placed on the various DL models, as they are the most state-of-the-art in imaging analysis.
Collapse
|
148
|
Seol YJ, Kim YJ, Kim YS, Cheon YW, Kim KG. A Study on 3D Deep Learning-Based Automatic Diagnosis of Nasal Fractures. SENSORS 2022; 22:s22020506. [PMID: 35062465 PMCID: PMC8780993 DOI: 10.3390/s22020506] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 01/03/2022] [Accepted: 01/04/2022] [Indexed: 12/11/2022]
Abstract
This paper reports a study on 3-dimensional deep-learning-based automatic diagnosis of nasal fractures. (1) Background: The nasal bone is the most protuberant feature of the face; it is therefore highly vulnerable to facial trauma, and its fractures are the most common facial fractures worldwide. In addition, its adhesion causes rapid deformation, so a clear diagnosis is needed early after fracture onset. (2) Methods: The collected computed tomography images were reconstructed into isotropic voxel data covering the whole region of the nasal bone, represented in a fixed cubic volume. The configured 3-dimensional input data were then automatically classified by deep residual neural networks (3D-ResNet34 and ResNet50) using spatial context information in a single network, whose performance was evaluated by 5-fold cross-validation. (3) Results: The classification of nasal fractures with simple 3D-ResNet34 and ResNet50 networks achieved areas under the receiver operating characteristic curve of 94.5% and 93.4% for binary classification, respectively, both indicating unprecedentedly high performance in the task. (4) Conclusions: This paper presents the possibility of automatic nasal bone fracture diagnosis using a 3-dimensional ResNet-based single classification network, on which future research can build to improve the diagnostic environment.
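The 5-fold cross-validation used above partitions the data so that every scan serves in the validation set exactly once; a minimal index-level sketch of that splitting scheme:

```python
# k-fold cross-validation index splitter: returns k (train, validation)
# pairs; validation folds are disjoint and together cover every sample.

def k_fold_splits(n, k=5):
    """Split indices 0..n-1 into k folds, sizes differing by at most one."""
    indices = list(range(n))
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        splits.append((train, val))
        start += size
    return splits
```

Metrics such as the AUCs above are then computed once per fold and averaged, so no model is ever evaluated on data it was trained on.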
Collapse
Affiliation(s)
- Yu Jin Seol
- Department of Biomedical Engineering, Gachon University, 191, Hambangmoe-ro, Yeonsu-gu, Incheon 21936, Korea;
| | - Young Jae Kim
- Department of Biomedical Engineering, Gachon University College of Medicine, 38-13 Docjeom-ro 3 beon-gil, Namdong-gu, Incheon 21565, Korea;
| | - Yoon Sang Kim
- Department of Plastic and Reconstructive Surgery, Gachon University Gil Medical Center, College of Medicine, Incheon 21565, Korea;
| | - Young Woo Cheon
- Department of Plastic and Reconstructive Surgery, Gachon University Gil Medical Center, College of Medicine, Incheon 21565, Korea;
- Correspondence: (Y.W.C.); (K.G.K.)
| | - Kwang Gi Kim
- Department of Biomedical Engineering, Gachon University College of Medicine, 38-13 Docjeom-ro 3 beon-gil, Namdong-gu, Incheon 21565, Korea;
- Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Seongnam-si 13120, Korea
- Correspondence: (Y.W.C.); (K.G.K.)
| |
Collapse
|
149
|
Jia Y, Wang H, Chen W, Wang Y, Yang B. An attention‐based cascade R‐CNN model for sternum fracture detection in X‐ray images. CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY 2022. [DOI: 10.1049/cit2.12072] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023] Open
Affiliation(s)
- Yang Jia
- School of Computer Xi'an University of Posts and Telecommunications Xi'an Shaanxi China
- Shaanxi Key Laboratory of Network Data Intelligent Processing Xi'an University of Posts and Telecommunications Xi'an Shaanxi China
- Xi'an Key Laboratory of Big Data and Intelligent Computing Xi'an Shaanxi China
| | - Haijuan Wang
- School of Computer Xi'an University of Posts and Telecommunications Xi'an Shaanxi China
- Shaanxi Key Laboratory of Network Data Intelligent Processing Xi'an University of Posts and Telecommunications Xi'an Shaanxi China
- Xi'an Key Laboratory of Big Data and Intelligent Computing Xi'an Shaanxi China
| | - Weiguang Chen
- School of Computer Xi'an University of Posts and Telecommunications Xi'an Shaanxi China
- Shaanxi Key Laboratory of Network Data Intelligent Processing Xi'an University of Posts and Telecommunications Xi'an Shaanxi China
- Xi'an Key Laboratory of Big Data and Intelligent Computing Xi'an Shaanxi China
| | - Yagang Wang
- School of Computer Xi'an University of Posts and Telecommunications Xi'an Shaanxi China
- Shaanxi Key Laboratory of Network Data Intelligent Processing Xi'an University of Posts and Telecommunications Xi'an Shaanxi China
- Xi'an Key Laboratory of Big Data and Intelligent Computing Xi'an Shaanxi China
| | - Bin Yang
- Department of Radiology Xi'an Honghui Hospital Xi'an China
| |
Collapse
|
150
|
Artificial Intelligence to Diagnose Tibial Plateau Fractures: An Intelligent Assistant for Orthopedic Physicians. Curr Med Sci 2022; 41:1158-1164. [PMID: 34971441 PMCID: PMC8718992 DOI: 10.1007/s11596-021-2501-4] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2021] [Accepted: 11/18/2021] [Indexed: 01/03/2023]
Abstract
Objective To explore a new artificial intelligence (AI)-aided method to assist the clinical diagnosis of tibial plateau fractures (TPFs) and to measure its validity and feasibility. Methods A total of 542 X-rays of TPFs were collected as a reference database. An AI algorithm (RetinaNet) was trained to analyze and detect TPFs on the X-rays. The ability of the AI algorithm was determined by indexes such as detection accuracy and time taken for analysis, and its performance was compared with that of orthopedic physicians. Results The AI algorithm showed a detection accuracy of 0.91 for the identification of TPFs, similar to the performance of orthopedic physicians (0.92±0.03). The average analysis time of the AI was 0.56 s, 16 times faster than human performance (8.44±3.26 s). Conclusion The AI algorithm is a valid and efficient method for the clinical diagnosis of TPFs. It can be a useful assistant for orthopedic physicians, streamlining clinical workflow and helping to safeguard patient health and safety.
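For a detector such as RetinaNet, "detection accuracy" rests on matching predicted boxes to ground truth by intersection-over-union. A standard sketch of that matching criterion (the 0.5 threshold is a common convention; the abstract does not state the study's threshold):

```python
# Intersection-over-union (IoU) for axis-aligned boxes (x0, y0, x1, y1),
# the standard criterion for deciding whether a predicted fracture box
# matches a ground-truth annotation.

def iou(a, b):
    """Overlap area divided by union area of two boxes; 0 if disjoint."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def is_hit(pred, truth, threshold=0.5):
    """A prediction counts as a correct detection if IoU >= threshold."""
    return iou(pred, truth) >= threshold
```

A detection accuracy of 0.91 then corresponds to the fraction of cases whose predicted box clears this overlap criterion.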
Collapse
|