51.
Abstract
We present an overview of current clinical applications of artificial intelligence in musculoskeletal imaging, as well as potential future applications and techniques.
52. Oliveira E Carmo L, van den Merkhof A, Olczak J, Gordon M, Jutte PC, Jaarsma RL, IJpma FFA, Doornberg JN, Prijs J. An increasing number of convolutional neural networks for fracture recognition and classification in orthopaedics: are these externally validated and ready for clinical application? Bone Jt Open 2021;2:879-885. PMID: 34669518; PMCID: PMC8558452; DOI: 10.1302/2633-1462.210.bjo-2021-0133.
Abstract
Aims The number of convolutional neural networks (CNN) available for fracture detection and classification is rapidly increasing. External validation of a CNN on a temporally separate (separated by time) or geographically separate (separated by location) dataset is crucial to assess generalizability of the CNN before application to clinical practice in other institutions. We aimed to answer the following questions: are current CNNs for fracture recognition externally valid?; which methods are applied for external validation (EV)?; and what are the reported performances on the EV sets compared with the internal validation (IV) sets of these CNNs? Methods The PubMed and Embase databases were systematically searched from January 2010 to October 2020 according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The type of EV, characteristics of the external dataset, and diagnostic performance characteristics on the IV and EV datasets were collected and compared. Quality assessment was conducted using a seven-item checklist based on a modified Methodological Index for Non-Randomized Studies (MINORS) instrument. Results Out of 1,349 studies, 36 reported development of a CNN for fracture detection and/or classification. Of these, only four (11%) reported a form of EV. One study used temporal EV, one conducted both temporal and geographical EV, and two used geographical EV. When comparing the CNNs' performance on the IV set versus the EV set, the following were found: AUCs of 0.967 (IV) versus 0.975 (EV), 0.976 (IV) versus 0.985 to 0.992 (EV), 0.93 to 0.96 (IV) versus 0.80 to 0.89 (EV), and F1-scores of 0.856 to 0.863 (IV) versus 0.757 to 0.840 (EV). Conclusion The number of externally validated CNNs in orthopaedic trauma for fracture recognition is still scarce. This greatly limits the potential for transfer of these CNNs from the developing institute to another hospital to achieve similar diagnostic performance. We recommend the use of geographical EV and statements such as the Consolidated Standards of Reporting Trials-Artificial Intelligence (CONSORT-AI), the Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence (SPIRIT-AI), and the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis-Machine Learning (TRIPOD-ML) to critically appraise performance of CNNs, improve methodological rigor and quality of future models, and facilitate eventual implementation in clinical practice.
Affiliation(s)
- Luisa Oliveira E Carmo
- Department of Orthopaedic Surgery, University Medical Centre, University of Groningen, Groningen, Groningen, Netherlands
- Anke van den Merkhof
- Department of Orthopaedic Surgery, Flinders Medical Centre, Bedford Park, Adelaide, South Australia, Australia
- Flinders University, Bedford Park, Adelaide, South Australia, Australia
- Jakub Olczak
- Institute of Clinical Sciences, Danderyd University Hospital, Karolinska Institute, Stockholm, Sweden
- Max Gordon
- Institute of Clinical Sciences, Danderyd University Hospital, Karolinska Institute, Stockholm, Sweden
- Paul C Jutte
- Department of Orthopaedic Surgery, University Medical Centre, University of Groningen, Groningen, Groningen, Netherlands
- Ruurd L Jaarsma
- Department of Orthopaedic Surgery, Flinders Medical Centre, Bedford Park, Adelaide, South Australia, Australia
- Flinders University, Bedford Park, Adelaide, South Australia, Australia
- Frank F A IJpma
- Department of Trauma Surgery, University Medical Centre Groningen, University of Groningen, Groningen, Groningen, Netherlands
- Job N Doornberg
- Department of Orthopaedic Surgery, University Medical Centre, University of Groningen, Groningen, Groningen, Netherlands
- Department of Orthopaedic Surgery, Flinders Medical Centre, Bedford Park, Adelaide, South Australia, Australia
- Flinders University, Bedford Park, Adelaide, South Australia, Australia
- Department of Trauma Surgery, University Medical Centre Groningen, University of Groningen, Groningen, Groningen, Netherlands
- Jasper Prijs
- Department of Orthopaedic Surgery, University Medical Centre, University of Groningen, Groningen, Groningen, Netherlands
- Department of Orthopaedic Surgery, Flinders Medical Centre, Bedford Park, Adelaide, South Australia, Australia
- Flinders University, Bedford Park, Adelaide, South Australia, Australia
- Department of Trauma Surgery, University Medical Centre Groningen, University of Groningen, Groningen, Groningen, Netherlands
- Machine Learning Consortium
53. Kitamura G. Hanging protocol optimization of lumbar spine radiographs with machine learning. Skeletal Radiol 2021;50:1809-1819. PMID: 33590305; PMCID: PMC8277694; DOI: 10.1007/s00256-021-03733-8.
Abstract
OBJECTIVES The purpose of this study was to determine whether machine learning algorithms can be utilized to optimize the hanging protocol of lumbar spine radiographs. Specifically, we explored whether machine learning models can accurately label lumbar spine views/positions, detect hardware, and rotate the lateral views to straighten the image. METHODS We identified 1727 patients with 6988 lumbar spine radiographs. The view (anterior-posterior, right oblique, left oblique, left lateral, right lateral, left lumbosacral or right lumbosacral), hardware (present or not present), dynamic position (neutral, flexion, or extension), and correctional rotation of each radiograph were manually documented by a board-certified radiologist. Various output metrics were calculated, including area under the curve (AUC) for the categorical output models (view, hardware, and dynamic position). For non-binary categories, an all-versus-other technique was utilized designating one category as true and all others as false, allowing for a binary evaluation (e.g., AP vs. non-AP or extension vs. non-extension). For correctional rotation, the degree of rotation required to straighten the lateral spine radiograph was documented. The mean absolute difference was calculated between the ground truth and model-predicted value reported in degrees of rotation. Ensembles of the rotation models were created. We evaluated the rotation models on 3 test dataset splits: only 0 rotation, only non-0 rotation, and all cases. RESULTS The AUC values for the categorical models ranged from 0.985 to 1.000. For the only 0 rotation data, the ensemble combining the absolute minimum value between the 20- and 60-degree models performed best (mean absolute difference of 0.610). For the non-0 rotation data, the ensemble merging the absolute maximum value between the 40- and 160-degree models performed best (mean absolute difference of 4.801). For the all cases split, the ensemble combining the minimum value of the 20- and 40-degree models performed best (mean absolute difference of 3.083). CONCLUSION Machine learning techniques can be successfully implemented to optimize lumbar spine x-ray hanging protocols by accounting for views, hardware, dynamic position, and rotation correction.
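Two of the evaluation ideas above, the all-versus-other AUC and the absolute-minimum ensembling of rotation models, are easy to illustrate. The Python sketch below uses invented data; only the seven view names come from the abstract, and the two hypothetical rotation models merely stand in for the study's 20- and 60-degree models.

```python
# Hedged sketch (not the study's code) of all-versus-other AUC for a multi-class
# view classifier, plus min-magnitude ensembling of two rotation regressors.
import numpy as np
from sklearn.metrics import roc_auc_score

views = ["AP", "right_oblique", "left_oblique", "left_lateral",
         "right_lateral", "left_lumbosacral", "right_lumbosacral"]

rng = np.random.default_rng(0)
y_true = np.array([0, 1, 2, 3, 4, 5, 6, 0])            # ground-truth view indices
y_prob = rng.dirichlet(np.ones(len(views)), size=8)    # stand-in softmax outputs

# All-versus-other: treat one view as positive and every other view as negative.
for k, name in enumerate(views):
    auc = roc_auc_score((y_true == k).astype(int), y_prob[:, k])
    print(f"{name}: AUC = {auc:.3f}")

# Rotation ensemble: keep whichever model's prediction has the smaller magnitude,
# mirroring the "absolute minimum" combination used for the 0-rotation split.
pred_20 = np.array([1.2, -3.5, 0.4])   # degrees, hypothetical 20-degree model
pred_60 = np.array([0.8, -6.0, 2.1])   # degrees, hypothetical 60-degree model
ensemble = np.where(np.abs(pred_20) < np.abs(pred_60), pred_20, pred_60)
truth = np.array([0.0, -4.0, 1.0])
print("mean absolute difference:", np.abs(ensemble - truth).mean())
```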
Affiliation(s)
- Gene Kitamura
- UPMC Department of Radiology, University of Pittsburgh Medical Center (UPMC) and University of Pittsburgh, 200 Lothrop St., Pittsburgh, PA 15213, USA
54. Ananda A, Ngan KH, Karabağ C, Ter-Sarkisov A, Alonso E, Reyes-Aldasoro CC. Classification and visualisation of normal and abnormal radiographs: a comparison between eleven convolutional neural network architectures. Sensors (Basel) 2021;21:5381. PMID: 34450821; PMCID: PMC8400172; DOI: 10.3390/s21165381.
Abstract
This paper investigates the classification of radiographic images with eleven convolutional neural network (CNN) architectures (GoogleNet, VGG-19, AlexNet, SqueezeNet, ResNet-18, Inception-v3, ResNet-50, VGG-16, ResNet-101, DenseNet-201 and Inception-ResNet-v2). The CNNs were used to classify a series of wrist radiographs from the Stanford Musculoskeletal Radiographs (MURA) dataset into two classes: normal and abnormal. The architectures were compared across different hyperparameters against accuracy and Cohen's kappa coefficient. The best two results were then explored with data augmentation. Without augmentation, the best results were provided by Inception-ResNet-v2 (mean accuracy = 0.723, mean kappa = 0.506); with augmentation, these were significantly improved (mean accuracy = 0.857, mean kappa = 0.703). Finally, Class Activation Mapping was applied to interpret activation of the network against the location of an anomaly in the radiographs.
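The two comparison metrics are one-liners with scikit-learn; a minimal sketch on invented labels (0 = normal, 1 = abnormal), not MURA data:

```python
# Accuracy measures raw agreement; Cohen's kappa corrects it for chance agreement,
# which matters when the normal/abnormal classes are imbalanced.
from sklearn.metrics import accuracy_score, cohen_kappa_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]   # reference reads
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]   # hypothetical CNN outputs

print("accuracy:", accuracy_score(y_true, y_pred))
print("kappa:", cohen_kappa_score(y_true, y_pred))
```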
Affiliation(s)
- Ananda Ananda
- giCentre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
- Kwun Ho Ngan
- giCentre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
- Cefa Karabağ
- giCentre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
- Aram Ter-Sarkisov
- CitAI Research Centre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
- Eduardo Alonso
- CitAI Research Centre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
- Constantino Carlos Reyes-Aldasoro
- giCentre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
55. Ichikawa S, Hamada M, Sugimori H. A deep-learning method using computed tomography scout images for estimating patient body weight. Sci Rep 2021;11:15627. PMID: 34341462; PMCID: PMC8329066; DOI: 10.1038/s41598-021-95170-9.
Abstract
Body weight is an indispensable parameter for determination of contrast medium dose, appropriate drug dosing, or management of radiation dose. However, we cannot always determine the accurate patient body weight at the time of computed tomography (CT) scanning, especially in emergency care. Time-efficient methods to estimate body weight with high accuracy before diagnostic CT scans currently do not exist. In this study, on the basis of 1831 chest and 519 abdominal CT scout images with the corresponding body weights, we developed and evaluated deep-learning models capable of automatically predicting body weight from CT scout images. In the model performance assessment, there were strong correlations between the actual and predicted body weights in both chest (ρ = 0.947, p < 0.001) and abdominal datasets (ρ = 0.869, p < 0.001). The mean absolute errors were 2.75 kg and 4.77 kg for the chest and abdominal datasets, respectively. Our proposed method with deep learning is useful for estimating body weights from CT scout images with clinically acceptable accuracy and potentially could be useful for determining the contrast medium dose and CT dose management in adult patients with unknown body weight.
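The two agreement statistics reported above are straightforward to reproduce; a minimal sketch with invented weights in kilograms:

```python
# Spearman's rank correlation and mean absolute error between actual and
# model-predicted body weights; the five pairs below are illustrative only.
import numpy as np
from scipy.stats import spearmanr

actual = np.array([62.0, 81.5, 55.2, 70.3, 90.1])     # kg
predicted = np.array([60.8, 84.0, 57.0, 69.5, 86.9])  # kg

rho, p_value = spearmanr(actual, predicted)
mae = np.abs(actual - predicted).mean()
print(f"rho = {rho:.3f} (p = {p_value:.3g}), MAE = {mae:.2f} kg")
```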
Affiliation(s)
- Shota Ichikawa
- Graduate School of Health Sciences, Hokkaido University, Kita-12, Nishi-5, Kita-ku, Sapporo, 060-0812, Japan
- Department of Radiological Technology, Kurashiki Central Hospital, 1-1-1 Miwa, Kurashiki, Okayama, 710-8602, Japan
- Misaki Hamada
- Department of Radiological Technology, Kurashiki Central Hospital, 1-1-1 Miwa, Kurashiki, Okayama, 710-8602, Japan
- Hiroyuki Sugimori
- Faculty of Health Sciences, Hokkaido University, Kita-12, Nishi-5, Kita-ku, Sapporo, 060-0812, Japan
56.
Abstract
Radiology is characterized by constant change and defines itself through technological progress. Artificial intelligence (AI) will change every aspect of everyday practice in paediatric radiology. Image acquisition, detection and segmentation of findings, and the recognition of tissue properties and their combination with big data will be the main fields of application of AI in radiology. Expectations attached to the use of AI include greater effectiveness, faster examinations and reporting, and cost savings. Improved patient management, a reduced workload for radiographers and paediatric radiologists, and shorter examination and reporting times mark the milestones of AI development in radiology. From appointment communication and scanner operation to treatment recommendation and monitoring, elements of AI will change daily routine. Paediatric radiologists must therefore have a fundamental knowledge of AI and collaborate with data scientists in establishing and applying AI elements.
57. Kim MW, Jung J, Park SJ, Park YS, Yi JH, Yang WS, Kim JH, Cho BJ, Ha SO. Application of convolutional neural networks for distal radio-ulnar fracture detection on plain radiographs in the emergency room. Clin Exp Emerg Med 2021;8:120-127. PMID: 34237817; PMCID: PMC8273672; DOI: 10.15441/ceem.20.091.
Abstract
OBJECTIVE Recent studies have suggested that deep-learning models can satisfactorily assist in fracture diagnosis. We aimed to evaluate the performance of two such models in wrist fracture detection. METHODS We collected image data of patients who presented to the emergency department with wrist trauma. A dataset extracted from January 2018 to May 2020 was split into training (90%) and test (10%) datasets, and two types of convolutional neural networks (DenseNet-161 and ResNet-152) were trained to detect wrist fractures. Gradient-weighted class activation mapping was used to highlight the regions of radiograph scans that contributed to the decision of the model. Performance of the convolutional neural network models was evaluated using the area under the receiver operating characteristic curve. RESULTS For model training, we used 4,551 radiographs from 798 patients with fractures and 4,443 radiographs from 1,481 patients without fractures. The remaining 10% (300 radiographs from 100 patients with fractures and 690 radiographs from 230 patients without fractures) was used as a test dataset. In the test dataset, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 90.3%, 90.3%, 80.3%, 95.6%, and 90.3% for DenseNet-161, and 88.6%, 88.4%, 76.9%, 94.7%, and 88.5% for ResNet-152. The areas under the receiver operating characteristic curves of DenseNet-161 and ResNet-152 for wrist fracture detection were 0.962 and 0.947, respectively. CONCLUSION We demonstrated that DenseNet-161 and ResNet-152 models could help detect wrist fractures in the emergency room with satisfactory performance.
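All five reported metrics derive from a 2x2 confusion matrix. In the sketch below the counts are our back-calculation from the test-set sizes and the DenseNet-161 percentages quoted above (300 fracture and 690 non-fracture radiographs); the exact tallies are not given in the abstract.

```python
# Standard diagnostic metrics from confusion-matrix counts.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),               # recall on fracture images
        "specificity": tn / (tn + fp),               # recall on non-fracture images
        "ppv": tp / (tp + fp),                       # positive predictive value
        "npv": tn / (tn + fn),                       # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Counts consistent with DenseNet-161's reported 90.3/90.3/80.3/95.6/90.3 (%).
print(diagnostic_metrics(tp=271, fp=67, tn=623, fn=29))
```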
Affiliation(s)
- Min Woong Kim
- Department of Emergency Medicine, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
- Jaewon Jung
- Medical Artificial Intelligence Center, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
- Se Jin Park
- Medical Artificial Intelligence Center, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
- Young Sun Park
- Department of Emergency Medicine, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
- Jeong Hyeon Yi
- Department of Emergency Medicine, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
- Won Seok Yang
- Department of Emergency Medicine, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
- Jin Hyuck Kim
- Department of Neurology, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
- Bum-Joo Cho
- Medical Artificial Intelligence Center, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
- Department of Ophthalmology, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
- Sang Ook Ha
- Department of Emergency Medicine, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
58. Cha JY, Yoon HI, Yeo IS, Huh KH, Han JS. Panoptic segmentation on panoramic radiographs: deep learning-based segmentation of various structures including maxillary sinus and mandibular canal. J Clin Med 2021;10:2577. PMID: 34208024; PMCID: PMC8230590; DOI: 10.3390/jcm10122577.
Abstract
Panoramic radiographs, also known as orthopantomograms, are routinely used in most dental clinics. However, it has been difficult to develop an automated method that detects the various structures present in these radiographs. One of the main reasons for this is that structures of various sizes and shapes are collectively shown in the image. In order to solve this problem, the recently proposed concept of panoptic segmentation, which integrates instance segmentation and semantic segmentation, was applied to panoramic radiographs. A state-of-the-art deep neural network model designed for panoptic segmentation was trained to segment the maxillary sinus, maxilla, mandible, mandibular canal, normal teeth, treated teeth, and dental implants on panoramic radiographs. Unlike conventional semantic segmentation, each object in the tooth and implant classes was individually classified. For evaluation, the panoptic quality, segmentation quality, recognition quality, intersection over union (IoU), and instance-level IoU were calculated. The evaluation and visualization results showed that the deep learning-based artificial intelligence model can perform panoptic segmentation of images, including those of the maxillary sinus and mandibular canal, on panoramic radiographs. This automatic machine learning method might assist dental practitioners to set up treatment plans and diagnose oral and maxillofacial diseases.
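For reference, panoptic quality (PQ), in its standard definition, is the product of the segmentation quality (SQ) and recognition quality (RQ) terms also reported above, with matched prediction-ground-truth pairs counted as true positives:

```latex
\mathrm{PQ}
= \frac{\sum_{(p,g)\in \mathrm{TP}} \mathrm{IoU}(p,g)}
       {|\mathrm{TP}| + \tfrac{1}{2}|\mathrm{FP}| + \tfrac{1}{2}|\mathrm{FN}|}
= \underbrace{\frac{\sum_{(p,g)\in \mathrm{TP}} \mathrm{IoU}(p,g)}{|\mathrm{TP}|}}_{\mathrm{SQ}}
  \times
  \underbrace{\frac{|\mathrm{TP}|}{|\mathrm{TP}| + \tfrac{1}{2}|\mathrm{FP}| + \tfrac{1}{2}|\mathrm{FN}|}}_{\mathrm{RQ}}
```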
Affiliation(s)
- Jun-Young Cha
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- Hyung-In Yoon
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- In-Sung Yeo
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- Jung-Suk Han
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
59. Ukai K, Rahman R, Yagi N, Hayashi K, Maruo A, Muratsu H, Kobashi S. Detecting pelvic fracture on 3D-CT using deep convolutional neural networks with multi-orientated slab images. Sci Rep 2021;11:11716. PMID: 34083655; PMCID: PMC8175387; DOI: 10.1038/s41598-021-91144-z.
Abstract
Pelvic fracture is one of the leading causes of death in the elderly, carrying a high risk of death within 1 year of fracture. This study proposes an automated method to detect pelvic fractures on 3-dimensional computed tomography (3D-CT). Deep convolutional neural networks (DCNNs) have been used for lesion detection on 2D and 3D medical images. However, training a DCNN directly using 3D images is complicated, computationally costly, and requires large amounts of training data. We propose a method that evaluates multiple 2D real-time object detection systems (YOLOv3 models) in parallel, in which each YOLOv3 model is trained using differently orientated 2D slab images reconstructed from 3D-CT. We assume that an appropriate reconstruction orientation exists that optimally characterizes image features of bone fractures on 3D-CT. The multiple YOLOv3 models detect 2D fracture candidates in different orientations simultaneously, and the 3D fracture region is then obtained by integrating the 2D fracture candidates. The proposed method was validated in 93 subjects with bone fractures. The area under the curve (AUC) was 0.824, with 0.805 recall and 0.907 precision; the AUC with a single orientation was 0.652. The method was then applied to 112 subjects without bone fractures to evaluate over-detection: it correctly reported no fracture in all but four of these subjects (96.4%).
Affiliation(s)
- Kazutoshi Ukai
- Research and Development Center, GLORY Ltd, Himeji, Japan
- Graduate School of Engineering, University of Hyogo, Himeji, Japan
- Rashedur Rahman
- Graduate School of Engineering, University of Hyogo, Himeji, Japan
- Naomi Yagi
- Graduate School of Engineering, University of Hyogo, Himeji, Japan
- Himeji Dokkyo University, Himeji, Japan
- Syoji Kobashi
- Graduate School of Engineering, University of Hyogo, Himeji, Japan
60. Liu X, Gao K, Liu B, Pan C, Liang K, Yan L, Ma J, He F, Zhang S, Pan S, Yu Y. Advances in deep learning-based medical image analysis. Health Data Science 2021;2021:8786793. PMID: 38487506; PMCID: PMC10880179; DOI: 10.34133/2021/8786793.
Abstract
Importance: With the booming growth of artificial intelligence (AI), especially the recent advancements of deep learning, utilizing advanced deep learning-based methods for medical image analysis has become an active research area both in the medical industry and academia. This paper reviewed the recent progress of deep learning research in medical image analysis and clinical applications. It also discussed the existing problems in the field and provided possible solutions and future directions. Highlights: This paper reviewed the advancement of convolutional neural network-based techniques in clinical applications covering four major human body systems: the nervous system, the cardiovascular system, the digestive system, and the skeletal system. Overall, according to the best available evidence, deep learning models performed well in medical image analysis, but algorithms derived from small-scale medical datasets still impede clinical applicability. Future directions could include federated learning, benchmark dataset collection, and utilizing domain subject knowledge as priors. Conclusion: Recent advanced deep learning technologies have achieved great success in medical image analysis with high accuracy, efficiency, stability, and scalability. Technological advancements that can alleviate the high demands on high-quality large-scale datasets could be one of the future developments in this area.
Affiliation(s)
- Bo Liu
- DeepWise AI Lab, Beijing, China
- Siyuan Pan
- Shanghai Jiaotong University, Shanghai, China
- Yizhou Yu
- DeepWise AI Lab, Beijing, China
- The University of Hong Kong, Hong Kong
61. Artificial intelligence research within reach: an object detection model to identify rickets on pediatric wrist radiographs. Pediatr Radiol 2021;51:782-791. PMID: 33399980; DOI: 10.1007/s00247-020-04895-8.
Abstract
BACKGROUND Artificial intelligence models have been successful in analyzing ordinary photographic images. One type of artificial intelligence model is object detection, where a labeled bounding box is drawn around an area of interest. Object detection can be applied to medical imaging tasks. OBJECTIVE To demonstrate object detection in identifying rickets and normal wrists on pediatric wrist radiographs using a small dataset, simple software and modest computer hardware. MATERIALS AND METHODS The institutional review board at Children's Healthcare of Atlanta approved this study. The radiology information system was searched for radiographic examinations of the wrist for the evaluation of rickets from 2007 to 2018 in children younger than 7 years of age. Inclusion criteria were an exam type of "Rickets Survey" or "Joint Survey 1 View" with reports containing the words "rickets" or "rachitic." Exclusion criteria were reports containing the words "renal," "kidney" or "transplant." Two pediatric radiologists reviewed the images and categorized them as either rickets or normal. Images were annotated by drawing a labeled bounding box around the distal radial and ulnar metaphases. The training dataset was created from images acquired from Jan. 1, 2007, to Dec. 31, 2017. This included 104 wrists with rickets and 264 normal wrists. This training dataset was used to create the object detection model. The testing dataset consisted of images acquired during the 2018 calendar year. This included 20 wrists with rickets and 37 normal wrists. Model sensitivity, specificity and accuracy were measured. RESULTS Of the 20 wrists with rickets in the testing set, 16 were correctly identified as rickets, 2 were incorrectly identified as normal and 2 had no prediction. Of the 37 normal wrists, 33 were correctly identified as normal, 2 were incorrectly identified as rickets and 2 had no prediction. This yielded a sensitivity and specificity of 80% and 95% for wrists with rickets and 89% and 90% for normal wrists. Overall model accuracy was 86%. CONCLUSION Object detection can identify rickets on pediatric wrist radiographs. Object detection models can be developed with a small dataset, simple software tools and modest computing power.
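The quoted percentages follow directly from the stated counts. A small sketch of that arithmetic, where treating the model's "no prediction" outputs as negative calls for the class in question is our assumption:

```python
# Re-deriving the reported metrics from the test-set counts in the abstract.
rickets_total, rickets_correct = 20, 16   # 2 called normal, 2 no prediction
normal_total, normal_correct = 37, 33     # 2 called rickets, 2 no prediction

sens_rickets = rickets_correct / rickets_total     # 16/20 = 0.80
spec_rickets = (normal_total - 2) / normal_total   # 35/37 ~ 0.95 (2 false rickets calls)
sens_normal = normal_correct / normal_total        # 33/37 ~ 0.89
spec_normal = (rickets_total - 2) / rickets_total  # 18/20 = 0.90
accuracy = (rickets_correct + normal_correct) / (rickets_total + normal_total)  # 49/57 ~ 0.86

print(sens_rickets, spec_rickets, sens_normal, spec_normal, accuracy)
```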
62. Tobler P, Cyriac J, Kovacs BK, Hofmann V, Sexauer R, Paciolla F, Stieltjes B, Amsler F, Hirschmann A. AI-based detection and classification of distal radius fractures using low-effort data labeling: evaluation of applicability and effect of training set size. Eur Radiol 2021;31:6816-6824. PMID: 33742228; PMCID: PMC8379111; DOI: 10.1007/s00330-021-07811-2.
Abstract
Objectives To evaluate the performance of a deep convolutional neural network (DCNN) in detecting and classifying distal radius fractures, metal, and cast on radiographs using labels based on radiology reports. The secondary aim was to evaluate the effect of the training set size on the algorithm’s performance. Methods A total of 15,775 frontal and lateral radiographs, corresponding radiology reports, and a ResNet18 DCNN were used. Fracture detection and classification models were developed per view and merged. Incrementally sized subsets served to evaluate effects of the training set size. Two musculoskeletal radiologists set the standard of reference on radiographs (test set A). A subset (B) was rated by three radiology residents. For a per-study-based comparison with the radiology residents, the results of the best models were merged. Statistics used were ROC and AUC, Youden’s J statistic (J), and Spearman’s correlation coefficient (ρ). Results The models’ AUC/J on (A) for metal and cast were 0.99/0.98 and 1.0/1.0. The models’ and residents’ AUC/J on (B) were similar on fracture (0.98/0.91; 0.98/0.92) and multiple fragments (0.85/0.58; 0.91/0.70). Training set size and AUC correlated on metal (ρ = 0.740), cast (ρ = 0.722), fracture (frontal ρ = 0.947, lateral ρ = 0.946), multiple fragments (frontal ρ = 0.856), and fragment displacement (frontal ρ = 0.595). Conclusions The models trained on a DCNN with report-based labels to detect distal radius fractures on radiographs are suitable to aid as a secondary reading tool; models for fracture classification are not ready for clinical use. Bigger training sets lead to better models in all categories except joint affection. Key Points • Detection of metal and cast on radiographs is excellent using AI and labels extracted from radiology reports. • Automatic detection of distal radius fractures on radiographs is feasible and the performance approximates radiology residents. • Automatic classification of the type of distal radius fracture varies in accuracy and is inferior for joint involvement and fragment displacement.
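Youden's J, reported above alongside each AUC, is the maximum of sensitivity + specificity - 1 over the ROC curve; a minimal sketch on toy scores:

```python
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                    # toy labels
y_score = np.array([0.1, 0.4, 0.8, 0.35, 0.9, 0.2, 0.7, 0.6])  # toy model scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                      # equivalent to sensitivity + specificity - 1
best = np.argmax(j)
print(f"max J = {j[best]:.2f} at threshold {thresholds[best]:.2f}")
```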
Affiliation(s)
- Patrick Tobler
- University Hospital Basel, University of Basel, Clinic of Radiology and Nuclear Medicine, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- Joshy Cyriac
- University Hospital Basel, University of Basel, Clinic of Radiology and Nuclear Medicine, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- Balazs K Kovacs
- University Hospital Basel, University of Basel, Clinic of Radiology and Nuclear Medicine, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- Verena Hofmann
- University Hospital Basel, University of Basel, Clinic of Radiology and Nuclear Medicine, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- Raphael Sexauer
- University Hospital Basel, University of Basel, Clinic of Radiology and Nuclear Medicine, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- Fabiano Paciolla
- University Hospital Basel, University of Basel, Clinic of Radiology and Nuclear Medicine, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- Bram Stieltjes
- University Hospital Basel, University of Basel, Clinic of Radiology and Nuclear Medicine, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- Felix Amsler
- Amsler Consulting Basel, Gundeldingerrain 111, 4059, Basel, Switzerland
- Anna Hirschmann
- University Hospital Basel, University of Basel, Clinic of Radiology and Nuclear Medicine, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
63. Raisuddin AM, Vaattovaara E, Nevalainen M, Nikki M, Järvenpää E, Makkonen K, Pinola P, Palsio T, Niemensivu A, Tervonen O, Tiulpin A. Critical evaluation of deep neural networks for wrist fracture detection. Sci Rep 2021;11:6006. PMID: 33727668; PMCID: PMC7971048; DOI: 10.1038/s41598-021-85570-2.
Abstract
Wrist fracture is the most common type of fracture, with a high incidence rate. Conventional radiography (i.e., X-ray imaging) is used routinely for wrist fracture detection, but occasionally fracture delineation poses issues and additional confirmation by computed tomography (CT) is needed for diagnosis. Recent advances in the field of Deep Learning (DL), a subfield of Artificial Intelligence (AI), have shown that wrist fracture detection can be automated using convolutional neural networks. However, previous studies did not pay close attention to the difficult cases that can only be confirmed via CT imaging. In this study, we developed and analyzed a state-of-the-art DL-based pipeline for wrist (distal radius) fracture detection, DeepWrist, and evaluated it against one general population test set and one challenging test set comprising only cases requiring confirmation by CT. Our results reveal that a typical state-of-the-art approach such as DeepWrist, while having near-perfect performance on the general independent test set, has a substantially lower performance on the challenging test set: average precision of 0.99 (0.99-0.99) versus 0.64 (0.46-0.83), respectively. Similarly, the area under the ROC curve was 0.99 (0.98-0.99) versus 0.84 (0.72-0.93), respectively. Our findings highlight the importance of a meticulous analysis of DL-based models before clinical use, and underscore the need for more challenging settings for testing medical AI systems.
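Interval estimates like "0.64 (0.46-0.83)" are typically obtained by bootstrapping the test set. The sketch below shows that generic resampling scheme on synthetic data; it is an assumption about the procedure, not the authors' code:

```python
# Bootstrap 95% CI for average precision on a held-out test set.
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)                               # synthetic labels
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 200), 0, 1)

aps = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
    if y_true[idx].sum() in (0, len(idx)):           # need both classes present
        continue
    aps.append(average_precision_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(aps, [2.5, 97.5])
print(f"AP = {np.mean(aps):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```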
Affiliation(s)
- Elias Vaattovaara
- University of Oulu, Oulu, Finland
- Oulu University Hospital, Oulu, Finland
- Mika Nevalainen
- University of Oulu, Oulu, Finland
- Oulu University Hospital, Oulu, Finland
- Pekka Pinola
- University of Oulu, Oulu, Finland
- Oulu University Hospital, Oulu, Finland
- Tuula Palsio
- University of Oulu, Oulu, Finland
- City of Oulu, Oulu, Finland
- Osmo Tervonen
- University of Oulu, Oulu, Finland
- Oulu University Hospital, Oulu, Finland
- Aleksei Tiulpin
- University of Oulu, Oulu, Finland
- Oulu University Hospital, Oulu, Finland
- Ailean Technologies Oy, Oulu, Finland
64. Larentzakis A, Lygeros N. Artificial intelligence (AI) in medicine as a strategic valuable tool. Pan Afr Med J 2021;38:184. PMID: 33995790; PMCID: PMC8106796; DOI: 10.11604/pamj.2021.38.184.28197.
Abstract
Humans' creativity has led to machines that outperform human capabilities in terms of workload, effectiveness, precision, endurance, strength, and repetitiveness. It has always been a vision and a way to transcend existence and to give more sense to life, which is precious. The common denominator of all these creations was that they were meant to replace, enhance, or go beyond the mechanical capabilities of the human body. The story took another turn in 1950, when Alan Turing introduced the concept of a machine that could think. Artificial intelligence, introduced as a term in 1956, describes the use of computers to imitate intelligence and critical thinking comparable to humans. However, the revolution began in 1943, when artificial neural networks were proposed as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. Artificial intelligence is becoming a research focus and a tool of strategic value. The same observations apply in the field of healthcare, too. In this manuscript, we try to address key questions regarding artificial intelligence in medicine: what artificial intelligence is and how it works, what its value is in terms of application in medicine, and what the prospects are.
Affiliation(s)
- Andreas Larentzakis
- First Department of Propaedeutic Surgery, Athens Medical School, National and Kapodistrian University of Athens, Hippocration General Athens Hospital, Athens, Greece
- Nik Lygeros
- Laboratoire de Génie des Procédés Catalytiques, Centre National de la Recherche Scientifique/École Supérieure de Chimie Physique Électronique, Lyon, France
| |
65. Suojärvi N, Lindfors N, Höglund T, Sippo R, Waris E. Radiographic measurements of the normal distal radius: reliability of computer-aided CT versus physicians' radiograph interpretation. J Hand Surg Eur Vol 2021;46:176-183. PMID: 33148107; DOI: 10.1177/1753193420968399.
Abstract
We examined the reliability of a computer-aided cone-beam CT analysis of radiographic parameters of 50 normal distal radii and compared it with interobserver agreement of measurements made by three groups of physicians on two-dimensional plain radiographs. The intra-rater reliability of the computer-aided analysis was evaluated on 33 wrists imaged twice by cone-beam CT. The longitudinal axis, anterior tilt, radial inclination and ulnar variance were measured. The reliability of computer-aided analysis was excellent (intraclass correlation coefficient (ICC) 0.94-0.96) while the interobserver agreement of two-dimensional radiograph interpretation was good (ulnar variance, ICC 0.80-0.84) to poor (anterior tilt and radial inclination, ICC 0.20-0.42). We conclude that computer-aided cone-beam CT analysis was a reliable tool for radiographic parameter determination, whereas physicians demonstrated substantial variability especially in interpreting the angular parameters.
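The reliability figures above are intraclass correlation coefficients. The sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single measurement) from scratch on invented repeated measurements; the specific ICC form is one common choice for test-retest designs like the twice-imaged wrists, not necessarily the authors' exact variant:

```python
# ICC(2,1) via the classic two-way ANOVA decomposition (Shrout & Fleiss).
import numpy as np

ratings = np.array([      # rows = wrists, columns = two repeated measurements
    [11.2, 11.0],
    [9.8, 10.1],
    [12.5, 12.4],
    [8.9, 9.2],
    [10.4, 10.3],
])
n, k = ratings.shape
grand = ratings.mean()
ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
ss_total = ((ratings - grand) ** 2).sum()
ss_err = ss_total - ss_rows - ss_cols

msr = ss_rows / (n - 1)             # between-subject mean square
msc = ss_cols / (k - 1)             # between-measurement mean square
mse = ss_err / ((n - 1) * (k - 1))  # residual mean square

icc21 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print(f"ICC(2,1) = {icc21:.3f}")
```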
Affiliation(s)
- Nora Suojärvi
- Department of Hand Surgery, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
| | - Nina Lindfors
- Department of Hand Surgery, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Robert Sippo
- Department of Hand Surgery, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Eero Waris
- Department of Hand Surgery, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
66. Dreizin D, Goldmann F, LeBedis C, Boscak A, Dattwyler M, Bodanapally U, Li G, Anderson S, Maier A, Unberath M. An automated deep learning method for Tile AO/OTA pelvic fracture severity grading from trauma whole-body CT. J Digit Imaging 2021;34:53-65. PMID: 33479859; PMCID: PMC7886919; DOI: 10.1007/s10278-020-00399-x.
Abstract
Admission trauma whole-body CT is routinely employed as a first-line diagnostic tool for characterizing pelvic fracture severity. Tile AO/OTA grade, based on the presence or absence of rotational and translational instability, corresponds with need for interventions including massive transfusion and angioembolization. An automated method could be highly beneficial for point-of-care triage in this critical time-sensitive setting. A dataset of 373 trauma whole-body CTs collected from two busy level 1 trauma centers with consensus Tile AO/OTA grading by three trauma radiologists was used to train and test a triplanar parallel concatenated network incorporating orthogonal full-thickness multiplanar reformat (MPR) views as input with a ResNeXt-50 backbone. Input pelvic images were first derived using an automated registration and cropping technique. Performance of the network for classification of rotational and translational instability was compared with that of (1) an analogous triplanar architecture incorporating an LSTM recurrent neural network, (2) a previously described 3D autoencoder-based method, and (3) grading by a fourth independent blinded radiologist with trauma expertise. Confusion matrix results were derived, anchored to peak Matthews correlation coefficient (MCC). Associations with clinical outcomes were determined using Fisher's exact test. The triplanar parallel concatenated method had the highest accuracies for discriminating translational and rotational instability (85% and 74%, respectively), with specificity, recall, and F1 score of 93.4%, 56.5%, and 0.63 for translational instability and 71.7%, 75.7%, and 0.77 for rotational instability. Accuracy of this method was equivalent to the single radiologist read for rotational instability (74.0% versus 76.7%, p = 0.40), but significantly higher for translational instability (85.0% versus 75.1%, p = 0.0007). Mean inference time was < 0.1 s per test image. Translational instability determined with this method was associated with need for angioembolization and massive transfusion (p = 0.002-0.008). Saliency maps demonstrated that the network focused on the sacroiliac complex and pubic symphysis, in keeping with the AO/OTA grading paradigm. A multiview concatenated deep network leveraging 3D information from orthogonal thick-MPR images predicted rotationally and translationally unstable pelvic fractures with accuracy comparable to an independent reader with trauma radiology expertise. Model output demonstrated significant association with key clinical outcomes.
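The Matthews correlation coefficient used to anchor the confusion matrices balances all four cell counts in one score; a minimal sketch with illustrative counts:

```python
import math

def mcc(tp, fp, tn, fn):
    # MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)); 0 if undefined.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(f"MCC = {mcc(tp=42, fp=9, tn=127, fn=18):.3f}")
```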
Affiliation(s)
- David Dreizin
- Emergency and Trauma Imaging, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA
- Christina LeBedis
- Department of Radiology, Boston Medical Center, Boston University School of Medicine, Boston, MA, USA
- Alexis Boscak
- Emergency and Trauma Imaging, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA
- Matthew Dattwyler
- Emergency and Trauma Imaging, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA
- Uttam Bodanapally
- Emergency and Trauma Imaging, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA
- Guang Li
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Stephan Anderson
- Department of Radiology, Boston Medical Center, Boston University School of Medicine, Boston, MA, USA
- Andreas Maier
- Friedrich-Alexander University, Schloßplatz, Erlangen, Germany
- Mathias Unberath
- Department of Computer Science, Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, USA
67. Wu HZ, Yan LF, Liu XQ, Yu YZ, Geng ZJ, Wu WJ, Han CQ, Guo YQ, Gao BL. The Feature Ambiguity Mitigate Operator model helps improve bone fracture detection on X-ray radiograph. Sci Rep 2021;11:1589. PMID: 33452403; PMCID: PMC7810849; DOI: 10.1038/s41598-021-81236-1.
Abstract
This study was performed to propose a method, the Feature Ambiguity Mitigate Operator (FAMO) model, to mitigate feature ambiguity in bone fracture detection on radiographs of various body parts. A total of 9,040 radiographic studies were extracted, comprising 1,651 hand, 1,302 wrist, 406 elbow, 696 shoulder, 1,580 pelvic, 948 knee, 1,180 ankle, and 1,277 foot images. Instance segmentation was annotated by radiologists. ResNeXt-101+FPN was employed as the baseline network structure, with the FAMO model used for preprocessing. The proposed FAMO model and other ablative models were tested on a test set of 20% of the total radiographs with a balanced body-part distribution. At the per-fracture level, an AP (average precision) analysis was performed; at the per-image and per-case levels, sensitivity, specificity, and AUC (area under the receiver operating characteristic curve) were analyzed. At the per-fracture level, the baseline AP in the controlled experiment was 76.8% (95% CI: 76.1%, 77.4%), which the main experiment using FAMO as a preprocessor improved to 77.4% (95% CI: 76.6%, 78.2%). At the per-image level, the sensitivity, specificity, and AUC were 61.9% (95% CI: 58.7%, 65.0%), 91.5% (95% CI: 89.5%, 93.3%), and 74.9% (95% CI: 74.1%, 75.7%) for the controlled experiment, versus 64.5% (95% CI: 61.3%, 67.5%), 92.9% (95% CI: 91.0%, 94.5%), and 77.5% (95% CI: 76.5%, 78.5%) for the experiment with FAMO. At the per-case level, the sensitivity, specificity, and AUC were 74.9% (95% CI: 70.6%, 78.7%), 91.7% (95% CI: 88.8%, 93.9%), and 85.7% (95% CI: 84.8%, 86.5%) for the controlled experiment, versus 77.5% (95% CI: 73.3%, 81.1%), 93.4% (95% CI: 90.7%, 95.4%), and 86.5% (95% CI: 85.6%, 87.4%) for the experiment with FAMO. In conclusion, in bone fracture detection, FAMO is an effective preprocessor that enhances model performance by mitigating feature ambiguity in the network.
Affiliation(s)
- Hui-Zhao Wu
- Department of Radiology, Third Hospital of Hebei Medical University, Hebei Province, Shijiazhuang, 050000, People's Republic of China
- Li-Feng Yan
- DeepWise AI Lab, Beijing, People's Republic of China
- Xiao-Qing Liu
- DeepWise AI Lab, Beijing, People's Republic of China
- Yi-Zhou Yu
- DeepWise AI Lab, Beijing, People's Republic of China
- Zuo-Jun Geng
- Department of Radiology, Second Hospital of Hebei Medical University, Hebei Province, Shijiazhuang, 050000, People's Republic of China
- Wen-Juan Wu
- Department of Radiology, Third Hospital of Hebei Medical University, Hebei Province, Shijiazhuang, 050000, People's Republic of China
- Chun-Qing Han
- Department of Radiology, Third Hospital of Hebei Medical University, Hebei Province, Shijiazhuang, 050000, People's Republic of China
- Yong-Qin Guo
- Department of Radiology, Third Hospital of Hebei Medical University, Hebei Province, Shijiazhuang, 050000, People's Republic of China
- Bu-Lang Gao
- Department of Radiology, Third Hospital of Hebei Medical University, Hebei Province, Shijiazhuang, 050000, People's Republic of China
68. Lorencin I, Baressi Šegota S, Anđelić N, Blagojević A, Šušteršić T, Protić A, Arsenijević M, Ćabov T, Filipović N, Car Z. Automatic evaluation of the lung condition of COVID-19 patients using X-ray images and convolutional neural networks. J Pers Med 2021;11:28. PMID: 33406788; PMCID: PMC7824232; DOI: 10.3390/jpm11010028.
Abstract
COVID-19 represents one of the greatest challenges in modern history. Its impact is most noticeable in the health care system, mostly due to the accelerated and increased influx of patients with a more severe clinical picture, which increases the pressure on health systems. For this reason, the aim is to automate the process of diagnosis and treatment. The research presented in this article examined the possibility of classifying the clinical picture of a patient using X-ray images and convolutional neural networks. The research was conducted on a dataset of 185 images spanning four classes. Because of the small number of images, data augmentation was performed. To identify the CNN architecture with the highest classification performance, multiple CNNs were designed and compared. Results show that the best classification performance was achieved with ResNet152, which reached mean macro- and micro-averaged AUC values up to 0.94, suggesting that CNNs can be applied to classify the clinical picture of COVID-19 patients from lung X-ray images. Freezing layers during the training procedure yielded higher mean macro- and micro-averaged AUC values: with ResNet152, values up to 0.96 were achieved when all layers except the last 12 were frozen during training.
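The layer-freezing experiment described above corresponds to a standard transfer-learning recipe. A hedged PyTorch sketch follows; the stage-level granularity, the four-class head, and the choice to unfreeze only the final residual stage are our assumptions, not the paper's exact configuration:

```python
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained ResNet152.
model = models.resnet152(weights="IMAGENET1K_V1")

# Freeze every parameter, then unfreeze the last residual stage so only the
# top of the network adapts to the radiograph classes.
for param in model.parameters():
    param.requires_grad = False
for param in model.layer4.parameters():
    param.requires_grad = True

# Replace the classifier head; a freshly constructed layer trains by default.
model.fc = nn.Linear(model.fc.in_features, 4)  # four clinical-picture classes
```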
Affiliation(s)
- Ivan Lorencin
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Sandi Baressi Šegota
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Nikola Anđelić
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Anđela Blagojević
- Faculty of Engineering, University of Kragujevac, Sestre Janjić, 34000 Kragujevac, Serbia
- Bioengineering Research and Development Centre (BioIRC), Prvoslava Stojanovića 6, 34000 Kragujevac, Serbia
- Tijana Šušteršić
- Faculty of Engineering, University of Kragujevac, Sestre Janjić, 34000 Kragujevac, Serbia
- Bioengineering Research and Development Centre (BioIRC), Prvoslava Stojanovića 6, 34000 Kragujevac, Serbia
- Alen Protić
- Clinical Hospital Centre Rijeka, Krešimirova ul. 42, 51000 Rijeka, Croatia
- Faculty of Medicine, University of Rijeka, Ul. Braće Branchetta 20/1, 51000 Rijeka, Croatia
- Miloš Arsenijević
- Clinical Centre Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
- Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovića 69, 34000 Kragujevac, Serbia
- Tomislav Ćabov
- Faculty of Dental Medicine, University of Rijeka, Krešimirova ul. 40, 51000 Rijeka, Croatia
- Nenad Filipović
- Faculty of Engineering, University of Kragujevac, Sestre Janjić, 34000 Kragujevac, Serbia
- Bioengineering Research and Development Centre (BioIRC), Prvoslava Stojanovića 6, 34000 Kragujevac, Serbia
- Zlatan Car
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
69. Davendralingam N, Sebire NJ, Arthurs OJ, Shelmerdine SC. Artificial intelligence in paediatric radiology: future opportunities. Br J Radiol 2021;94:20200975. PMID: 32941736; PMCID: PMC7774693; DOI: 10.1259/bjr.20200975.
Abstract
Artificial intelligence (AI) has received widespread and growing interest in healthcare as a method to save time and cost and to improve efficiencies. The high performance statistics and diagnostic accuracies reported by AI algorithms (with respect to predefined reference standards), particularly from image pattern recognition studies, have resulted in extensive applications proposed for clinical radiology, especially for enhanced image interpretation. Whilst certain subspecialty areas in radiology, such as those relating to cancer screening, have received widespread attention in the media and scientific community, children's imaging has hitherto been neglected. In this article, we discuss a variety of possible 'use cases' in paediatric radiology from a patient pathway perspective where AI has either been implemented or shown early-stage feasibility, while also taking inspiration from the adult literature to propose potential areas for future development. We aim to demonstrate how a 'future, enhanced paediatric radiology service' could operate and to stimulate further discussion with avenues for research.
Affiliation(s)
- Natasha Davendralingam
- Department of Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, London, UK
70. Kijowski R, Liu F, Caliva F, Pedoia V. Deep learning for lesion detection, progression, and prediction of musculoskeletal disease. J Magn Reson Imaging 2020;52:1607-1619. PMID: 31763739; PMCID: PMC7251925; DOI: 10.1002/jmri.27001.
Abstract
Deep learning is one of the most exciting new areas in medical imaging. This review article provides a summary of the current clinical applications of deep learning for lesion detection, progression, and prediction of musculoskeletal disease on radiographs, computed tomography (CT), magnetic resonance imaging (MRI), and nuclear medicine. Deep-learning methods have shown success for estimating pediatric bone age, detecting fractures, and assessing the severity of osteoarthritis on radiographs. In particular, the high diagnostic performance of deep-learning approaches for estimating pediatric bone age and detecting fractures suggests that the new technology may soon become available for use in clinical practice. Recent studies have also documented the feasibility of using deep-learning methods for identifying a wide variety of pathologic abnormalities on CT and MRI, including internal derangement, metastatic disease, infection, fractures, and joint degeneration. However, the detection of musculoskeletal disease on CT and especially MRI is challenging, as it often requires analyzing complex abnormalities on multiple slices of image datasets with different tissue contrasts. Thus, additional technical development is needed to create deep-learning methods for reliable and repeatable interpretation of musculoskeletal CT and MRI examinations. Furthermore, the diagnostic performance of all deep-learning methods for detecting and characterizing musculoskeletal disease must be evaluated in prospective studies using large image datasets acquired at different institutions with different imaging parameters and different imaging hardware before they can be implemented in clinical practice. Level of Evidence: 5. Technical Efficacy: Stage 2.
Affiliation(s)
- Richard Kijowski
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Fang Liu
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Francesco Caliva
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Valentina Pedoia
- Department of Radiology, University of California at San Francisco School of Medicine, San Francisco, California, USA
71
Klontzas ME, Papadakis GZ, Marias K, Karantanas AH. Musculoskeletal trauma imaging in the era of novel molecular methods and artificial intelligence. Injury 2020; 51:2748-2756. [PMID: 32972725 DOI: 10.1016/j.injury.2020.09.019] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/29/2020] [Revised: 08/14/2020] [Accepted: 09/15/2020] [Indexed: 02/08/2023]
Abstract
Over the past decade, rapid advancements in molecular imaging (MI) and artificial intelligence (AI) have revolutionized traditional musculoskeletal radiology. Molecular imaging refers to the ability of various methods to characterize and quantify biological processes in vivo, at the molecular level. The extracted information provides the tools to understand the pathophysiology of diseases and thus to detect them early, to accurately evaluate their extent, and to apply and assess targeted treatments. At present, molecular imaging mainly involves CT, MRI, radionuclide, US, and optical imaging and has been reported in many clinical and preclinical studies. Although MI techniques originally targeted central nervous system disorders, their value in musculoskeletal disorders was subsequently studied in depth as well. Meaningful exploitation of the large volume of imaging data generated by molecular and conventional imaging techniques requires state-of-the-art computational methods that enable rapid handling of large volumes of information. AI allows end-to-end training of computer algorithms to perform tasks encountered in everyday clinical practice, including diagnosis, disease severity classification, and image optimization. Notably, the development of deep learning algorithms has offered novel methods that enable intelligent processing of large imaging datasets in an attempt to automate decision-making in a wide variety of settings related to musculoskeletal trauma. Current applications of AI include the diagnosis of bone and soft tissue injuries, monitoring of the healing process, and prediction of injuries in the professional sports setting. This review presents the current applications of novel MI techniques and methods and the emerging role of AI in the diagnosis and evaluation of musculoskeletal trauma.
Affiliation(s)
- Michail E Klontzas
- Department of Medical Imaging, Heraklion University Hospital, Crete, 70110, Greece; Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), N. Plastira 100, Vassilika Vouton 70013, Heraklion, Crete, Greece.
- Georgios Z Papadakis
- Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), N. Plastira 100, Vassilika Vouton 70013, Heraklion, Crete, Greece; Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013, Heraklion, Crete, Greece; Department of Radiology, School of Medicine, University of Crete, 70110 Greece.
- Kostas Marias
- Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013, Heraklion, Crete, Greece; Department of Electrical and Computer Engineering, Hellenic Mediterranean University, 71410, Heraklion, Crete, Greece.
- Apostolos H Karantanas
- Department of Medical Imaging, Heraklion University Hospital, Crete, 70110, Greece; Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), N. Plastira 100, Vassilika Vouton 70013, Heraklion, Crete, Greece; Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013, Heraklion, Crete, Greece; Department of Radiology, School of Medicine, University of Crete, 70110 Greece.
72
Weikert T, Noordtzij LA, Bremerich J, Stieltjes B, Parmar V, Cyriac J, Sommer G, Sauter AW. Assessment of a Deep Learning Algorithm for the Detection of Rib Fractures on Whole-Body Trauma Computed Tomography. Korean J Radiol 2020; 21:891-899. [PMID: 32524789 PMCID: PMC7289702 DOI: 10.3348/kjr.2019.0653] [Citation(s) in RCA: 51] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2019] [Revised: 02/12/2020] [Accepted: 02/19/2020] [Indexed: 12/03/2022] Open
Abstract
Objective To assess the diagnostic performance of a deep learning-based algorithm for automated detection of acute and chronic rib fractures on whole-body trauma CT. Materials and Methods We retrospectively identified all whole-body trauma CT scans referred from the emergency department of our hospital from January to December 2018 (n = 511). Scans were categorized as positive (n = 159) or negative (n = 352) for rib fractures according to the clinically approved written CT reports, which served as the reference standard. The bone kernel series (1.5-mm slice thickness) served as the input for a detection prototype algorithm trained to detect both acute and chronic rib fractures based on a deep convolutional neural network. It had previously been trained on an independent sample from eight other institutions (n = 11,455). Results All CTs except one were successfully processed (510/511). The algorithm achieved a sensitivity of 87.4% and a specificity of 91.5% on a per-examination level [per CT scan: rib fracture(s): yes/no]. There were 0.16 false-positives per examination (81/510). On a per-finding level, there were 587 true-positive findings (sensitivity: 65.7%) and 307 false-negatives. Furthermore, 97 true rib fractures were detected that were not mentioned in the written CT reports. A major factor associated with correct detection was displacement. Conclusion We found good performance of a deep learning-based prototype algorithm detecting rib fractures on trauma CT on a per-examination level, at a low rate of false-positives per case. A potential area for clinical application is its use as a screening tool to avoid false-negative radiology reports.
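To make the examination-level figures above concrete, the following is a minimal sketch in plain Python (not the authors' code) of how per-examination sensitivity and specificity are derived from confusion counts; the counts are back-calculated from the reported percentages and are therefore approximate.

```python
# A minimal sketch, not the study's code: examination-level detection
# metrics of the kind reported above, computed from confusion counts.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of fracture-positive examinations flagged by the algorithm."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of fracture-negative examinations correctly left unflagged."""
    return tn / (tn + fp)

# Counts back-calculated (approximately) from the abstract's 87.4% sensitivity
# and 91.5% specificity over 510 processed scans (~159 positive, ~351 negative).
tp, fn = 139, 20
tn, fp = 321, 30

print(f"sensitivity: {sensitivity(tp, fn):.3f}")  # ~0.874
print(f"specificity: {specificity(tn, fp):.3f}")  # ~0.915

# The reported 0.16 false-positives per examination is a per-finding rate:
# 81 false-positive fracture findings spread across 510 examinations.
print(f"false-positive findings per exam: {81 / 510:.2f}")  # 0.16
```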
Affiliation(s)
- Thomas Weikert
- Clinic of Radiology and Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland.
- Luca Andre Noordtzij
- Clinic of Radiology and Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Jens Bremerich
- Clinic of Radiology and Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Bram Stieltjes
- Clinic of Radiology and Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Victor Parmar
- Clinic of Radiology and Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Joshy Cyriac
- Clinic of Radiology and Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Gregor Sommer
- Clinic of Radiology and Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Alexander Walter Sauter
- Clinic of Radiology and Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
73
Zhou QQ, Tang W, Wang J, Hu ZC, Xia ZY, Zhang R, Fan X, Yong W, Yin X, Zhang B, Zhang H. Automatic detection and classification of rib fractures based on patients' CT images and clinical information via convolutional neural network. Eur Radiol 2020; 31:3815-3825. [PMID: 33201278 DOI: 10.1007/s00330-020-07418-z] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Accepted: 10/13/2020] [Indexed: 11/30/2022]
Abstract
OBJECTIVE To develop a convolutional neural network (CNN) model for the automatic detection and classification of rib fractures in actual clinical practice based on cross-modal data (clinical information and CT images). MATERIALS AND METHODS In this retrospective study, CT images and clinical information (age, sex, and medical history) from 1020 participants were collected and divided into a single-centre training set (n = 760; age: 55.8 ± 13.4 years; men: 500), a single-centre testing set (n = 134; age: 53.1 ± 14.3 years; men: 90), and two independent multicentre testing sets from two different hospitals (n = 62, age: 57.97 ± 11.88, men: 41; n = 64, age: 57.40 ± 13.36, men: 35). A Faster Region-based CNN (Faster R-CNN) model was applied to integrate CT images and clinical information. Then, a result merging technique was used to convert 2D inferences into 3D lesion results. The diagnostic performance was assessed on the basis of the receiver operating characteristic (ROC) curve, free-response ROC (fROC) curve, precision, recall (sensitivity), F1-score, and diagnosis time. The classification performance was evaluated in terms of the area under the ROC curve (AUC), sensitivity, and specificity. RESULTS The CNN model showed improved performance on fresh, healing, and old fractures and yielded good classification performance for all three categories when both clinical information and CT images were used, compared to the use of CT images alone. Compared with experienced radiologists, the CNN model achieved higher sensitivity (mean sensitivity: 0.95 > 0.77, 0.89 > 0.61, and 0.80 > 0.55), comparable precision (mean precision: 0.91 > 0.87, 0.84 > 0.77, and 0.95 > 0.70), and a shorter diagnosis time (average reduction of 126.15 s). CONCLUSIONS A CNN model combining CT images and clinical information can automatically detect and classify rib fractures with good performance and feasibility in actual clinical practice. KEY POINTS • The developed convolutional neural network (CNN) performed better on fresh, healing, and old fractures and yielded good classification performance in all three categories when both clinical information and CT images were used, compared to CT images alone. • The CNN model achieved higher sensitivity and comparable precision in all three categories than experienced radiologists, with a shorter diagnosis time, in actual clinical practice.
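The "result merging technique" that converts per-slice 2D inferences into 3D lesion results is not specified in the abstract; the sketch below shows one common way such merging can be done, by chaining detections on consecutive slices whose boxes overlap. The `Box2D` structure, the IoU threshold, and the grouping rule are assumptions for illustration, not the authors' method.

```python
# A minimal sketch (assumed approach, not the study's code) of merging
# per-slice 2D detections into 3D lesion candidates by slice-to-slice overlap.
from dataclasses import dataclass

@dataclass
class Box2D:
    slice_idx: int   # axial slice the detection came from
    x1: float
    y1: float
    x2: float
    y2: float
    score: float     # detector confidence

def iou(a: Box2D, b: Box2D) -> float:
    """Intersection over union of two axis-aligned 2D boxes."""
    ix1, iy1 = max(a.x1, b.x1), max(a.y1, b.y1)
    ix2, iy2 = min(a.x2, b.x2), min(a.y2, b.y2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a.x2 - a.x1) * (a.y2 - a.y1)
    area_b = (b.x2 - b.x1) * (b.y2 - b.y1)
    return inter / (area_a + area_b - inter)

def merge_to_3d(boxes: list[Box2D], iou_thr: float = 0.3) -> list[list[Box2D]]:
    """Chain detections on consecutive slices into 3D lesion groups."""
    groups: list[list[Box2D]] = []
    for box in sorted(boxes, key=lambda b: b.slice_idx):
        for group in groups:
            last = group[-1]
            if box.slice_idx - last.slice_idx == 1 and iou(box, last) >= iou_thr:
                group.append(box)
                break
        else:  # no adjacent overlapping group: start a new lesion candidate
            groups.append([box])
    return groups

# Two overlapping boxes on slices 10-11 chain into one lesion; slice 30 stays separate.
boxes = [Box2D(10, 5, 5, 20, 20, 0.9), Box2D(11, 6, 6, 21, 21, 0.8),
         Box2D(30, 40, 40, 50, 50, 0.7)]
print(len(merge_to_3d(boxes)))  # 2
```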
Affiliation(s)
- Qing-Qing Zhou
- Department of Radiology, The Affiliated Jiangning Hospital of Nanjing Medical University, No. 168, gushan Road, Nanjing, 211100, Jiangsu Province, China
- Wen Tang
- Institute of Advanced Research, Beijing Infervision Technology Co Ltd, Yuanyang International Center, Beijing, 100025, China
- Jiashuo Wang
- Research Center of Biostatistics and Computational Pharmacy, China Pharmaceutical University, No. 639, Long Mian Avenue, Nanjing, 211198, Jiangsu Province, China
- Zhang-Chun Hu
- Department of Radiology, The Affiliated Jiangning Hospital of Nanjing Medical University, No. 168, Gushan Road, Nanjing, 211100, Jiangsu Province, China
- Zi-Yi Xia
- Department of Radiology, The Affiliated Jiangning Hospital of Nanjing Medical University, No. 168, Gushan Road, Nanjing, 211100, Jiangsu Province, China
- Rongguo Zhang
- Institute of Advanced Research, Beijing Infervision Technology Co Ltd, Yuanyang International Center, Beijing, 100025, China
- Xinyi Fan
- Institute of Advanced Research, Beijing Infervision Technology Co Ltd, Yuanyang International Center, Beijing, 100025, China
- Wei Yong
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No. 68, Changle Road, Nanjing, 210006, China
- Xindao Yin
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No. 68, Changle Road, Nanjing, 210006, China
- Bing Zhang
- Department of Radiology, the Affiliated Nanjing Drum Tower Hospital of Nanjing University Medical School, Nanjing, 210008, China
- Hong Zhang
- Department of Radiology, The Affiliated Jiangning Hospital of Nanjing Medical University, No. 168, Gushan Road, Nanjing, 211100, Jiangsu Province, China.
74
Zhu W, Zhang X, Fang S, Wang B, Zhu C. Deep Learning Improves Osteonecrosis Prediction of Femoral Head After Internal Fixation Using Hybrid Patient and Radiograph Variables. Front Med (Lausanne) 2020; 7:573522. [PMID: 33117834 PMCID: PMC7575786 DOI: 10.3389/fmed.2020.573522] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2020] [Accepted: 09/01/2020] [Indexed: 01/09/2023] Open
Abstract
Femoral neck fractures (FNFs) are a major public health problem, leading to a high incidence of death and dysfunction. Osteonecrosis of the femoral head (ONFH) after internal fixation of FNF is a frequently reported complication and a major cause of reoperation. Early intervention can prevent the aggravation of osteonecrosis at a preliminary stage. At present, however, failure to diagnose asymptomatic ONFH after FNF fixation hinders effective intervention at early stages. The primary objective of this study was to develop a predictive model for postoperative ONFH using deep learning (DL) methods applied to plain X-ray radiographs and hybrid patient variables. A two-centre retrospective study of patients who underwent closed reduction and cannulated screw fixation was performed. We trained a convolutional neural network (CNN) model using postoperative pelvic radiographs to output regression-based radiograph variables. A less experienced orthopedic doctor and an experienced orthopedic doctor also evaluated and diagnosed the patients using the postoperative pelvic radiographs. Hybrid nomograms were developed based on patient and radiograph variables to determine predictive performance. A total of 238 patients, including 95 ONFH patients and 143 non-ONFH patients, were included. The accuracy of the validation set was 0.873 for the CNN model, and the algorithm achieved an area under the curve (AUC) value of 0.912 for the prediction. The diagnostic and predictive ability of the algorithm was superior to that of the two doctors based on the postoperative X-rays. The addition of DL-based radiograph variables to the clinical nomogram improved predictive performance, resulting in an AUC of 0.948 (95% CI, 0.920-0.976) and better calibration. The decision curve analysis showed that adding the DL variables increased the clinical usefulness of the nomogram compared with a clinical approach alone. In conclusion, we constructed a DL-facilitated nomogram incorporating a hybrid of radiograph and patient variables, which can be used to improve the prediction of osteonecrosis of the femoral head after internal fixation.
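As a rough illustration of the "hybrid" idea — combining a CNN-derived radiograph score with clinical variables in one predictive model — here is a minimal sketch using logistic regression, the usual statistical backbone of a nomogram. The synthetic data, the `cnn_score` stand-in, and the single clinical variable are assumptions for illustration only, not the study's model.

```python
# A minimal sketch, assuming a logistic-regression backbone for the nomogram;
# all data below are synthetic and illustrative, not the study's cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 238                                  # cohort size taken from the abstract

cnn_score = rng.uniform(0, 1, n)         # stand-in for the CNN radiograph output
age = rng.normal(65, 10, n)              # hypothetical clinical variable
# Toy labels loosely correlated with the CNN score, for demonstration only.
onfh = (rng.uniform(0, 1, n) < 0.2 + 0.5 * cnn_score).astype(int)

X = np.column_stack([cnn_score, age])    # "hybrid" feature matrix
model = LogisticRegression().fit(X, onfh)
probs = model.predict_proba(X)[:, 1]
print("apparent AUC:", round(roc_auc_score(onfh, probs), 3))
```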
Affiliation(s)
- Wanbo Zhu
- Department of Orthopedics, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Department of Orthopedics, Affiliated Anhui Provincial Hospital of Anhui Medical University, Hefei, China
- Xianzuo Zhang
- Department of Orthopedics, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Shiyuan Fang
- Department of Orthopedics, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Bing Wang
- School of Electrical and Information Engineering, Anhui University of Technology, Ma'anshan, China
- Chen Zhu
- Department of Orthopedics, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
75
Automated detection of pulmonary embolism in CT pulmonary angiograms using an AI-powered algorithm. Eur Radiol 2020; 30:6545-6553. [DOI: 10.1007/s00330-020-06998-0] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2019] [Revised: 03/12/2020] [Accepted: 05/29/2020] [Indexed: 12/19/2022]
76
Kaddioui H, Duong L, Joncas J, Bellefleur C, Nahle I, Chémaly O, Nault ML, Parent S, Grimard G, Labelle H. Convolutional Neural Networks for Automatic Risser Stage Assessment. Radiol Artif Intell 2020; 2:e180063. [PMID: 33937822 PMCID: PMC8082353 DOI: 10.1148/ryai.2020180063] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2018] [Revised: 01/20/2020] [Accepted: 01/27/2020] [Indexed: 11/11/2022]
Abstract
PURPOSE To develop an automatic method for assessment of the Risser stage using deep learning that could be used in the management of adolescent idiopathic scoliosis (AIS). MATERIALS AND METHODS In this institutional review board-approved study, a total of 1830 posteroanterior radiographs of patients with AIS (age range, 10-18 years; 70% female) were collected retrospectively and graded manually by six trained readers using the United States Risser staging system. Each radiograph was preprocessed and cropped to include the entire pelvic region. A convolutional neural network was trained to automatically grade conventional radiographs according to the Risser classification. The network was then validated by comparing its accuracy against the interobserver variability of six trained graders from the authors' institution using the Fleiss κ statistical measure. RESULTS Overall agreement between the six observers corresponded to a κ coefficient of 0.65 for the experienced graders and an agreement of 74.5%. The automatic grading method obtained a κ coefficient of 0.72, a substantial agreement with the ground truth, and an overall accuracy of 78.0%. CONCLUSION The high accuracy of the model presented here compared with human readers suggests that this work may provide a new method for the standardization of Risser grading. The model could assist physicians with the task, as well as provide additional insights into the assessment of bone maturity based on radiographs. © RSNA, 2020.
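For readers unfamiliar with the Fleiss κ statistic used to compare the network against the graders, the sketch below computes it from a rater-count matrix; the toy counts are invented for illustration and have no relation to the study's data.

```python
# A minimal sketch of Fleiss kappa: ratings[i][j] counts how many raters
# assigned radiograph i to Risser stage j. The matrix below is made up.
import numpy as np

def fleiss_kappa(ratings: np.ndarray) -> float:
    """Fleiss kappa for a (subjects x categories) matrix of rater counts."""
    n = ratings.sum(axis=1)[0]                     # raters per subject (constant)
    p_j = ratings.sum(axis=0) / ratings.sum()      # overall category proportions
    P_i = (np.square(ratings).sum(axis=1) - n) / (n * (n - 1))  # per-subject agreement
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()  # observed vs chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Six raters grading four radiographs into Risser stages 0-5 (toy counts).
toy = np.array([[6, 0, 0, 0, 0, 0],
                [0, 4, 2, 0, 0, 0],
                [0, 0, 5, 1, 0, 0],
                [0, 0, 0, 0, 3, 3]])
print(round(fleiss_kappa(toy), 3))                 # ~0.54 on these toy counts
```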
Affiliation(s)
- Houda Kaddioui
- From the Department of Software and IT Engineering, Ecole de Technologie Supérieure, 1100 rue Notre-Dame Ouest, Montréal, QC, Canada H3C 1K3 (H.K., L.D.); Division of Orthopedics, Sainte-Justine Hospital, Montréal, Canada (J.J., C.B., I.N., O.C., S.P., G.G., H.L.); and Department of Surgery, Université de Montréal, Montréal, Canada (M.L.N., S.P., G.G., H.L.)
- Luc Duong
- From the Department of Software and IT Engineering, Ecole de Technologie Supérieure, 1100 rue Notre-Dame Ouest, Montréal, QC, Canada H3C 1K3 (H.K., L.D.); Division of Orthopedics, Sainte-Justine Hospital, Montréal, Canada (J.J., C.B., I.N., O.C., S.P., G.G., H.L.); and Department of Surgery, Université de Montréal, Montréal, Canada (M.L.N., S.P., G.G., H.L.)
- Julie Joncas
- From the Department of Software and IT Engineering, Ecole de Technologie Supérieure, 1100 rue Notre-Dame Ouest, Montréal, QC, Canada H3C 1K3 (H.K., L.D.); Division of Orthopedics, Sainte-Justine Hospital, Montréal, Canada (J.J., C.B., I.N., O.C., S.P., G.G., H.L.); and Department of Surgery, Université de Montréal, Montréal, Canada (M.L.N., S.P., G.G., H.L.)
- Christian Bellefleur
- From the Department of Software and IT Engineering, Ecole de Technologie Supérieure, 1100 rue Notre-Dame Ouest, Montréal, QC, Canada H3C 1K3 (H.K., L.D.); Division of Orthopedics, Sainte-Justine Hospital, Montréal, Canada (J.J., C.B., I.N., O.C., S.P., G.G., H.L.); and Department of Surgery, Université de Montréal, Montréal, Canada (M.L.N., S.P., G.G., H.L.)
- Imad Nahle
- From the Department of Software and IT Engineering, Ecole de Technologie Supérieure, 1100 rue Notre-Dame Ouest, Montréal, QC, Canada H3C 1K3 (H.K., L.D.); Division of Orthopedics, Sainte-Justine Hospital, Montréal, Canada (J.J., C.B., I.N., O.C., S.P., G.G., H.L.); and Department of Surgery, Université de Montréal, Montréal, Canada (M.L.N., S.P., G.G., H.L.)
- Olivier Chémaly
- From the Department of Software and IT Engineering, Ecole de Technologie Supérieure, 1100 rue Notre-Dame Ouest, Montréal, QC, Canada H3C 1K3 (H.K., L.D.); Division of Orthopedics, Sainte-Justine Hospital, Montréal, Canada (J.J., C.B., I.N., O.C., S.P., G.G., H.L.); and Department of Surgery, Université de Montréal, Montréal, Canada (M.L.N., S.P., G.G., H.L.)
- Marie-Lyne Nault
- From the Department of Software and IT Engineering, Ecole de Technologie Supérieure, 1100 rue Notre-Dame Ouest, Montréal, QC, Canada H3C 1K3 (H.K., L.D.); Division of Orthopedics, Sainte-Justine Hospital, Montréal, Canada (J.J., C.B., I.N., O.C., S.P., G.G., H.L.); and Department of Surgery, Université de Montréal, Montréal, Canada (M.L.N., S.P., G.G., H.L.)
- Stefan Parent
- From the Department of Software and IT Engineering, Ecole de Technologie Supérieure, 1100 rue Notre-Dame Ouest, Montréal, QC, Canada H3C 1K3 (H.K., L.D.); Division of Orthopedics, Sainte-Justine Hospital, Montréal, Canada (J.J., C.B., I.N., O.C., S.P., G.G., H.L.); and Department of Surgery, Université de Montréal, Montréal, Canada (M.L.N., S.P., G.G., H.L.)
- Guy Grimard
- From the Department of Software and IT Engineering, Ecole de Technologie Supérieure, 1100 rue Notre-Dame Ouest, Montréal, QC, Canada H3C 1K3 (H.K., L.D.); Division of Orthopedics, Sainte-Justine Hospital, Montréal, Canada (J.J., C.B., I.N., O.C., S.P., G.G., H.L.); and Department of Surgery, Université de Montréal, Montréal, Canada (M.L.N., S.P., G.G., H.L.)
- Hubert Labelle
- From the Department of Software and IT Engineering, Ecole de Technologie Supérieure, 1100 rue Notre-Dame Ouest, Montréal, QC, Canada H3C 1K3 (H.K., L.D.); Division of Orthopedics, Sainte-Justine Hospital, Montréal, Canada (J.J., C.B., I.N., O.C., S.P., G.G., H.L.); and Department of Surgery, Université de Montréal, Montréal, Canada (M.L.N., S.P., G.G., H.L.)
77
Ooi GSK, Liew C, Ting DSW, Lim TCC. Artificial Intelligence: A Singapore Response. ANNALS OF THE ACADEMY OF MEDICINE, SINGAPORE 2020. [DOI: 10.47102/annals-acadmed.sg.2019208] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/31/2023]
78
Kulkarni S, Seneviratne N, Baig MS, Khan AHA. Artificial Intelligence in Medicine: Where Are We Now? Acad Radiol 2020; 27:62-70. [PMID: 31636002 DOI: 10.1016/j.acra.2019.10.001] [Citation(s) in RCA: 115] [Impact Index Per Article: 28.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2019] [Revised: 09/27/2019] [Accepted: 10/01/2019] [Indexed: 12/20/2022]
Abstract
Artificial intelligence in medicine has made dramatic progress in recent years. However, much of this progress is seemingly scattered, lacking a cohesive structure for the discerning observer. In this article, we will provide an up-to-date review of artificial intelligence in medicine, with a specific focus on its application to radiology, pathology, ophthalmology, and dermatology. We will discuss a range of selected papers that illustrate the potential uses of artificial intelligence in a technologically advanced future.
Affiliation(s)
- Sagar Kulkarni
- Department of Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia 19104, PA; Barts and The London School of Medicine and Dentistry, London, United Kingdom.
- Nuran Seneviratne
- Department of Geriatric Medicine, The Princess Alexandra Hospital NHS Trust, Harlow, United Kingdom
- Mirza Shaheer Baig
- Barts and The London School of Medicine and Dentistry, London, United Kingdom
79
Han SS, Moon IJ, Lim W, Suh IS, Lee SY, Na JI, Kim SH, Chang SE. Keratinocytic Skin Cancer Detection on the Face Using Region-Based Convolutional Neural Network. JAMA Dermatol 2020; 156:29-37. [PMID: 31799995 PMCID: PMC6902187 DOI: 10.1001/jamadermatol.2019.3807] [Citation(s) in RCA: 64] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Accepted: 10/14/2019] [Indexed: 02/06/2023]
Abstract
Importance Detection of cutaneous cancer on the face using deep-learning algorithms has been challenging because various anatomic structures create curves and shades that confuse the algorithm and can potentially lead to false-positive results. Objective To evaluate whether an algorithm can automatically locate suspected areas and predict the probability of a lesion being malignant. Design, Setting, and Participants Region-based convolutional neural network technology was used to create 924 538 possible lesions by extracting nodular benign lesions from 182 348 clinical photographs. After manually or automatically annotating these possible lesions based on image findings, convolutional neural networks were trained with 1 106 886 image crops to locate and diagnose cancer. Validation data sets (2844 images from 673 patients; mean [SD] age, 58.2 [19.9] years; 308 men [45.8%]; 185 patients with malignant tumors, 305 with benign tumors, and 183 free of tumor) were obtained from 3 hospitals between January 1, 2010, and September 30, 2018. Main Outcomes and Measures The area under the receiver operating characteristic curve, F1 score (harmonic mean of precision and recall; range, 0.000-1.000), and Youden index score (sensitivity + specificity − 1; 0%-100%) were used to compare the performance of the algorithm with that of the participants. Results The algorithm analyzed a mean (SD) of 4.2 (2.4) photographs per patient and reported the malignancy score according to the highest malignancy output. The area under the receiver operating characteristic curve for the validation data set (673 patients) was 0.910. At a high-sensitivity cutoff threshold, the sensitivity and specificity of the model with the 673 patients were 76.8% and 90.6%, respectively. With the test partition (325 images; 80 patients), the performance of the algorithm was compared with the performance of 13 board-certified dermatologists, 34 dermatology residents, 20 nondermatologic physicians, and 52 members of the general public with no medical background. When disease screening performance was evaluated at high-sensitivity areas using the F1 score and Youden index score, the algorithm showed a higher F1 score (0.831 vs 0.653 [0.126], P < .001) and Youden index score (0.675 vs 0.417 [0.124], P < .001) than nondermatologic physicians. The accuracy of the algorithm was comparable with that of dermatologists (F1 score, 0.831 vs 0.835 [0.040]; Youden index score, 0.675 vs 0.671 [0.100]). Conclusions and Relevance The results of the study suggest that the algorithm could localize and diagnose skin cancer without preselection of suspicious lesions by dermatologists.
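Both comparison metrics named above are straightforward to compute from binary predictions, as the sketch below shows; the toy arrays are invented and do not reproduce the study's data.

```python
# A minimal sketch of the two head-to-head metrics from the abstract,
# computed at a fixed operating point on invented toy labels.
import numpy as np

def f1_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Harmonic mean of precision and recall."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    precision = tp / max(np.sum(y_pred == 1), 1)
    recall = tp / max(np.sum(y_true == 1), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)

def youden_index(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Sensitivity + specificity - 1 at a fixed operating point."""
    sens = np.sum((y_pred == 1) & (y_true == 1)) / max(np.sum(y_true == 1), 1)
    spec = np.sum((y_pred == 0) & (y_true == 0)) / max(np.sum(y_true == 0), 1)
    return sens + spec - 1

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])   # toy ground truth
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 1])   # toy algorithm output
print(f"F1: {f1_score(y_true, y_pred):.3f}")          # 0.750
print(f"Youden: {youden_index(y_true, y_pred):.3f}")  # 0.500
```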
Affiliation(s)
- Ik Jun Moon
- Department of Dermatology, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- In Suck Suh
- Department of Plastic and Reconstructive Surgery, Kangnam Sacred Hospital, Hallym University College of Medicine, Seoul, Korea
- Sam Yong Lee
- Department of Plastic and Reconstructive Surgery, Chonnam National University Medical School, Gwangju, Korea
- Jung-Im Na
- Department of Dermatology, Seoul National University Bundang Hospital, Seongnam, Korea
- Seong Hwan Kim
- Department of Plastic and Reconstructive Surgery, Kangnam Sacred Hospital, Hallym University College of Medicine, Seoul, Korea
- Sung Eun Chang
- Department of Dermatology, Asan Medical Center, Ulsan University College of Medicine, Seoul, Korea
80
Murphy A, Liszewski B. Artificial Intelligence and the Medical Radiation Profession: How Our Advocacy Must Inform Future Practice. J Med Imaging Radiat Sci 2019; 50:S15-S19. [DOI: 10.1016/j.jmir.2019.09.001] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2019] [Revised: 08/28/2019] [Accepted: 09/03/2019] [Indexed: 02/06/2023]
81
82
Sanders JW, Fletcher JR, Frank SJ, Liu HL, Johnson JM, Zhou Z, Chen HSM, Venkatesan AM, Kudchadker RJ, Pagel MD, Ma J. Deep learning application engine (DLAE): Development and integration of deep learning algorithms in medical imaging. SOFTWAREX 2019; 10:100347. [PMID: 34113706 PMCID: PMC8188855 DOI: 10.1016/j.softx.2019.100347] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Herein we introduce a deep learning (DL) application engine (DLAE) system concept, present potential uses of it, and describe pathways for its integration in clinical workflows. An open-source software application was developed to provide a code-free approach to DL for medical imaging applications. DLAE supports several DL techniques used in medical imaging, including convolutional neural networks, fully convolutional networks, generative adversarial networks, and bounding box detectors. Several example applications using clinical images were developed and tested to demonstrate the capabilities of DLAE. Additionally, a model deployment example was demonstrated in which DLAE was used to integrate two trained models into a commercial clinical software package.
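Purely to illustrate the "code-free" concept the abstract describes — a network specified as data rather than as handwritten code — here is a minimal, hypothetical sketch in Keras. The spec schema and the `build_from_spec` helper are inventions for illustration; they bear no relation to DLAE's actual formats or API.

```python
# A minimal sketch of a config-driven model builder (hypothetical schema,
# not DLAE's): the network is described as a list of dictionaries.
import tensorflow as tf

LAYERS = {
    "conv":    lambda p: tf.keras.layers.Conv2D(p["filters"], p["kernel"],
                                                activation="relu", padding="same"),
    "pool":    lambda p: tf.keras.layers.MaxPooling2D(p.get("size", 2)),
    "flatten": lambda p: tf.keras.layers.Flatten(),
    "dense":   lambda p: tf.keras.layers.Dense(p["units"],
                                               activation=p.get("activation")),
}

def build_from_spec(spec: list, input_shape: tuple) -> tf.keras.Model:
    """Translate a declarative layer spec into a compiled Keras model."""
    model = tf.keras.Sequential([tf.keras.Input(shape=input_shape)])
    for layer_cfg in spec:
        model.add(LAYERS[layer_cfg["type"]](layer_cfg))
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# A small binary classifier described entirely as data.
spec = [
    {"type": "conv", "filters": 16, "kernel": 3},
    {"type": "pool"},
    {"type": "flatten"},
    {"type": "dense", "units": 1, "activation": "sigmoid"},
]
model = build_from_spec(spec, input_shape=(64, 64, 1))
model.summary()
```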
Affiliation(s)
- Jeremiah W. Sanders
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, 1515 Holcombe Blvd., Unit 1472, TX 77030, United States of America
- Justin R. Fletcher
- Odyssey Systems Consulting, LLC, 550 Lipoa Parkway, Kihei, Maui, HI, United States of America
- Steven J. Frank
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1422, Houston, TX 77030, United States of America
- Ho-Ling Liu
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, 1515 Holcombe Blvd., Unit 1472, TX 77030, United States of America
- Jason M. Johnson
- Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1473, Houston, TX 77030, United States of America
- Zijian Zhou
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Henry Szu-Meng Chen
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Aradhana M. Venkatesan
- Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, 1515 Holcombe Blvd., Unit 1472, TX 77030, United States of America
- Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1473, Houston, TX 77030, United States of America
- Rajat J. Kudchadker
- Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, 1515 Holcombe Blvd., Unit 1472, TX 77030, United States of America
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1420, Houston, TX 77030, United States of America
- Mark D. Pagel
- Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, 1515 Holcombe Blvd., Unit 1472, TX 77030, United States of America
- Department of Cancer Systems Imaging, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1907, Houston, TX 77030, United States of America
- Jingfei Ma
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, United States of America
- Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, 1515 Holcombe Blvd., Unit 1472, TX 77030, United States of America