101. Yin XX, Sun L, Fu Y, Lu R, Zhang Y. U-Net-Based Medical Image Segmentation. J Healthc Eng 2022; 2022:4189781. [PMID: 35463660] [PMCID: PMC9033381] [DOI: 10.1155/2022/4189781]
Abstract
Deep learning has been extensively applied to segmentation in medical imaging. U-Net, proposed in 2015, offers accurate segmentation of small targets and a scalable network architecture. As performance requirements for medical image segmentation have risen in recent years, U-Net has been cited more than 2,500 times, and many scholars have continued to develop its architecture. This paper summarizes medical image segmentation technologies based on U-Net variants with respect to their structure, innovations, and efficiency; reviews and categorizes the related methodology; and introduces the loss functions, evaluation metrics, and modules commonly applied to segmentation in medical imaging, providing a useful reference for future research.
Affiliation(s)
- Xiao-Xia Yin
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
- College of Engineering and Science, Victoria University, Melbourne, VIC 8001, Australia
- Le Sun
- Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, China
- Yuhan Fu
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
- Ruiliang Lu
- Department of Radiology, The First People's Hospital of Foshan, Foshan 528000, China
- Yanchun Zhang
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
102. Clever Hans effect found in a widely used brain tumour MRI dataset. Med Image Anal 2022; 77:102368. [DOI: 10.1016/j.media.2022.102368]
103. What Is Needed for Artificial Intelligence to Be Trusted? Am J Med 2022; 135:421-423. [PMID: 34861193] [DOI: 10.1016/j.amjmed.2021.11.010]
104. Zhao W, Kang Q, Qian F, Li K, Zhu J, Ma B. Convolutional Neural Network-Based Computer-Assisted Diagnosis of Hashimoto's Thyroiditis on Ultrasound. J Clin Endocrinol Metab 2022; 107:953-963. [PMID: 34907442] [PMCID: PMC8947219] [DOI: 10.1210/clinem/dgab870]
Abstract
PURPOSE To investigate the efficiency of deep learning models in the automated diagnosis of Hashimoto's thyroiditis (HT) from real-world ultrasound examinations using computer-assisted diagnosis (CAD) with artificial intelligence. METHODS We retrospectively collected ultrasound images from patients with and without HT from 2 hospitals in China between September 2008 and February 2018. Images were divided into a training set (80%) and a validation set (20%). We ensembled 9 convolutional neural networks (CNNs) as the final model (CAD-HT) for HT classification. The model's diagnostic performance was validated on validation sets from the 2 hospitals, and its accuracy was compared against that of senior and junior radiologists. Subgroup analysis of CAD-HT performance across thyroid hormone levels (hyperthyroidism, hypothyroidism, and euthyroidism) was also performed. RESULTS 39,280 ultrasound images from 21,118 patients were included in this study. The accuracy, sensitivity, and specificity of the CAD-HT model were 0.892, 0.890, and 0.895, respectively, and performance did not differ significantly between the 2 hospitals. The CAD-HT model achieved higher accuracy than senior radiologists (P < 0.001), an improvement of nearly 9%. CAD-HT accuracy was similar (range 0.870-0.894) across the 3 thyroid hormone subgroups. CONCLUSION The CNN-based CAD-HT strategy significantly improved radiologists' diagnostic accuracy for HT. The model demonstrates good performance and robustness across hospitals and thyroid hormone levels.
Affiliation(s)
- Wanjun Zhao
- Department of Thyroid Surgery, West China Hospital, Sichuan University, Chengdu, China
- Qingbo Kang
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Feiyan Qian
- Department of Rehabilitation, Shaoxing Central Hospital, Shaoxing, China
- Kang Li
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Jingqiang Zhu
- Department of Thyroid Surgery, West China Hospital, Sichuan University, Chengdu, China
- Correspondence: Jingqiang Zhu, MD, Department of Thyroid Surgery, West China Hospital, Sichuan University, Chengdu 610041, China; or Buyun Ma, MD, Department of Ultrasonography, West China Hospital of Sichuan University, Chengdu 610041, China.
- Buyun Ma
- Department of Ultrasonography, West China Hospital, Sichuan University, Chengdu, China
105. Silva F, Pereira T, Neves I, Morgado J, Freitas C, Malafaia M, Sousa J, Fonseca J, Negrão E, Flor de Lima B, Correia da Silva M, Madureira AJ, Ramos I, Costa JL, Hespanhol V, Cunha A, Oliveira HP. Towards Machine Learning-Aided Lung Cancer Clinical Routines: Approaches and Open Challenges. J Pers Med 2022; 12:480. [PMID: 35330479] [PMCID: PMC8950137] [DOI: 10.3390/jpm12030480]
Abstract
Advances in computer-aided decision (CAD) systems for clinical routines offer clear benefits in connecting human medical expertise with machine intelligence to achieve better-quality healthcare. Given the high incidence and mortality of lung cancer, the most accurate clinical procedures are needed, and decision support with artificial intelligence (AI) tools is becoming a closer reality. At each stage of the lung cancer clinical pathway, specific obstacles motivate the application of innovative AI solutions. This work provides a comprehensive review of recent research on CAD tools that use computed tomography images for lung cancer-related tasks. We discuss the major challenges and offer critical perspectives on future directions. Although this review focuses on lung cancer, we also set out a clearer definition of the path for integrating AI into healthcare, emphasizing fundamental research points that are crucial for overcoming current barriers.
Affiliation(s)
- Francisco Silva
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
- Correspondence: (F.S.); (T.P.)
- Tania Pereira
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- Correspondence: (F.S.); (T.P.)
- Inês Neves
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- ICBAS—Abel Salazar Biomedical Sciences Institute, University of Porto, 4050-313 Porto, Portugal
- Joana Morgado
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- Cláudia Freitas
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- Mafalda Malafaia
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Joana Sousa
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- João Fonseca
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Eduardo Negrão
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- Beatriz Flor de Lima
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- Miguel Correia da Silva
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- António J. Madureira
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- Isabel Ramos
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- José Luis Costa
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135 Porto, Portugal
- IPATIMUP—Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135 Porto, Portugal
- Venceslau Hespanhol
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- António Cunha
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- UTAD—University of Trás-os-Montes and Alto Douro, 5001-801 Vila Real, Portugal
- Hélder P. Oliveira
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
106. Hurt B, Rubel MA, Masutani EM, Jacobs K, Hahn L, Horowitz M, Kligerman S, Hsiao A. Radiologist-supervised Transfer Learning: Improving Radiographic Localization of Pneumonia and Prognostication of Patients With COVID-19. J Thorac Imaging 2022; 37:90-99. [PMID: 34710891] [PMCID: PMC8863580] [DOI: 10.1097/rti.0000000000000618]
Abstract
PURPOSE To assess the potential of a transfer learning strategy leveraging radiologist supervision to enhance convolutional neural network (CNN)-based localization of pneumonia on radiographs, and to assess the prognostic value of CNN severity quantification in patients evaluated for COVID-19 pneumonia, for whom severity on the presenting radiograph is a known predictor of mortality and intubation. MATERIALS AND METHODS We obtained a CNN previously trained to localize pneumonia, along with the 25,684 radiographs used for its training. We additionally curated 1466 radiographs from patients who had computed tomography (CT) performed on the same day. Regional likelihoods of pneumonia were annotated by cardiothoracic radiologists referencing these CTs. Combining the data, the preexisting CNN was fine-tuned using transfer learning. Whole-image and regional performance of the updated CNN was assessed using area under the receiver operating characteristic curve (AUC) and Dice overlap. Finally, the value of the CNN measurements was assessed with survival analysis in 203 patients with COVID-19 and compared against the modified radiographic assessment of lung edema (mRALE) score. RESULTS Pneumonia detection AUC improved on both internal (0.756 to 0.841) and external (0.864 to 0.876) validation data. Dice overlap also improved, particularly in the lung bases (right: 0.121 to 0.433; left: 0.111 to 0.486). There was strong correlation between radiologist mRALE score and CNN fractional area of involvement (ρ=0.85). Survival analysis showed similarly strong prognostic ability of the CNN and mRALE for mortality, likelihood of intubation, and duration of hospitalization among patients with COVID-19. CONCLUSIONS Radiologist-supervised transfer learning can enhance the ability of CNNs to localize and quantify disease severity. Closed-loop systems incorporating radiologists may be beneficial for the continued improvement of artificial intelligence algorithms.
Affiliation(s)
- Brian Hurt
- Department of Radiology, University of California San Diego School of Medicine
- Meagan A Rubel
- Department of Radiology, University of California San Diego School of Medicine
- Evan M Masutani
- Department of Radiology, University of California San Diego School of Medicine
- Department of Bioengineering, University of California, San Diego, San Diego, CA
- Kathleen Jacobs
- Department of Radiology, University of California San Diego School of Medicine
- Lewis Hahn
- Department of Radiology, University of California San Diego School of Medicine
- Michael Horowitz
- Department of Radiology, University of California San Diego School of Medicine
- Seth Kligerman
- Department of Radiology, University of California San Diego School of Medicine
- Albert Hsiao
- Department of Radiology, University of California San Diego School of Medicine
107. Booth TC, Wiegers EC, Warnert EAH, Schmainda KM, Riemer F, Nechifor RE, Keil VC, Hangel G, Figueiredo P, Álvarez-Torres MDM, Henriksen OM. High-Grade Glioma Treatment Response Monitoring Biomarkers: A Position Statement on the Evidence Supporting the Use of Advanced MRI Techniques in the Clinic, and the Latest Bench-to-Bedside Developments. Part 2: Spectroscopy, Chemical Exchange Saturation, Multiparametric Imaging, and Radiomics. Front Oncol 2022; 11:811425. [PMID: 35340697] [PMCID: PMC8948428] [DOI: 10.3389/fonc.2021.811425]
Abstract
Objective To summarize the evidence for using advanced MRI techniques as monitoring biomarkers in the clinic, and to highlight the latest bench-to-bedside developments. Methods The current evidence on potential monitoring biomarkers was reviewed, and individual modalities of metabolism and/or chemical composition imaging are discussed. Perfusion, permeability, and microstructure imaging were similarly analyzed in Part 1 of this two-part review, which provides valuable background to this article. We appraise the clinical readiness of each modality and consider methodologies involving machine learning (radiomics) and the combination of MRI approaches (multiparametric imaging). Results The biochemical composition of high-grade gliomas differs markedly from healthy brain tissue. Magnetic resonance spectroscopy allows simultaneous acquisition of an array of metabolic alterations, with choline-based ratios appearing consistently discriminatory in treatment response assessment, although challenges remain despite the technique's maturity. Promising directions relate to ultra-high field strengths, 2-hydroxyglutarate analysis, and the use of non-proton nuclei. Labile protons on endogenous proteins can be selectively targeted with chemical exchange saturation transfer to give high-resolution images. The body of evidence for clinical application of amide proton transfer imaging has been building for a decade, but more is required to confirm its use as a monitoring biomarker. Multiparametric methodologies, including the incorporation of nuclear medicine techniques, combine probes measuring different tumor properties. Although potentially synergistic, the limitations of each modality can also be compounded, particularly in the absence of standardization. Machine learning requires large datasets with high-quality annotation, and there is currently only low-level evidence supporting its clinical application as a monitoring biomarker. Conclusion Advanced MRI techniques show great promise in treatment response assessment. The clinical readiness analysis highlights that most monitoring biomarkers require standardized international consensus guidelines, with more facilitation of technique implementation and reporting in the clinic.
Affiliation(s)
- Thomas C. Booth
- School of Biomedical Engineering and Imaging Sciences, King’s College London, St. Thomas’ Hospital, London, United Kingdom
- Department of Neuroradiology, King’s College Hospital NHS Foundation Trust, London, United Kingdom
- Evita C. Wiegers
- Department of Radiology, University Medical Center Utrecht, Utrecht, Netherlands
- Kathleen M. Schmainda
- Department of Biophysics, Medical College of Wisconsin, Milwaukee, WI, United States
- Frank Riemer
- Mohn Medical Imaging and Visualization Centre (MMIV), Department of Radiology, Haukeland University Hospital, Bergen, Norway
- Ruben E. Nechifor
- Department of Clinical Psychology and Psychotherapy, International Institute for the Advanced Studies of Psychotherapy and Applied Mental Health, Babes-Bolyai University, Cluj-Napoca, Romania
- Vera C. Keil
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, location VUmc, Amsterdam, Netherlands
- Gilbert Hangel
- Department of Neurosurgery & High-Field MR Centre, Department of Biomedical Imaging and Image-Guided Therapy, Medical University Vienna, Vienna, Austria
- Patrícia Figueiredo
- Department of Bioengineering and Institute for Systems and Robotics - Lisboa, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Otto M. Henriksen
- Department of Clinical Physiology, Nuclear Medicine and PET, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark
108. Gastounioti A, Desai S, Ahluwalia VS, Conant EF, Kontos D. Artificial intelligence in mammographic phenotyping of breast cancer risk: a narrative review. Breast Cancer Res 2022; 24:14. [PMID: 35184757] [PMCID: PMC8859891] [DOI: 10.1186/s13058-022-01509-z]
Abstract
BACKGROUND Improved breast cancer risk assessment models are needed to enable personalized screening strategies with a better harm-to-benefit ratio than existing screening guidelines, based on earlier detection and better breast cancer outcomes. Computational mammographic phenotypes have demonstrated a promising role in breast cancer risk prediction. With the recent exponential growth of computational efficiency, the artificial intelligence (AI) revolution driven by deep learning has expanded the utility of imaging in predictive models; consequently, AI-based imaging-derived data have led to some of the most promising tools for precision breast cancer screening. MAIN BODY This review synthesizes the current state-of-the-art applications of AI in mammographic phenotyping of breast cancer risk. We discuss the fundamentals of AI and explore the computing advancements that have made AI-based image analysis essential in refining breast cancer risk assessment, covering data derived from both digital mammography and digital breast tomosynthesis. Different aspects of breast cancer risk assessment are targeted, including (a) robust and reproducible evaluation of breast density, a well-established breast cancer risk factor; (b) assessment of a woman's inherent breast cancer risk; and (c) identification of women who are likely to be diagnosed with breast cancer after a negative or routine screen, due to masking or rapid, aggressive tumor growth. Lastly, we discuss AI challenges unique to the computational analysis of mammographic imaging, as well as future directions for this promising research field. CONCLUSIONS We provide a useful reference for AI researchers investigating image-based breast cancer risk assessment, indicating key priorities and challenges that, if properly addressed, could accelerate the implementation of AI-assisted risk stratification to further refine and individualize breast cancer screening strategies.
Affiliation(s)
- Aimilia Gastounioti
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Shyam Desai
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA
- Vinayak S Ahluwalia
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Emily F Conant
- Department of Radiology, Hospital of the University of Pennsylvania, University of Pennsylvania, Philadelphia, PA 19104, USA
- Despina Kontos
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA
109. An ISHAP-based interpretation-model-guided classification method for malignant pulmonary nodule. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2021.107778]
110. Mutasa S, Yi PH. Deciphering musculoskeletal artificial intelligence for clinical applications: how do I get started? Skeletal Radiol 2022; 51:271-278. [PMID: 34191083] [DOI: 10.1007/s00256-021-03850-4]
Abstract
Artificial intelligence (AI) represents a broad category of algorithms, of which deep learning is currently the most impactful. Clinical musculoskeletal radiologists seeking to build the foundational knowledge needed to decipher machine learning research and algorithms currently have few resources to turn to. In this article, we introduce the essential terminology, explain data splits and regularization, provide an introduction to the statistical analyses used in AI research, offer a primer on what deep learning can and cannot do, and give a brief overview of clinical integration methods. Our goal is to improve the reader's understanding of this field.
Affiliation(s)
- Simukayi Mutasa
- The Center of Artificial Intelligence in Medical Imaging, Division of Musculoskeletal Imaging, University of California, Irvine, 101 The City Dr S, Orange, CA 92868, USA
- Paul H Yi
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland Intelligent Imaging Center, University of Maryland School of Medicine, 22 South Greene Street, First Floor, Baltimore, MD 21201, USA
111. Nam D, Chapiro J, Paradis V, Seraphin TP, Kather JN. Artificial intelligence in liver diseases: improving diagnostics, prognostics and response prediction. JHEP Rep 2022; 4:100443. [PMID: 35243281] [PMCID: PMC8867112] [DOI: 10.1016/j.jhepr.2022.100443]
Abstract
Clinical routine in hepatology involves the diagnosis and treatment of a wide spectrum of metabolic, infectious, autoimmune and neoplastic diseases. Clinicians integrate qualitative and quantitative information from multiple data sources to make a diagnosis, prognosticate the disease course, and recommend a treatment. In the last 5 years, advances in artificial intelligence (AI), particularly in deep learning, have made it possible to extract clinically relevant information from complex and diverse clinical datasets. In particular, histopathology and radiology image data contain diagnostic, prognostic and predictive information which AI can extract. Ultimately, such AI systems could be implemented in clinical routine as decision support tools. However, in the context of hepatology, this requires further large-scale clinical validation and regulatory approval. Herein, we summarise the state of the art in AI in hepatology with a particular focus on histopathology and radiology data. We present a roadmap for the further development of novel biomarkers in hepatology and outline critical obstacles which need to be overcome.
112.
Abstract
Artificial intelligence (AI) has illuminated a clear path towards an evolving health-care system replete with enhanced precision and computing capabilities. Medical imaging analysis can be strengthened by machine learning as the multidimensional data generated by imaging naturally lends itself to hierarchical classification. In this Review, we describe the role of machine intelligence in image-based endocrine cancer diagnostics. We first provide a brief overview of AI and consider its intuitive incorporation into the clinical workflow. We then discuss how AI can be applied for the characterization of adrenal, pancreatic, pituitary and thyroid masses in order to support clinicians in their diagnostic interpretations. This Review also puts forth a number of key evaluation criteria for machine learning in medicine that physicians can use in their appraisals of these algorithms. We identify mitigation strategies to address ongoing challenges around data availability and model interpretability in the context of endocrine cancer diagnosis. Finally, we delve into frontiers in systems integration for AI, discussing automated pipelines and evolving computing platforms that leverage distributed, decentralized and quantum techniques.
Affiliation(s)
- Ihab R Kamel
- Department of Imaging & Imaging Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Harrison X Bai
- Department of Imaging & Imaging Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
113. Booth TC, Grzeda M, Chelliah A, Roman A, Al Busaidi A, Dragos C, Shuaib H, Luis A, Mirchandani A, Alparslan B, Mansoor N, Lavrador J, Vergani F, Ashkan K, Modat M, Ourselin S. Imaging Biomarkers of Glioblastoma Treatment Response: A Systematic Review and Meta-Analysis of Recent Machine Learning Studies. Front Oncol 2022; 12:799662. [PMID: 35174084] [PMCID: PMC8842649] [DOI: 10.3389/fonc.2022.799662]
Abstract
OBJECTIVE Monitoring biomarkers using machine learning (ML) may help determine glioblastoma treatment response. We systematically reviewed the quality and performance accuracy of recently published studies. METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses: Diagnostic Test Accuracy (PRISMA-DTA) guidance, we extracted articles from MEDLINE, EMBASE and the Cochrane Register between 09/2018 and 01/2021. Included participants were adults with glioblastoma who had undergone standard treatment (maximal resection, radiotherapy with concomitant and adjuvant temozolomide) and follow-up imaging to determine treatment response status (specifically, distinguishing progression/recurrence from its mimics, the target condition). Using the Quality Assessment of Diagnostic Accuracy Studies 2 tool and the Checklist for Artificial Intelligence in Medical Imaging, we assessed bias risk and applicability concerns. We determined test set performance accuracy (sensitivity, specificity, precision, F1-score, balanced accuracy) and used a bivariate random-effects model to estimate pooled sensitivity, specificity, and area under the receiver operating characteristic curve (ROC-AUC). Pooled balanced accuracy, positive/negative likelihood ratios (PLR/NLR) and diagnostic odds ratio (DOR) were also calculated. The review was registered with PROSPERO (CRD42021261965). RESULTS Eighteen studies were included (1335 patients for training and 384 for testing). Small patient numbers, high bias risk, applicability concerns (particularly confounding in the reference standard and patient selection) and a low level of evidence permit only limited conclusions. The ten studies (10/18, 56%) included in the meta-analysis gave a pooled sensitivity of 0.769 (95% CI 0.649-0.858); specificity 0.648 (0.532-0.749); balanced accuracy 0.706 (0.623-0.779); PLR 2.220 (1.560-3.140); NLR 0.366 (0.213-0.572); DOR 6.670 (2.800-13.500); and ROC-AUC 0.765. CONCLUSION ML models using MRI features to distinguish progression from its mimics appear to demonstrate good diagnostic performance. However, study quality and design require improvement.
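The summary measures reported in this abstract are related by standard diagnostic-accuracy identities: PLR = sensitivity / (1 − specificity), NLR = (1 − sensitivity) / specificity, and DOR = PLR / NLR. A minimal illustrative sketch of these identities (not the authors' code; the function name `diagnostic_summary` is hypothetical):

```python
def diagnostic_summary(sensitivity: float, specificity: float) -> dict:
    """Derive standard diagnostic summary measures from sensitivity and specificity."""
    plr = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    nlr = (1.0 - sensitivity) / specificity   # negative likelihood ratio
    return {
        "balanced_accuracy": (sensitivity + specificity) / 2.0,
        "PLR": plr,
        "NLR": nlr,
        "DOR": plr / nlr,                     # diagnostic odds ratio
    }

# Pooled point estimates reported in the abstract above.
summary = diagnostic_summary(0.769, 0.648)
```

Note that values derived this way from the pooled point estimates (PLR ≈ 2.18, DOR ≈ 6.13) differ slightly from the pooled PLR and DOR quoted in the abstract (2.220, 6.670), because the latter are pooled directly under the bivariate random-effects model rather than computed from the pooled sensitivity and specificity.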
Affiliation(s)
- Thomas C. Booth
- School of Biomedical Engineering & Imaging Sciences, King’s College London, St. Thomas’ Hospital, London, United Kingdom
- Department of Neuroradiology, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
- Mariusz Grzeda
- School of Biomedical Engineering & Imaging Sciences, King’s College London, St. Thomas’ Hospital, London, United Kingdom
- Alysha Chelliah
- School of Biomedical Engineering & Imaging Sciences, King’s College London, St. Thomas’ Hospital, London, United Kingdom
- Andrei Roman
- Department of Radiology, Guy’s & St. Thomas’ National Health Service Foundation Trust, London, United Kingdom
- Department of Radiology, The Oncology Institute “Prof. Dr. Ion Chiricuţă” Cluj-Napoca, Cluj-Napoca, Romania
- Ayisha Al Busaidi
- Department of Neuroradiology, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
- Carmen Dragos
- Department of Radiology, Buckinghamshire Healthcare National Health Service Trust, Amersham, United Kingdom
- Haris Shuaib
- Department of Medical Physics, Guy’s & St. Thomas’ National Health Service Foundation Trust, London, United Kingdom
- Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, United Kingdom
- Aysha Luis
- Department of Neuroradiology, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
- Ayesha Mirchandani
- Department of Radiology, Cambridge University Hospitals National Health Service Foundation Trust, Cambridge, United Kingdom
- Burcu Alparslan
- Department of Neuroradiology, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
- Department of Radiology, Kocaeli University, İzmit, Turkey
- Nina Mansoor
- Department of Neuroradiology, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
- Jose Lavrador
- Department of Neurosurgery, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
- Francesco Vergani
- Department of Neurosurgery, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
- Keyoumars Ashkan
- Department of Neurosurgery, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
| | - Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King’s College London, St. Thomas’ Hospital, London, United Kingdom
| | - Sebastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King’s College London, St. Thomas’ Hospital, London, United Kingdom
| |
Collapse
|
114
|
Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med 2022; 28:31-38. [PMID: 35058619 DOI: 10.1038/s41591-021-01614-0] [Citation(s) in RCA: 502] [Impact Index Per Article: 251.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Accepted: 11/05/2021] [Indexed: 02/06/2023]
Abstract
Artificial intelligence (AI) is poised to broadly reshape medicine, potentially improving the experiences of both clinicians and patients. We discuss key findings from a 2-year weekly effort to track and share key developments in medical AI. We cover prospective studies and advances in medical image analysis, which have reduced the gap between research and deployment. We also address several promising avenues for novel medical AI research, including non-image data sources, unconventional problem formulations and human-AI collaboration. Finally, we consider serious technical and ethical challenges in issues spanning from data scarcity to racial bias. As these challenges are addressed, AI's potential may be realized, making healthcare more accurate, efficient and accessible for patients worldwide.
Collapse
Affiliation(s)
- Pranav Rajpurkar
- Department of Biomedical Informatics, Harvard University, Cambridge, MA, USA
| | - Emma Chen
- Department of Computer Science, Stanford University, Stanford, CA, USA
| | - Oishi Banerjee
- Department of Computer Science, Stanford University, Stanford, CA, USA
| | - Eric J Topol
- Scripps Translational Science Institute, San Diego, CA, USA.
| |
Collapse
|
115
|
Zamani J, Sadr A, Javadi AH. Diagnosis of early mild cognitive impairment using a multiobjective optimization algorithm based on T1-MRI data. Sci Rep 2022; 12:1020. [PMID: 35046444 PMCID: PMC8770462 DOI: 10.1038/s41598-022-04943-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2021] [Accepted: 01/04/2022] [Indexed: 12/03/2022] Open
Abstract
Alzheimer's disease (AD) is the most prevalent form of dementia. Accurate diagnosis of AD, especially in the early phases, is very important for timely intervention. It has been suggested that brain atrophy, as measured with structural magnetic resonance imaging (sMRI), can be an effective marker of neurodegeneration. While classification methods have been successful in the diagnosis of AD, their performance has been very poor in diagnosing those in the early stages of mild cognitive impairment (EMCI). Therefore, in this study we investigated whether optimisation based on evolutionary algorithms (EA) can be an effective tool in distinguishing patients with EMCI from cognitively normal participants (CNs). Structural MRI data for patients with EMCI (n = 54) and CN participants (n = 56) were extracted from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Using three automatic brain segmentation methods, we extracted volumetric parameters as input to the optimisation algorithms. Our method achieved a classification accuracy of greater than 93%. This accuracy is higher than that of previously suggested methods for classifying CN and EMCI participants using single or multiple imaging modalities. Our results show that, with an effective optimisation method, a single modality of biomarkers can be enough to achieve high classification accuracy.
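As an illustration of the kind of evolutionary search described above, here is a toy (1+1) evolutionary algorithm that selects feature subsets for a simple nearest-centroid classifier on synthetic data. Everything in this sketch (the data generator, the fitness function, the mutation operator) is a hypothetical stand-in, not the authors' method or the ADNI data:

```python
# Toy (1+1) evolutionary algorithm for feature selection: mutate one bit of
# a feature mask per generation and keep the child if it scores no worse.
import random

random.seed(0)
N_FEATURES, N_SAMPLES = 20, 60

# Synthetic data: features 0-4 carry class signal, the rest are noise.
def make_sample(label):
    return [label + random.gauss(0, 0.5) if j < 5 else random.gauss(0, 1)
            for j in range(N_FEATURES)], label

data = [make_sample(i % 2) for i in range(N_SAMPLES)]

def fitness(mask):
    """Training accuracy of a nearest-centroid rule on the selected features."""
    idx = [j for j in range(N_FEATURES) if mask[j]]
    if not idx:
        return 0.0
    centroids = {}
    for label in (0, 1):
        rows = [x for x, y in data if y == label]
        centroids[label] = [sum(r[j] for r in rows) / len(rows) for j in idx]
    correct = 0
    for x, y in data:
        dists = {label: sum((x[j] - c[k]) ** 2 for k, j in enumerate(idx))
                 for label, c in centroids.items()}
        correct += min(dists, key=dists.get) == y
    return correct / len(data)

# (1+1) EA loop: elitist selection guarantees fitness never decreases.
mask = [random.random() < 0.5 for _ in range(N_FEATURES)]
initial = best = fitness(mask)
for _ in range(200):
    child = mask[:]
    j = random.randrange(N_FEATURES)
    child[j] = not child[j]
    f = fitness(child)
    if f >= best:
        mask, best = child, f

print(f"initial fitness={initial:.2f}, final fitness={best:.2f}")
```

Real EA-based pipelines typically use population-based operators (crossover, tournament selection) and cross-validated fitness rather than training accuracy; this sketch only shows the search-over-feature-masks idea.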
Collapse
Affiliation(s)
- Jafar Zamani
- School of Electrical Engineering, Iran University of Science and Technology, Narmak, Tehran, Iran
| | - Ali Sadr
- School of Electrical Engineering, Iran University of Science and Technology, Narmak, Tehran, Iran.
| | - Amir-Homayoun Javadi
- School of Psychology, Keynes College, University of Kent, Canterbury, UK.
- School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran.
| |
Collapse
|
116
|
Automated grading of enlarged perivascular spaces in clinical imaging data of an acute stroke cohort using an interpretable, 3D deep learning framework. Sci Rep 2022; 12:788. [PMID: 35039524 PMCID: PMC8764081 DOI: 10.1038/s41598-021-04287-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Accepted: 12/13/2021] [Indexed: 01/10/2023] Open
Abstract
Enlarged perivascular spaces (EPVS), specifically in stroke patients, have been shown to correlate strongly with other measures of small vessel disease and with cognitive impairment at 1-year follow-up. Typical grading of EPVS is challenging and time consuming and is usually based on a subjective visual rating scale. The purpose of the current study was to develop an interpretable 3D neural network for grading EPVS severity at the level of the basal ganglia using clinical-grade imaging in a heterogeneous acute stroke cohort, in the context of total cerebral small vessel disease (CSVD) burden. T2-weighted images from a retrospective cohort of 262 acute stroke patients, collected in 2015 from 5 regional medical centers, were used for the analyses. Patients were given a label of 0 for none-to-mild EPVS (< 10) and 1 for moderate-to-severe EPVS (≥ 10). A three-dimensional residual network of 152 layers (3D-ResNet-152) was created to predict EPVS severity, and 3D gradient class activation mapping (3DGradCAM) was used for visual interpretation of the results. Our model achieved an accuracy of 0.897 and an area under the curve of 0.879 on a hold-out test set comprising 15% of the total cohort (n = 39). 3DGradCAM showed areas of focus in physiologically valid locations, including other areas where EPVS are prevalent. These maps also suggested that the distribution of class activation values is indicative of the model's confidence in its decision. Potential clinical implications of our results include: (1) support for the feasibility of automated EPVS scoring using clinical-grade neuroimaging data, potentially alleviating rater subjectivity and improving the confidence of visual rating scales, and (2) demonstration that explainable models are critical for clinical translation.
Collapse
|
117
|
Mattogno PP, Caccavella VM, Giordano M, D'Alessandris QG, Chiloiro S, Tariciotti L, Olivi A, Lauretti L. Interpretable Machine Learning-Based Prediction of Intraoperative Cerebrospinal Fluid Leakage in Endoscopic Transsphenoidal Pituitary Surgery: A Pilot Study. J Neurol Surg B Skull Base 2022; 83:485-495. [PMID: 36091632 PMCID: PMC9462964 DOI: 10.1055/s-0041-1740621] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 11/12/2021] [Indexed: 01/18/2023] Open
Abstract
Purpose Transsphenoidal surgery (TSS) for pituitary adenomas can be complicated by intraoperative cerebrospinal fluid (CSF) leakage (IOL). IOL significantly affects the course of surgery and predisposes to postoperative CSF leakage, a major source of morbidity and mortality in the postoperative period. The authors trained and internally validated a Random Forest (RF) prediction model to preoperatively identify patients at high risk of IOL. A local interpretable model-agnostic explanations (LIME) algorithm was employed to elucidate the main drivers behind each machine learning (ML) model prediction. Methods The data of 210 patients who underwent TSS were collected; first, risk factors for IOL were identified via conventional statistical methods (multivariable logistic regression). Then, the authors trained, optimized, and audited an RF prediction model. Results IOL was reported in 45 patients (21.5%). The recursive feature selection algorithm identified the following variables as the most significant determinants of IOL: Knosp grade, sellar Hardy grade, suprasellar Hardy grade, tumor diameter (on the X, Y, and Z axes), intercarotid distance, and secreting status (nonfunctioning and growth hormone [GH] secreting). Leveraging the predictive value of these variables, the RF prediction model achieved an area under the curve (AUC) of 0.83 (95% confidence interval [CI]: 0.78-0.86), significantly outperforming the multivariable logistic regression model (AUC = 0.63). Conclusion An RF model that reliably identifies patients at risk of IOL was successfully trained and internally validated. ML-based prediction models can predict events that were previously judged nearly unpredictable; their deployment in clinical practice may result in improved patient care and reduced postoperative morbidity and healthcare costs.
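A sketch of the pipeline the abstract outlines: recursive feature elimination feeding a Random Forest evaluated by AUC. The synthetic data, feature counts, and hyperparameters below are illustrative stand-ins for the clinical variables (Knosp grade, tumor diameters, etc.), not the study's dataset or settings:

```python
# Illustrative recursive-feature-elimination + Random Forest pipeline,
# loosely mirroring the abstract's design (210 patients, ~21.5% positive).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic surrogate cohort: class imbalance roughly matches the study.
X, y = make_classification(n_samples=210, n_features=12, n_informative=6,
                           weights=[0.785], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Recursive feature elimination, ranked by RF feature importances.
selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=6).fit(X_tr, y_tr)

# Fit the final RF on the selected features and score by ROC-AUC.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(selector.transform(X_tr), y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(selector.transform(X_te))[:, 1])
print(f"test ROC-AUC: {auc:.2f}")
```

The study also audited calibration and used LIME for per-patient explanations; those steps are omitted here for brevity.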
Collapse
Affiliation(s)
- Pier Paolo Mattogno
- Department of Neurosurgery, Fondazione Policlinico Universitario A. Gemelli Istituto di Ricovero e Cura a Carattere Scientifico, Università Cattolica del Sacro Cuore, Rome, Italy
| | - Valerio M. Caccavella
- Department of Neurosurgery, Fondazione Policlinico Universitario A. Gemelli Istituto di Ricovero e Cura a Carattere Scientifico, Università Cattolica del Sacro Cuore, Rome, Italy
| | - Martina Giordano
- Department of Neurosurgery, Fondazione Policlinico Universitario A. Gemelli Istituto di Ricovero e Cura a Carattere Scientifico, Università Cattolica del Sacro Cuore, Rome, Italy. Address for correspondence: Martina Giordano, MD, Department of Neurosurgery, Fondazione Policlinico Universitario A. Gemelli Istituto di Ricovero e Cura a Carattere Scientifico, Università Cattolica del Sacro Cuore, Largo Agostino Gemelli 8, 00168 Rome, Italy
| | - Quintino G. D'Alessandris
- Department of Neurosurgery, Fondazione Policlinico Universitario A. Gemelli Istituto di Ricovero e Cura a Carattere Scientifico, Università Cattolica del Sacro Cuore, Rome, Italy
| | - Sabrina Chiloiro
- Department of Endocrinology, Fondazione Policlinico Universitario A. Gemelli Istituto di Ricovero e Cura a Carattere Scientifico Università Cattolica del Sacro Cuore, Rome, Italy
| | - Leonardo Tariciotti
- Unit of Neurosurgery, Fondazione Istituto di Ricovero e Cura a Carattere Scientifico Cà Granda Ospedale Maggiore Policlinico, Milan, Italy,University of Milan, Milan, Italy
| | - Alessandro Olivi
- Department of Neurosurgery, Fondazione Policlinico Universitario A. Gemelli Istituto di Ricovero e Cura a Carattere Scientifico, Università Cattolica del Sacro Cuore, Rome, Italy
| | - Liverana Lauretti
- Department of Neurosurgery, Fondazione Policlinico Universitario A. Gemelli Istituto di Ricovero e Cura a Carattere Scientifico, Università Cattolica del Sacro Cuore, Rome, Italy
| |
Collapse
|
118
|
Choi JW, Cho YJ, Ha JY, Lee YY, Koh SY, Seo JY, Choi YH, Cheon JE, Phi JH, Kim I, Yang J, Kim WS. Deep Learning-Assisted Diagnosis of Pediatric Skull Fractures on Plain Radiographs. Korean J Radiol 2022; 23:343-354. [PMID: 35029078 PMCID: PMC8876653 DOI: 10.3348/kjr.2021.0449] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Revised: 10/27/2021] [Accepted: 11/07/2021] [Indexed: 11/21/2022] Open
Abstract
Objective To develop and evaluate a deep learning-based artificial intelligence (AI) model for detecting skull fractures on plain radiographs in children. Materials and Methods This retrospective multi-center study consisted of a development dataset acquired from two hospitals (n = 149 and 264) and an external test set (n = 95) from a third hospital. The datasets included children with head trauma who underwent both skull radiography and cranial computed tomography (CT). The development dataset was split into training, tuning, and internal test sets in a ratio of 7:1:2. The reference standard for skull fracture was cranial CT. Two radiology residents, a pediatric radiologist, and two emergency physicians participated in a two-session observer study on the external test set with and without AI assistance. We obtained the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity along with their 95% confidence intervals (CIs). Results The AI model showed an AUROC of 0.922 (95% CI, 0.842–0.969) in the internal test set and 0.870 (95% CI, 0.785–0.930) in the external test set. The model had a sensitivity of 81.1% (95% CI, 64.8%–92.0%) and a specificity of 91.3% (95% CI, 79.2%–97.6%) for the internal test set, and 78.9% (95% CI, 54.4%–93.9%) and 88.2% (95% CI, 78.7%–94.4%), respectively, for the external test set. With the model's assistance, a significant AUROC improvement was observed for the radiology residents (pooled difference from reading without AI assistance, 0.094; 95% CI, 0.020–0.168; p = 0.012) and the emergency physicians (pooled difference, 0.069; 95% CI, 0.002–0.136; p = 0.043), but not for the pediatric radiologist (difference, 0.008; 95% CI, -0.074 to 0.090; p = 0.850). Conclusion A deep learning-based AI model improved the performance of inexperienced radiologists and emergency physicians in diagnosing pediatric skull fractures on plain radiographs.
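The sensitivity and specificity estimates above are binomial proportions with 95% CIs. A minimal sketch of how such intervals can be computed, using the Wilson score method for illustration; the counts are hypothetical (back-calculated from the rounded 81.1% internal-test sensitivity) and the paper does not state which interval method it used, so these bounds need not match the published ones:

```python
# Wilson score interval for a binomial proportion k successes out of n trials.
import math

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for the proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical counts consistent with the rounded internal-test sensitivity:
# about 30 of 37 fracture cases detected gives 81.1%.
k, n = 30, 37
lo, hi = wilson_ci(k, n)
print(f"sensitivity {k / n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

Exact (Clopper-Pearson) intervals, common in radiology papers, are somewhat wider than Wilson intervals at these sample sizes.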
Collapse
Affiliation(s)
- Jae Won Choi
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea.,Department of Radiology, Armed Forces Yangju Hospital, Yangju, Korea
| | - Yeon Jin Cho
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea.,Department of Radiology, Seoul National University Hospital, Seoul, Korea.
| | - Ji Young Ha
- Department of Radiology, Gyeongsang National University Changwon Hospital, Changwon, Korea
| | - Yun Young Lee
- Department of Radiology, Chonnam National University Hospital, Gwangju, Korea
| | - Seok Young Koh
- Department of Radiology, Seoul National University Hospital, Seoul, Korea
| | - June Young Seo
- Department of Radiology, Seoul National University Hospital, Seoul, Korea
| | - Young Hun Choi
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea.,Department of Radiology, Seoul National University Hospital, Seoul, Korea
| | - Jung-Eun Cheon
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea.,Department of Radiology, Seoul National University Hospital, Seoul, Korea.,Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Korea
| | - Ji Hoon Phi
- Division of Pediatric Neurosurgery, Seoul National University Children's Hospital, Seoul, Korea
| | - Injoon Kim
- Department of Emergency Medicine, Armed Forces Yangju Hospital, Yangju, Korea
| | | | - Woo Sun Kim
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea.,Department of Radiology, Seoul National University Hospital, Seoul, Korea.,Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Korea
| |
Collapse
|
119
|
Linguraru MG, Maier-Hein L, Summers RM, Kahn CE. RSNA-MICCAI Panel Discussion: 2. Leveraging the Full Potential of AI-Radiologists and Data Scientists Working Together. Radiol Artif Intell 2021; 3:e210248. [PMID: 34870225 DOI: 10.1148/ryai.2021210248] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Revised: 10/13/2021] [Accepted: 10/13/2021] [Indexed: 11/11/2022]
Abstract
In March 2021, the Radiological Society of North America hosted a virtual panel discussion with members of the Medical Image Computing and Computer Assisted Intervention Society. Both organizations share a vision to develop radiologic and medical imaging techniques through advanced quantitative imaging biomarkers and artificial intelligence. The panel addressed how radiologists and data scientists can collaborate to advance the science of AI in radiology. Keywords: Adults and Pediatrics, Segmentation, Feature Detection, Quantification, Diagnosis/Classification, Prognosis/Classification © RSNA, 2021.
Collapse
Affiliation(s)
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC (M.G.L.); Department of Computer Assisted Medical Interventions, German Cancer Research Centre, Heidelberg, Germany (L.M.H.); Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); and Department of Radiology, University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA 19104 (C.E.K.)
| | - Lena Maier-Hein
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC (M.G.L.); Department of Computer Assisted Medical Interventions, German Cancer Research Centre, Heidelberg, Germany (L.M.H.); Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); and Department of Radiology, University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA 19104 (C.E.K.)
| | - Ronald M Summers
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC (M.G.L.); Department of Computer Assisted Medical Interventions, German Cancer Research Centre, Heidelberg, Germany (L.M.H.); Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); and Department of Radiology, University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA 19104 (C.E.K.)
| | - Charles E Kahn
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC (M.G.L.); Department of Computer Assisted Medical Interventions, German Cancer Research Centre, Heidelberg, Germany (L.M.H.); Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); and Department of Radiology, University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA 19104 (C.E.K.)
| |
Collapse
|
120
|
Rainey C, O'Regan T, Matthew J, Skelton E, Woznitza N, Chu KY, Goodman S, McConnell J, Hughes C, Bond R, McFadden S, Malamateniou C. Beauty Is in the AI of the Beholder: Are We Ready for the Clinical Integration of Artificial Intelligence in Radiography? An Exploratory Analysis of Perceived AI Knowledge, Skills, Confidence, and Education Perspectives of UK Radiographers. Front Digit Health 2021; 3:739327. [PMID: 34859245 PMCID: PMC8631824 DOI: 10.3389/fdgth.2021.739327] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2021] [Accepted: 10/19/2021] [Indexed: 12/19/2022] Open
Abstract
Introduction: The use of artificial intelligence (AI) in medical imaging and radiotherapy has been met with both scepticism and excitement. However, clinical integration of AI is already well underway. Many authors have recently reported on the AI knowledge and perceptions of radiologists/medical staff and students; however, there is a paucity of information regarding radiographers. Published literature agrees that AI is likely to have a significant impact on radiology practice. As radiographers are at the forefront of radiology service delivery, an awareness of their current level of perceived knowledge, skills, and confidence in AI is essential to identify any educational needs necessary for successful adoption into practice. Aim: The aim of this survey was to determine the perceived knowledge, skills, and confidence in AI amongst UK radiographers and to highlight priorities for educational provision to support a digital healthcare ecosystem. Methods: A survey was created on Qualtrics® and promoted via social media (Twitter®/LinkedIn®). This survey was open to all UK radiographers, including students and retired radiographers. Participants were recruited by convenience and snowball sampling. Demographic information was gathered, as well as data on the perceived, self-reported knowledge, skills, and confidence in AI of respondents. Insight into what participants understand by the term "AI" was gained by means of a free-text response. Quantitative analysis was performed using SPSS® and qualitative thematic analysis was performed on NVivo®. Results: Four hundred and eleven responses were collected (80% from a diagnostic radiography and 20% from a radiotherapy background), broadly representative of the workforce distribution in the UK.
Although many respondents stated that they understood the concept of AI in general (78.7% of diagnostic and 52.1% of therapeutic radiography respondents), there was a notable lack of knowledge of AI principles, understanding of AI terminology, and skills and confidence in the use of AI technology. Many participants (57% of diagnostic and 49% of radiotherapy respondents) do not feel adequately trained to implement AI in the clinical setting. Furthermore, 52% and 64%, respectively, said they have not developed any skills in AI, whilst 62% and 55%, respectively, stated that there is not enough AI training for radiographers. The majority of respondents indicated an urgent need for further education (77.4% of diagnostic and 73.9% of therapeutic radiographers felt they had not had adequate training in AI), with many stating that they had to educate themselves to gain basic AI skills. Notable correlations between confidence in working with AI and gender, age, and highest qualification were reported. Conclusion: Knowledge of AI terminology, principles, and applications by healthcare practitioners is necessary for the adoption and integration of AI applications. The results of this survey highlight radiographers' perceived lack of knowledge, skills, and confidence in applying AI solutions, but also underline the need for formalised education on AI to prepare the current and prospective workforce for the upcoming clinical integration of AI in healthcare, so that they can safely and efficiently navigate a digital future. Focus should be given to the different needs of learners depending on age, gender, and highest qualification to ensure optimal integration.
Collapse
Affiliation(s)
- Clare Rainey
- Faculty of Life and Health Sciences, School of Health Sciences, Ulster University, Newtownabbey, United Kingdom
| | - Tracy O'Regan
- The Society and College of Radiographers, London, United Kingdom
| | - Jacqueline Matthew
- School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, United Kingdom
| | - Emily Skelton
- School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, United Kingdom.,Department of Radiography, Division of Midwifery and Radiography, School of Health Sciences, University of London, London, United Kingdom
| | - Nick Woznitza
- University College London Hospitals, London, United Kingdom.,School of Allied and Public Health Professions, Canterbury Christ Church University, Canterbury, United Kingdom
| | - Kwun-Ye Chu
- Department of Oncology, Oxford Institute for Radiation Oncology, University of Oxford, Oxford, United Kingdom.,Radiotherapy Department, Churchill Hospital, Oxford University Hospitals NHS FT, Oxford, United Kingdom
| | - Spencer Goodman
- The Society and College of Radiographers, London, United Kingdom
| | | | - Ciara Hughes
- Faculty of Life and Health Sciences, School of Health Sciences, Ulster University, Newtownabbey, United Kingdom
| | - Raymond Bond
- Faculty of Computing, Engineering and the Built Environment, School of Computing, Ulster University, Newtownabbey, United Kingdom
| | - Sonyia McFadden
- Faculty of Life and Health Sciences, School of Health Sciences, Ulster University, Newtownabbey, United Kingdom
| | - Christina Malamateniou
- School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, United Kingdom.,Department of Radiography, Division of Midwifery and Radiography, School of Health Sciences, University of London, London, United Kingdom
| |
Collapse
|
121
|
Filice RW, Kahn CE. Biomedical Ontologies to Guide AI Development in Radiology. J Digit Imaging 2021; 34:1331-1341. [PMID: 34724143 PMCID: PMC8669056 DOI: 10.1007/s10278-021-00527-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Revised: 04/27/2021] [Accepted: 10/13/2021] [Indexed: 10/25/2022] Open
Abstract
The advent of deep learning has engendered renewed and rapidly growing interest in artificial intelligence (AI) in radiology to analyze images, manipulate textual reports, and plan interventions. Applications of deep learning and other AI approaches must be guided by sound medical knowledge to assure that they are developed successfully and that they address important problems in biomedical research or patient care. To date, AI has been applied to a limited number of real-world radiology applications. As AI systems become more pervasive and are applied more broadly, they will benefit from medical knowledge on a larger scale, such as that available through computer-based approaches. A key approach to represent computer-based knowledge in a particular domain is an ontology. As defined in informatics, an ontology defines a domain's terms through their relationships with other terms in the ontology. Those relationships, then, define the terms' semantics, or "meaning." Biomedical ontologies commonly define the relationships between terms and more general terms, and can express causal, part-whole, and anatomic relationships. Ontologies express knowledge in a form that is both human-readable and machine-computable. Some ontologies, such as RSNA's RadLex radiology lexicon, have been applied to applications in clinical practice and research, and may be familiar to many radiologists. This article describes how ontologies can support research and guide emerging applications of AI in radiology, including natural language processing, image-based machine learning, radiomics, and planning.
Collapse
Affiliation(s)
- Ross W Filice
- Department of Radiology, MedStar Georgetown University Hospital, Washington, DC, USA
| | - Charles E Kahn
- Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA, 19104, USA.
| |
Collapse
|
122
|
McCrindle B, Zukotynski K, Doyle TE, Noseworthy MD. A Radiology-focused Review of Predictive Uncertainty for AI Interpretability in Computer-assisted Segmentation. Radiol Artif Intell 2021; 3:e210031. [PMID: 34870219 PMCID: PMC8637228 DOI: 10.1148/ryai.2021210031] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2021] [Revised: 08/09/2021] [Accepted: 08/25/2021] [Indexed: 11/11/2022]
Abstract
The recent advances in and availability of computer hardware, software tools, and massive digital data archives have enabled the rapid development of artificial intelligence (AI) applications. Concerns over whether AI tools can "communicate" decisions to radiologists and primary care physicians are of particular importance, because automated clinical decisions can substantially impact patient outcomes. A challenge facing the clinical implementation of AI stems from the potential lack of trust clinicians have in these predictive models. This review expands on the existing literature on interpretability methods for deep learning and reviews the state-of-the-art methods for predictive uncertainty estimation for computer-assisted segmentation tasks. Last, we discuss how uncertainty can improve predictive performance and model interpretability and can act as a tool to help foster trust. Keywords: Segmentation, Quantification, Ethics, Bayesian Network (BN) © RSNA, 2021.
Affiliation(s)
- Brian McCrindle
- From the Department of Electrical and Computer Engineering (B.M., T.E.D., M.D.N.), Department of Radiology, Faculty of Health Sciences (K.Z., M.D.N.), and School of Biomedical Engineering (K.Z., T.E.D., M.D.N.), McMaster University, 1280 Main St W, Hamilton, ON, Canada L8S 4L8; and Vector Institute for Artificial Intelligence, Toronto, Canada (T.E.D.)
- Katherine Zukotynski
- From the Department of Electrical and Computer Engineering (B.M., T.E.D., M.D.N.), Department of Radiology, Faculty of Health Sciences (K.Z., M.D.N.), and School of Biomedical Engineering (K.Z., T.E.D., M.D.N.), McMaster University, 1280 Main St W, Hamilton, ON, Canada L8S 4L8; and Vector Institute for Artificial Intelligence, Toronto, Canada (T.E.D.)
- Thomas E. Doyle
- From the Department of Electrical and Computer Engineering (B.M., T.E.D., M.D.N.), Department of Radiology, Faculty of Health Sciences (K.Z., M.D.N.), and School of Biomedical Engineering (K.Z., T.E.D., M.D.N.), McMaster University, 1280 Main St W, Hamilton, ON, Canada L8S 4L8; and Vector Institute for Artificial Intelligence, Toronto, Canada (T.E.D.)
- Michael D. Noseworthy
- From the Department of Electrical and Computer Engineering (B.M., T.E.D., M.D.N.), Department of Radiology, Faculty of Health Sciences (K.Z., M.D.N.), and School of Biomedical Engineering (K.Z., T.E.D., M.D.N.), McMaster University, 1280 Main St W, Hamilton, ON, Canada L8S 4L8; and Vector Institute for Artificial Intelligence, Toronto, Canada (T.E.D.)

123
Ghassemi M, Oakden-Rayner L, Beam AL. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit Health 2021; 3:e745-e750. [PMID: 34711379 DOI: 10.1016/s2589-7500(21)00208-9] [Citation(s) in RCA: 270] [Impact Index Per Article: 90.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Revised: 05/25/2021] [Accepted: 08/17/2021] [Indexed: 11/16/2022]
Abstract
The black-box nature of current artificial intelligence (AI) has caused some to question whether AI must be explainable to be used in high-stakes scenarios such as medicine. It has been argued that explainable AI will engender trust with the health-care workforce, provide transparency into the AI decision making process, and potentially mitigate various kinds of bias. In this Viewpoint, we argue that this argument represents a false hope for explainable AI and that current explainability methods are unlikely to achieve these goals for patient-level decision support. We provide an overview of current explainability techniques and highlight how various failure cases can cause problems for decision making for individual patients. In the absence of suitable explainability methods, we advocate for rigorous internal and external validation of AI models as a more direct means of achieving the goals often associated with explainability, and we caution against having explainability be a requirement for clinically deployed models.
Affiliation(s)
- Marzyeh Ghassemi
- Department of Electrical Engineering and Computer Science and Institute for Medical and Evaluative Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; Vector Institute, Toronto, ON, Canada
- Luke Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, SA, Australia
- Andrew L Beam
- CAUSALab and Department of Epidemiology, Harvard T H Chan School of Public Health, Boston, MA, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA.

124
Kalpathy-Cramer J, Patel JB, Bridge C, Chang K. Basic Artificial Intelligence Techniques: Evaluation of Artificial Intelligence Performance. Radiol Clin North Am 2021; 59:941-954. [PMID: 34689879 DOI: 10.1016/j.rcl.2021.06.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Affiliation(s)
- Jayashree Kalpathy-Cramer
- Radiology, Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Boston, MA 02129, USA.
- Jay B Patel
- Radiology, Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Boston, MA 02129, USA
- Christopher Bridge
- Radiology, Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Boston, MA 02129, USA
- Ken Chang
- Radiology, Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Boston, MA 02129, USA

125
Mahapatra D, Poellinger A, Shao L, Reyes M. Interpretability-Driven Sample Selection Using Self Supervised Learning for Disease Classification and Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2548-2562. [PMID: 33625979 DOI: 10.1109/tmi.2021.3061724] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
In supervised learning for medical image analysis, sample selection methodologies are fundamental to attaining optimal system performance promptly and with minimal expert interaction (e.g., label querying in an active learning setup). In this article, we propose a novel sample selection methodology based on deep features, leveraging information contained in interpretability saliency maps. In the absence of ground-truth labels for informative samples, we use a novel self-supervised learning approach to train a classifier that learns to identify the most informative sample in a given batch of images. We demonstrate the benefits of the proposed approach, termed Interpretability-Driven Sample Selection (IDEAL), in an active learning setup aimed at lung disease classification and histopathology image segmentation. We analyze three different approaches to determining sample informativeness from interpretability saliency maps: (i) an observational model stemming from findings of previous uncertainty-based sample selection approaches, (ii) a radiomics-based model, and (iii) a novel data-driven self-supervised approach. We compare IDEAL to other baselines using the publicly available NIH chest X-ray dataset for lung disease classification and a public histopathology segmentation dataset (GLaS), demonstrating the potential of using interpretability information for sample selection in active learning systems. Results show that our proposed self-supervised approach outperforms other approaches in selecting informative samples, leading to state-of-the-art performance with fewer samples.

126
Petch J, Di S, Nelson W. Opening the black box: the promise and limitations of explainable machine learning in cardiology. Can J Cardiol 2021; 38:204-213. [PMID: 34534619 DOI: 10.1016/j.cjca.2021.09.004] [Citation(s) in RCA: 98] [Impact Index Per Article: 32.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Revised: 08/23/2021] [Accepted: 09/08/2021] [Indexed: 11/29/2022] Open
Abstract
Many clinicians remain wary of machine learning due to long-standing concerns about "black box" models. "Black box" is shorthand for models that are sufficiently complex that they are not straightforwardly interpretable to humans. Lack of interpretability in predictive models can undermine trust in those models, especially in health care where so many decisions are literally life and death. There has recently been an explosion of research in the field of explainable machine learning aimed at addressing these concerns. The promise of explainable machine learning is considerable, but it is important for cardiologists who may encounter these techniques in clinical decision support tools or novel research papers to have a critical understanding of both their strengths and their limitations. This paper reviews key concepts and techniques in the field of explainable machine learning as they apply to cardiology. Key concepts reviewed include interpretability versus explainability and global versus local explanations. Techniques demonstrated include permutation importance, surrogate decision trees, local interpretable model-agnostic explanations, and partial dependence plots. We discuss several limitations of explainability techniques, focusing on how the nature of explanations as approximations may omit important information about how black box models work and why they make certain predictions. We conclude by proposing a rule of thumb about when it is appropriate to use black box models with explanations, rather than interpretable models.
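Permutation importance, the first technique this abstract lists, scores a feature by how much a fitted model's performance drops when that feature's values are randomly shuffled, which breaks the feature's link to the target. The sketch below is a minimal, self-contained illustration with an invented toy dataset and a stand-in "model" (nothing here comes from the reviewed paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = (2.0 * X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)

def model_predict(X):
    # Stand-in for a fitted model: a fixed rule matching the
    # data-generating process above.
    return (2.0 * X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def permutation_importance(X, y, n_repeats=20):
    base = accuracy(y, model_predict(X))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            # Shuffle column j to sever its relationship with y.
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(base - accuracy(y, model_predict(Xp)))
        importances.append(float(np.mean(drops)))
    return importances

imp = permutation_importance(X, y)
```

Shuffling the dominant feature collapses accuracy, the weak feature causes a small drop, and the ignored feature causes none; production code would use a library routine such as scikit-learn's `permutation_importance` rather than this hand-rolled loop.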
Affiliation(s)
- Jeremy Petch
- Centre for Data Science and Digital Health, Hamilton Health Sciences; Institute of Health Policy, Management and Evaluation, University of Toronto; Division of Cardiology, Department of Medicine, McMaster University; Population Health Research Institute.
- Shuang Di
- Centre for Data Science and Digital Health, Hamilton Health Sciences; Dalla Lana School of Public Health, University of Toronto
- Walter Nelson
- Centre for Data Science and Digital Health, Hamilton Health Sciences; Department of Statistical Sciences, University of Toronto

127
Hanif AM, Beqiri S, Keane PA, Campbell JP. Applications of interpretability in deep learning models for ophthalmology. Curr Opin Ophthalmol 2021; 32:452-458. [PMID: 34231530 PMCID: PMC8373813 DOI: 10.1097/icu.0000000000000780] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
PURPOSE OF REVIEW In this article, we introduce the concept of model interpretability, review its applications in deep learning models for clinical ophthalmology, and discuss its role in the integration of artificial intelligence in healthcare. RECENT FINDINGS The advent of deep learning in medicine has introduced models with remarkable accuracy. However, the inherent complexity of these models undermines their users' ability to understand, debug and ultimately trust them in clinical practice. Novel methods are being increasingly explored to improve models' 'interpretability' and draw clearer associations between their outputs and features in the input dataset. In the field of ophthalmology, interpretability methods have enabled users to make informed adjustments, identify clinically relevant imaging patterns, and predict outcomes in deep learning models. SUMMARY Interpretability methods support the transparency necessary to implement, operate and modify complex deep learning models. These benefits are becoming increasingly demonstrated in models for clinical ophthalmology. As quality standards for deep learning models used in healthcare continue to evolve, interpretability methods may prove influential in their path to regulatory approval and acceptance in clinical practice.
Affiliation(s)
- Adam M. Hanif
- Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Sara Beqiri
- University College London Division of Medicine, London, United Kingdom
- Pearse A. Keane
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- University College London Institute of Ophthalmology, United Kingdom

128
Mahmood U, Shrestha R, Bates DDB, Mannelli L, Corrias G, Erdi YE, Kanan C. Detecting Spurious Correlations With Sanity Tests for Artificial Intelligence Guided Radiology Systems. Front Digit Health 2021; 3:671015. [PMID: 34713144 PMCID: PMC8521929 DOI: 10.3389/fdgth.2021.671015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Accepted: 06/29/2021] [Indexed: 11/23/2022] Open
Abstract
Artificial intelligence (AI) has been successful at solving numerous problems in machine perception. In radiology, AI systems are rapidly evolving and show progress in guiding treatment decisions, diagnosing, localizing disease on medical images, and improving radiologists' efficiency. A critical component to deploying AI in radiology is to gain confidence in a developed system's efficacy and safety. The current gold standard approach is to conduct an analytical validation of performance on a generalization dataset from one or more institutions, followed by a clinical validation study of the system's efficacy during deployment. Clinical validation studies are time-consuming, and best practices dictate limited re-use of analytical validation data, so it is ideal to know ahead of time if a system is likely to fail analytical or clinical validation. In this paper, we describe a series of sanity tests to identify when a system performs well on development data for the wrong reasons. We illustrate the sanity tests' value by designing a deep learning system to classify pancreatic cancer seen in computed tomography scans.
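One generic form of such a sanity test is to re-evaluate a model after randomizing a suspect input: a large accuracy drop reveals that the model leaned on that spurious cue rather than the underlying signal. The sketch below is built entirely on invented toy assumptions (the data, the "scanner tag" shortcut, and the nearest-centroid model are illustrative stand-ins, not the authors' pancreatic-CT system):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy development data with a shortcut: feature 1 is a "scanner tag"
# that happens to correlate perfectly with the label, while feature 0
# carries only a weak real signal.
n = 400
y = rng.integers(0, 2, size=n)
signal = 0.2 * y + rng.normal(size=n)   # weak genuine signal
tag = y.astype(float)                   # spurious shortcut
X_dev = np.column_stack([signal, tag])

def fit_nearest_centroid(X, y):
    # "Train" by storing the per-class mean feature vector.
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(centroids, X):
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in (0, 1)])
    return d.argmin(axis=0)

model = fit_nearest_centroid(X_dev, y)
dev_acc = float(np.mean(predict(model, X_dev) == y))

# Sanity test: shuffle the suspect feature and re-evaluate. If performance
# collapses, the model was exploiting the shortcut, not the signal.
X_test = X_dev.copy()
X_test[:, 1] = rng.permutation(X_test[:, 1])
sanity_acc = float(np.mean(predict(model, X_test) == y))
```

Here the development accuracy is near-perfect while the sanity-test accuracy falls toward chance, flagging the model as likely to fail validation on data where the shortcut is absent.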
Affiliation(s)
- Usman Mahmood
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States
- Robik Shrestha
- Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, United States
- David D. B. Bates
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, United States
- Lorenzo Mannelli
- Institute of Research and Medical Care (IRCCS) SDN, Institute of Diagnostic and Nuclear Research, Naples, Italy
- Giuseppe Corrias
- Department of Radiology, University of Cagliari, Cagliari, Italy
- Yusuf Emre Erdi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States
- Christopher Kanan
- Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, United States

129
Alelyani M, Alamri S, Alqahtani MS, Musa A, Almater H, Alqahtani N, Alshahrani F, Alelyani S. Radiology Community Attitude in Saudi Arabia about the Applications of Artificial Intelligence in Radiology. Healthcare (Basel) 2021; 9:healthcare9070834. [PMID: 34356212 PMCID: PMC8307220 DOI: 10.3390/healthcare9070834] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Revised: 06/13/2021] [Accepted: 06/26/2021] [Indexed: 12/18/2022] Open
Abstract
Artificial intelligence (AI) is a broad, umbrella term that encompasses the theory and development of computer systems able to perform tasks normally requiring human intelligence. The aim of this study is to assess the radiology community's attitude in Saudi Arabia toward the applications of AI. Methods: Data for this study were collected using electronic questionnaires in 2019 and 2020. The study included a total of 714 participants. Data analysis was performed using SPSS Statistics (version 25). Results: The majority of the participants (61.2%) had read or heard about the role of AI in radiology. We also found that radiologists had statistically different responses and tended to read more about AI compared to all other specialists. In addition, 82% of the participants thought that AI must be included in the curriculum of medical and allied health colleges, and 86% of the participants agreed that AI would be essential in the future. Even though human–machine interaction was considered to be one of the most important skills in the future, 89% of the participants thought that AI would never replace radiologists. Conclusion: Because AI plays a vital role in radiology, it is important to ensure that radiologists and radiographers have at least a minimum understanding of the technology. Our findings show an acceptable level of knowledge regarding AI technology and that AI applications should be included in the curriculum of the medical and health sciences colleges.
Affiliation(s)
- Magbool Alelyani
- Department of Radiological Sciences, King Khalid University, Abha 61421, Saudi Arabia
- Sultan Alamri
- Department of Radiological Sciences, Taif University, Taif 21944, Saudi Arabia
- Mohammed S. Alqahtani
- Department of Radiological Sciences, King Khalid University, Abha 61421, Saudi Arabia
- Alamin Musa
- Department of Radiological Sciences, King Khalid University, Abha 61421, Saudi Arabia
- Hajar Almater
- Department of Radiological Sciences, King Khalid University, Abha 61421, Saudi Arabia
- Nada Alqahtani
- Department of Radiological Sciences, King Khalid University, Abha 61421, Saudi Arabia
- Fay Alshahrani
- Department of Radiological Sciences, King Khalid University, Abha 61421, Saudi Arabia
- Salem Alelyani
- Center for Artificial Intelligence (CAI), King Khalid University, Abha 61421, Saudi Arabia
- College of Computer Science, King Khalid University, Abha 61421, Saudi Arabia

130
Li MD, Torriani M. Radiologist-level Scaphoid Fracture Detection: Next Steps for Clinical Application. Radiol Artif Intell 2021; 3:e210111. [PMID: 34350417 PMCID: PMC8328100 DOI: 10.1148/ryai.2021210111] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2021] [Revised: 04/29/2021] [Accepted: 04/29/2021] [Indexed: 11/11/2022]
Affiliation(s)
- Matthew D. Li
- From the Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit St, Yawkey 6E, Boston, MA 02114
- Martin Torriani
- From the Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit St, Yawkey 6E, Boston, MA 02114

131
Popa SL, Ismaiel A, Cristina P, Cristina M, Chiarioni G, David L, Dumitrascu DL. Non-Alcoholic Fatty Liver Disease: Implementing Complete Automated Diagnosis and Staging. A Systematic Review. Diagnostics (Basel) 2021; 11:diagnostics11061078. [PMID: 34204822 PMCID: PMC8231502 DOI: 10.3390/diagnostics11061078] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2021] [Revised: 06/05/2021] [Accepted: 06/10/2021] [Indexed: 12/12/2022] Open
Abstract
Background: Non-alcoholic fatty liver disease (NAFLD) is a fast-growing pathology around the world and is considered the most common chronic liver disease. It is diagnosed based on the presence of steatosis in more than 5% of hepatocytes without significant alcohol consumption. This review aims to provide a comprehensive overview of current studies of artificial intelligence (AI) applications that may help physicians implement complete automated NAFLD diagnosis and staging. Methods: The PubMed, EMBASE, Cochrane Library, and WILEY databases were screened for relevant publications on AI applications in NAFLD. The search terms included: (non-alcoholic fatty liver disease OR NAFLD) AND (artificial intelligence OR machine learning OR neural networks OR deep learning OR automated diagnosis OR computer-aided diagnosis OR digital pathology OR automated ultrasound OR automated computer tomography OR automated magnetic imaging OR electronic health records). Results: Our search identified 37 articles about automated NAFLD diagnosis, of which 15 analyzed imaging techniques, 15 analyzed digital pathology, and 7 analyzed electronic health records (EHR). All studies included in this review show an accurate capacity for automated diagnosis and staging of NAFLD using AI-based software. Conclusions: We found significant evidence demonstrating that implementing a complete automated system for NAFLD diagnosis, staging, and risk stratification is currently possible, considering the accuracy, sensitivity, and specificity of available AI-based tools.
Affiliation(s)
- Stefan L. Popa
- 2nd Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400006 Cluj-Napoca, Romania
- Abdulrahman Ismaiel
- 2nd Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400006 Cluj-Napoca, Romania
- Pop Cristina
- Department of Pharmacology, Physiology and Pathophysiology, Faculty of Pharmacy, “Iuliu Hațieganu” University of Medicine and Pharmacy, 400349 Cluj-Napoca, Romania
- Mogosan Cristina
- Department of Pharmacology, Physiology and Pathophysiology, Faculty of Pharmacy, “Iuliu Hațieganu” University of Medicine and Pharmacy, 400349 Cluj-Napoca, Romania
- Giuseppe Chiarioni
- Division of Gastroenterology, University of Verona, 1-37126 AOUI Verona, Italy
- Liliana David
- 2nd Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400006 Cluj-Napoca, Romania
- Dan L. Dumitrascu
- 2nd Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400006 Cluj-Napoca, Romania

132
Hendrix N, Hauber B, Lee CI, Bansal A, Veenstra DL. Artificial intelligence in breast cancer screening: primary care provider preferences. J Am Med Inform Assoc 2021; 28:1117-1124. [PMID: 33367670 PMCID: PMC8200265 DOI: 10.1093/jamia/ocaa292] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Revised: 10/05/2020] [Accepted: 11/10/2020] [Indexed: 11/15/2022] Open
Abstract
BACKGROUND Artificial intelligence (AI) is increasingly being proposed for use in medicine, including breast cancer screening (BCS). Little is known, however, about referring primary care providers' (PCPs') preferences for this technology. METHODS We identified the most important attributes of AI BCS for ordering PCPs using qualitative interviews: sensitivity, specificity, radiologist involvement, understandability of AI decision-making, supporting evidence, and diversity of training data. We invited US-based PCPs to participate in an internet-based experiment designed to force participants to trade off among the attributes of hypothetical AI BCS products. Responses were analyzed with random parameters logit and latent class models to assess how different attributes affect the choice to recommend AI-enhanced screening. RESULTS Ninety-one PCPs participated. Sensitivity was most important, and most PCPs viewed radiologist participation in mammography interpretation as important. Other important attributes were specificity, understandability of AI decision-making, and diversity of data. We identified 3 classes of respondents: "Sensitivity First" (41%) found sensitivity to be more than twice as important as other attributes; "Against AI Autonomy" (24%) wanted radiologists to confirm every image; "Uncertain Trade-Offs" (35%) viewed most attributes as having similar importance. A majority (76%) accepted the use of AI in a "triage" role that would allow it to filter out likely negatives without radiologist confirmation. CONCLUSIONS AND RELEVANCE Sensitivity was the most important attribute overall, but other key attributes should be addressed to produce clinically acceptable products. We also found that most PCPs accept the use of AI to make determinations about likely negative mammograms without radiologist confirmation.
Affiliation(s)
- Nathaniel Hendrix
- The Comparative Health Outcomes, Policy & Economics (CHOICE) Institute, University of Washington School of Pharmacy, Seattle, Washington, USA
- Brett Hauber
- The Comparative Health Outcomes, Policy & Economics (CHOICE) Institute, University of Washington School of Pharmacy, Seattle, Washington, USA
- RTI Health Solutions, Research Triangle Park, North Carolina, USA
- Christoph I Lee
- Department of Radiology, University of Washington School of Medicine, Seattle, Washington, USA
- Department of Health Services, University of Washington School of Public Health, Seattle, Washington, USA
- Hutchinson Institute for Cancer Outcomes Research, Seattle, Washington, USA
- Aasthaa Bansal
- The Comparative Health Outcomes, Policy & Economics (CHOICE) Institute, University of Washington School of Pharmacy, Seattle, Washington, USA
- David L Veenstra
- The Comparative Health Outcomes, Policy & Economics (CHOICE) Institute, University of Washington School of Pharmacy, Seattle, Washington, USA

133
Born J, Beymer D, Rajan D, Coy A, Mukherjee VV, Manica M, Prasanna P, Ballah D, Guindy M, Shaham D, Shah PL, Karteris E, Robertus JL, Gabrani M, Rosen-Zvi M. On the role of artificial intelligence in medical imaging of COVID-19. PATTERNS (NEW YORK, N.Y.) 2021; 2:100269. [PMID: 33969323 PMCID: PMC8086827 DOI: 10.1016/j.patter.2021.100269] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Although a plethora of research articles on AI methods for COVID-19 medical imaging has been published, their clinical value remains unclear. We conducted the largest systematic review of the literature addressing the utility of AI in imaging for COVID-19 patient care. Through keyword searches on PubMed and preprint servers throughout 2020, we identified 463 manuscripts and performed a systematic meta-analysis to assess their technical merit and clinical relevance. Our analysis reveals a significant disparity between the clinical and AI communities, both in the imaging modalities studied (AI experts neglected CT and ultrasound, favoring X-ray) and in the tasks performed (71.9% of AI papers centered on diagnosis). The vast majority of manuscripts were found to be deficient regarding potential use in clinical practice, but 2.7% (n = 12) of publications were assigned a high maturity level and are summarized in greater detail. We provide an itemized discussion of the challenges in developing clinically relevant AI solutions, with recommendations and remedies.
Affiliation(s)
- Jannis Born
- IBM Research Europe, Zurich, Switzerland
- Department for Biosystems Science & Engineering, ETH Zurich, Zurich, Switzerland
- Adam Coy
- IBM Almaden Research Center, San Jose, CA, USA
- Vision Radiology, Dallas, TX, USA
- Prasanth Prasanna
- IBM Almaden Research Center, San Jose, CA, USA
- Department of Radiology and Imaging Sciences, University of Utah Health Sciences Center, Salt Lake City, UT, USA
- Deddeh Ballah
- IBM Almaden Research Center, San Jose, CA, USA
- Department of Radiology, Seton Medical Center, Daly City, CA, USA
- Michal Guindy
- Assuta Medical Centres Radiology, Tel-Aviv, Israel
- Ben-Gurion University Medical School, Be'er Sheva, Israel
- Dorith Shaham
- Department of Radiology, Hadassah-Hebrew University Medical Center, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Pallav L. Shah
- Royal Brompton and Harefield Hospitals, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Chelsea & Westminster Hospital, London, UK
- National Heart & Lung Institute, Imperial College London, London, UK
- Emmanouil Karteris
- College of Health, Medicine and Life Sciences, Brunel University London, London, UK
- Jan L. Robertus
- Royal Brompton and Harefield Hospitals, Guy's and St Thomas' NHS Foundation Trust, London, UK
- National Heart & Lung Institute, Imperial College London, London, UK
- Michal Rosen-Zvi
- IBM Research Haifa, Haifa, Israel
- Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel

134
Baker S, Xiang W, Atkinson I. Hybridized neural networks for non-invasive and continuous mortality risk assessment in neonates. Comput Biol Med 2021; 134:104521. [PMID: 34111664 DOI: 10.1016/j.compbiomed.2021.104521] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Revised: 05/06/2021] [Accepted: 05/25/2021] [Indexed: 11/19/2022]
Abstract
Premature birth is the primary risk factor in neonatal deaths, with the majority of extremely premature babies cared for in neonatal intensive care units (NICUs). Mortality risk prediction in this setting can greatly improve patient outcomes and resource utilization. However, existing schemes often require laborious medical testing and calculation, and are typically only calculated once at admission. In this work, we propose a shallow hybrid neural network for the prediction of mortality risk in 3-day, 7-day, and 14-day risk windows using only birthweight, gestational age, sex, and heart rate (HR) and respiratory rate (RR) information from a 12-h window. As such, this scheme is capable of continuously updating mortality risk assessment, enabling analysis of health trends and responses to treatment. The highest performing scheme was the network that considered mortality risk within 3 days, with this scheme outperforming state-of-the-art works in the literature and achieving an area under the receiver operating characteristic curve (AUROC) of 0.9336 with a standard deviation of 0.0337 across 5 folds of cross-validation. As such, we conclude that our proposed scheme could readily be used for continuously-updating mortality risk prediction in NICU environments.
Affiliation(s)
- Stephanie Baker
- College of Science & Engineering, James Cook University, Cairns, Queensland, 4878, Australia.
- Wei Xiang
- School of Engineering and Mathematical Sciences, La Trobe University, Melbourne, Victoria, 3086, Australia
- Ian Atkinson
- eResearch Centre, James Cook University, Townsville, Queensland, 4811, Australia

135
Wang ZJ. Probing an AI regression model for hand bone age determination using gradient-based saliency mapping. Sci Rep 2021; 11:10610. [PMID: 34012111 PMCID: PMC8134559 DOI: 10.1038/s41598-021-90157-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Accepted: 05/04/2021] [Indexed: 11/21/2022] Open
Abstract
Understanding how a neural network makes decisions holds significant value for users. For this reason, gradient-based saliency mapping was tested on an artificial intelligence (AI) regression model for determining hand bone age from X-ray radiographs. The partial derivative (PD) of the inferred age with respect to input image intensity at each pixel served as a saliency marker to find sensitive areas contributing to the outcome. The mean of the absolute PD values was calculated for five anatomical regions of interest, and one hundred test images were evaluated with this procedure. The PD maps suggested that the AI model employed a holistic approach in determining hand bone age, with the wrist area being the most important at early ages. However, this importance decreased with increasing age. The middle section of the metacarpal bones was the least important area for bone age determination. The muscular region between the first and second metacarpal bones also exhibited high PD values but contained no bone age information, suggesting a region of vulnerability in age determination. An end-to-end gradient-based saliency map can be obtained from a black box regression AI model and provide insight into how the model makes decisions.
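The per-pixel partial derivative described above can be approximated for any black-box regression function by finite differences, without access to the model's internals. The sketch below uses an invented toy "model" in place of the paper's trained CNN, purely to show the mechanics of building a PD saliency map and scoring a region of interest:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(img):
    # Invented stand-in for the trained regression network: a smooth
    # scalar "age" score from a spatially weighted sum of intensities.
    # The weight map is zero along the top row and largest bottom-left.
    w = np.outer(np.linspace(0, 1, img.shape[0]),
                 np.linspace(1, 0, img.shape[1]))
    return float(np.tanh((w * img).sum() / img.size) * 100.0)

def saliency_map(f, img, eps=1e-3):
    """Per-pixel partial derivative of f with respect to input
    intensity, estimated by central finite differences."""
    sal = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            up, down = img.copy(), img.copy()
            up[i, j] += eps
            down[i, j] -= eps
            sal[i, j] = (f(up) - f(down)) / (2 * eps)
    return sal

img = rng.random((8, 8))
sal = saliency_map(model, img)

# Mean absolute PD over a region of interest, mirroring the paper's
# per-region scoring procedure (the ROI here is arbitrary).
roi_score = float(np.abs(sal[:4, :4]).mean())
```

For a real network one would use autodiff (e.g., the framework's gradient of output with respect to input) rather than this O(pixels) finite-difference loop, but the resulting map has the same interpretation.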
Affiliation(s)
- Zhiyue J Wang
- Department of Radiology, Children's Health and University of Texas Southwestern Medical Center, 1935 Medical District Drive, F1-02, Dallas, TX, 75235, USA.

136
Alhasan M, Hasaneen M. Digital imaging, technologies and artificial intelligence applications during COVID-19 pandemic. Comput Med Imaging Graph 2021; 91:101933. [PMID: 34082281 PMCID: PMC8123377 DOI: 10.1016/j.compmedimag.2021.101933] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 02/15/2021] [Accepted: 04/27/2021] [Indexed: 12/13/2022]
Abstract
The advancement of technology has remained a central interest for humankind throughout the past decades, with tech enterprises offering a stream of innovation to address universal healthcare concerns. The novel coronavirus gained a substantial foothold across the planet and has been combatted by digital interventions across afflicted geographical boundaries and territories. This study aims to explore the trends of modern healthcare technologies and artificial intelligence (AI) during the COVID-19 crisis; define the concepts and clinical role of AI in the mitigation of COVID-19; investigate and correlate the efficacy of AI-enabled technology in medical imaging during COVID-19; and determine the advantages, drawbacks, and challenges of AI during the pandemic. The paper applied a systematic review approach using a deliberated research protocol and a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow chart. Digital technologies can coordinate COVID-19 responses in a cascade fashion that extends from the clinical care facility to the exterior of the pending viral epicenter, with healthcare robotics, aerial drones, and the Internet of Things as evidentiary examples. PCR tests and medical imaging are the frontier diagnostics of COVID-19; computed tomography helped to correct the accuracy variation of PCR tests at a clinical sensitivity of 98%. Artificial intelligence can enable autonomous COVID-19 responses using techniques such as machine learning. Technology can be an endless source of innovation and opportunity when used effectively, and scientists can utilize it to address global concerns once thought beyond the realm of tangible possibility. Digital interventions have enhanced the responses to COVID-19, magnified the role of medical imaging amid the crisis, and exposed healthcare professionals to the opportunity of contactless care.
Affiliation(s)
- Mustafa Alhasan: Radiography and Medical Imaging Department, Fatima College of Health Sciences, United Arab Emirates; Radiologic Technology Program, Applied Medical Sciences College, Jordan University of Science and Technology, Jordan.
- Mohamed Hasaneen: Radiography and Medical Imaging Department, Fatima College of Health Sciences, United Arab Emirates.

137
Nowak S, Mesropyan N, Faron A, Block W, Reuter M, Attenberger UI, Luetkens JA, Sprinkart AM. Detection of liver cirrhosis in standard T2-weighted MRI using deep transfer learning. Eur Radiol 2021; 31:8807-8815. [PMID: 33974149 PMCID: PMC8523404 DOI: 10.1007/s00330-021-07858-1] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Revised: 02/12/2021] [Accepted: 03/10/2021] [Indexed: 12/17/2022]
Abstract
Objectives To investigate the diagnostic performance of deep transfer learning (DTL) to detect liver cirrhosis from clinical MRI. Methods The dataset for this retrospective analysis consisted of 713 (343 female) patients who underwent liver MRI between 2017 and 2019. In total, 553 of these subjects had a confirmed diagnosis of liver cirrhosis, while the remainder had no history of liver disease. T2-weighted MRI slices at the level of the caudate lobe were manually exported for DTL analysis. Data were randomly split into training, validation, and test sets (70%/15%/15%). A ResNet50 convolutional neural network (CNN) pre-trained on the ImageNet archive was used for cirrhosis detection with and without upstream liver segmentation. Classification performance for detection of liver cirrhosis was compared to two radiologists with different levels of experience (4th-year resident, board-certified radiologist). Segmentation was performed using a U-Net architecture built on a pre-trained ResNet34 encoder. Differences in classification accuracy were assessed by the χ2-test. Results Dice coefficients for automatic segmentation were above 0.98 for both validation and test data. The classification accuracy of liver cirrhosis on validation (vACC) and test (tACC) data for the DTL pipeline with upstream liver segmentation (vACC = 0.99, tACC = 0.96) was significantly higher compared to the resident (vACC = 0.88, p < 0.01; tACC = 0.91, p = 0.01) and to the board-certified radiologist (vACC = 0.96, p < 0.01; tACC = 0.90, p < 0.01). Conclusion This proof-of-principle study demonstrates the potential of DTL for detecting cirrhosis based on standard T2-weighted MRI. The presented method for image-based diagnosis of liver cirrhosis demonstrated expert-level classification accuracy. 
Key Points • A pipeline consisting of two convolutional neural networks (CNNs) pre-trained on an extensive natural image database (ImageNet archive) enables detection of liver cirrhosis on standard T2-weighted MRI. • High classification accuracy can be achieved even without altering the pre-trained parameters of the convolutional neural networks. • Other abdominal structures apart from the liver were relevant for detection when the network was trained on unsegmented images. Supplementary Information The online version contains supplementary material available at 10.1007/s00330-021-07858-1.
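The study above compares the classification accuracy of the deep transfer learning pipeline against two radiologists using the χ²-test. As a hedged illustration only (not the authors' code, and with hypothetical counts), the Pearson χ² statistic for a 2×2 contingency table of correct versus incorrect classifications can be computed directly:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    den = (a + b) * (c + d) * (a + c) * (b + d)
    if den == 0:
        raise ValueError("a marginal total is zero")
    return n * (a * d - b * c) ** 2 / den

# Hypothetical counts: rows = rater (model, resident),
# columns = (correct, incorrect) on the same test set.
stat = chi2_2x2(103, 4, 97, 10)
print(round(stat, 3))
```

Note that for two raters scored on the same cases, a paired test such as McNemar's is often preferred; the unpaired table above is only a sketch of the comparison idea.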
Affiliation(s)
- Sebastian Nowak, Narine Mesropyan, Anton Faron, Wolfgang Block, Ulrike I Attenberger, Julian A Luetkens, Alois M Sprinkart: Department of Diagnostic and Interventional Radiology, Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn (Universitätsklinikum Bonn), Venusberg-Campus 1, 53127, Bonn, Germany
- Martin Reuter: Image Analysis, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany; A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA

138
Caspers J. Translation of predictive modeling and AI into clinics: a question of trust. Eur Radiol 2021; 31:4947-4948. [PMID: 33895859 PMCID: PMC8213549 DOI: 10.1007/s00330-021-07977-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2021] [Revised: 03/03/2021] [Accepted: 04/01/2021] [Indexed: 11/26/2022]
Affiliation(s)
- Julian Caspers: University Düsseldorf, Medical Faculty, Department of Diagnostic and Interventional Radiology, D-40225, Düsseldorf, Germany

139
Dwivedi K, Sharkey M, Condliffe R, Uthoff JM, Alabed S, Metherall P, Lu H, Wild JM, Hoffman EA, Swift AJ, Kiely DG. Pulmonary Hypertension in Association with Lung Disease: Quantitative CT and Artificial Intelligence to the Rescue? State-of-the-Art Review. Diagnostics (Basel) 2021; 11:diagnostics11040679. [PMID: 33918838 PMCID: PMC8070579 DOI: 10.3390/diagnostics11040679] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 04/05/2021] [Accepted: 04/05/2021] [Indexed: 12/24/2022] Open
Abstract
Accurate phenotyping of patients with pulmonary hypertension (PH) is an integral part of informing disease classification, treatment, and prognosis. The impact of lung disease on PH outcomes and response to treatment remains a challenging area with limited progress. Imaging with computed tomography (CT) plays an important role in patients with suspected PH when assessing for parenchymal lung disease; however, current assessments are limited by their semi-qualitative nature. Quantitative chest CT (QCT) allows numerical quantification of lung parenchymal disease beyond subjective visual assessment. This has facilitated advances in radiological assessment and clinical correlation of a range of lung diseases including emphysema, interstitial lung disease, and coronavirus disease 2019 (COVID-19). Artificial intelligence approaches have the potential to facilitate rapid quantitative assessments. Benefits of cross-sectional imaging include ease and speed of scan acquisition, repeatability, and the potential for novel insights beyond visual assessment alone. Potential clinical benefits include improved phenotyping and prediction of treatment response and survival. Artificial intelligence approaches also have the potential to aid more focused study of pulmonary arterial hypertension (PAH) therapies by identifying more homogeneous subgroups of patients with lung disease. This state-of-the-art review summarizes recent QCT developments and potential applications in patients with PH with a focus on lung disease.
Affiliation(s)
- Krit Dwivedi (corresponding author): Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK
- Michael Sharkey: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK; Radiology Department, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield S10 2JF, UK
- Robin Condliffe: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK; Sheffield Pulmonary Vascular Disease Unit, Royal Hallamshire Hospital, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield S10 2JF, UK
- Johanna M. Uthoff: Department of Computer Science, University of Sheffield, Sheffield S1 4DP, UK
- Samer Alabed: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK
- Peter Metherall: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK; Radiology Department, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield S10 2JF, UK
- Haiping Lu: Department of Computer Science, University of Sheffield, Sheffield S1 4DP, UK; INSIGNEO, Institute for In Silico Medicine, University of Sheffield, Sheffield S1 3JD, UK
- Jim M. Wild: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK; INSIGNEO, Institute for In Silico Medicine, University of Sheffield, Sheffield S1 3JD, UK
- Eric A. Hoffman: Advanced Pulmonary Physiomic Imaging Laboratory, University of Iowa, C748 GH, Iowa City, IA 52242, USA
- Andrew J. Swift: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK; Radiology Department, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield S10 2JF, UK; INSIGNEO, Institute for In Silico Medicine, University of Sheffield, Sheffield S1 3JD, UK
- David G. Kiely: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK; Sheffield Pulmonary Vascular Disease Unit, Royal Hallamshire Hospital, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield S10 2JF, UK; INSIGNEO, Institute for In Silico Medicine, University of Sheffield, Sheffield S1 3JD, UK

140
Signoroni A, Savardi M, Benini S, Adami N, Leonardi R, Gibellini P, Vaccher F, Ravanelli M, Borghesi A, Maroldi R, Farina D. BS-Net: Learning COVID-19 pneumonia severity on a large chest X-ray dataset. Med Image Anal 2021; 71:102046. [PMID: 33862337 PMCID: PMC8010334 DOI: 10.1016/j.media.2021.102046] [Citation(s) in RCA: 53] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Revised: 02/04/2021] [Accepted: 03/17/2021] [Indexed: 12/22/2022]
Abstract
In this work we design an end-to-end deep learning architecture for predicting, on Chest X-rays images (CXR), a multi-regional score conveying the degree of lung compromise in COVID-19 patients. Such semi-quantitative scoring system, namely Brixia score, is applied in serial monitoring of such patients, showing significant prognostic value, in one of the hospitals that experienced one of the highest pandemic peaks in Italy. To solve such a challenging visual task, we adopt a weakly supervised learning strategy structured to handle different tasks (segmentation, spatial alignment, and score estimation) trained with a “from-the-part-to-the-whole” procedure involving different datasets. In particular, we exploit a clinical dataset of almost 5,000 CXR annotated images collected in the same hospital. Our BS-Net demonstrates self-attentive behavior and a high degree of accuracy in all processing stages. Through inter-rater agreement tests and a gold standard comparison, we show that our solution outperforms single human annotators in rating accuracy and consistency, thus supporting the possibility of using this tool in contexts of computer-assisted monitoring. Highly resolved (super-pixel level) explainability maps are also generated, with an original technique, to visually help the understanding of the network activity on the lung areas. We also consider other scores proposed in literature and provide a comparison with a recently proposed non-specific approach. We eventually test the performance robustness of our model on an assorted public COVID-19 dataset, for which we also provide Brixia score annotations, observing good direct generalization and fine-tuning capabilities that highlight the portability of BS-Net in other clinical settings. The CXR dataset along with the source code and the trained model are publicly released for research purposes.
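The Brixia score that BS-Net predicts grades six lung zones (upper, middle, and lower zones of each lung) from 0 to 3 and sums them into a global 0-18 severity score. A minimal sketch of that aggregation rule (the scoring convention only, not the authors' network):

```python
def brixia_global_score(zone_scores):
    """Aggregate per-zone Brixia scores (six lung zones, each 0-3)
    into the global 0-18 severity score."""
    if len(zone_scores) != 6:
        raise ValueError("Brixia scoring uses exactly six lung zones")
    if any(not 0 <= s <= 3 for s in zone_scores):
        raise ValueError("each zone score must be in 0..3")
    return sum(zone_scores)

# Example: moderate involvement confined to the lower zones.
print(brixia_global_score([0, 0, 2, 0, 0, 3]))  # -> 5
```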
Affiliation(s)
- Alberto Signoroni, Mattia Savardi, Sergio Benini, Nicola Adami, Riccardo Leonardi: Department of Information Engineering, University of Brescia, Brescia, Italy
- Paolo Gibellini, Filippo Vaccher, Marco Ravanelli, Andrea Borghesi, Roberto Maroldi, Davide Farina: Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy

141
Knop M, Weber S, Mueller M, Niehaves B. Human Factors and Technological Characteristics Influencing the Interaction with AI-enabled Clinical Decision Support Systems: A Literature Review (Preprint). JMIR Hum Factors 2021; 9:e28639. [PMID: 35323118 PMCID: PMC8990344 DOI: 10.2196/28639] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 06/02/2021] [Accepted: 02/07/2022] [Indexed: 01/22/2023] Open
Abstract
Background The digitization and automation of diagnostics and treatments promise to alter the quality of health care and improve patient outcomes, whereas the undersupply of medical personnel, high workload on medical professionals, and medical case complexity increase. Clinical decision support systems (CDSSs) have been proven to help medical professionals in their everyday work through their ability to process vast amounts of patient information. However, comprehensive adoption is partially disrupted by specific technological and personal characteristics. With the rise of artificial intelligence (AI), CDSSs have become an adaptive technology with human-like capabilities and are able to learn and change their characteristics over time. However, research has not reflected on the characteristics and factors essential for effective collaboration between human actors and AI-enabled CDSSs. Objective Our study aims to summarize the factors influencing effective collaboration between medical professionals and AI-enabled CDSSs. These factors are essential for medical professionals, management, and technology designers to reflect on the adoption, implementation, and development of an AI-enabled CDSS. Methods We conducted a literature review including 3 different meta-databases, screening over 1000 articles and including 101 articles for full-text assessment. Of the 101 articles, 7 (6.9%) met our inclusion criteria and were analyzed for our synthesis. Results We identified the technological characteristics and human factors that appear to have an essential effect on the collaboration of medical professionals and AI-enabled CDSSs in accordance with our research objective, namely, training data quality, performance, explainability, adaptability, medical expertise, technological expertise, personality, cognitive biases, and trust. Comparing our results with those from research on non-AI CDSSs, some characteristics and factors retain their importance, whereas others gain or lose relevance owing to the uniqueness of human-AI interactions. However, only a few (1/7, 14%) studies have mentioned the theoretical foundations and patient outcomes related to AI-enabled CDSSs. Conclusions Our study provides a comprehensive overview of the relevant characteristics and factors that influence the interaction and collaboration between medical professionals and AI-enabled CDSSs. Rather limited theoretical foundations currently hinder the possibility of creating adequate concepts and models to explain and predict the interrelations between these characteristics and factors. For an appropriate evaluation of the human-AI collaboration, patient outcomes and the role of patients in the decision-making process should be considered.
Affiliation(s)
- Michael Knop, Sebastian Weber, Marius Mueller, Bjoern Niehaves: Department of Information Systems, University of Siegen, Siegen, Germany

142
Trustworthiness of Artificial Intelligence Models in Radiology and the Role of Explainability. J Am Coll Radiol 2021; 18:1160-1162. [PMID: 33676912 DOI: 10.1016/j.jacr.2021.02.008] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2020] [Revised: 02/03/2021] [Accepted: 02/08/2021] [Indexed: 11/24/2022]
143
Omoumi P, Ducarouge A, Tournier A, Harvey H, Kahn CE, Louvet-de Verchère F, Pinto Dos Santos D, Kober T, Richiardi J. To buy or not to buy-evaluating commercial AI solutions in radiology (the ECLAIR guidelines). Eur Radiol 2021; 31:3786-3796. [PMID: 33666696 PMCID: PMC8128726 DOI: 10.1007/s00330-020-07684-x] [Citation(s) in RCA: 87] [Impact Index Per Article: 29.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Revised: 12/09/2020] [Accepted: 12/29/2020] [Indexed: 02/07/2023]
Abstract
Artificial intelligence (AI) has made impressive progress over the past few years, including many applications in medical imaging. Numerous commercial solutions based on AI techniques are now available for sale, forcing radiology practices to learn how to properly assess these tools. While several guidelines describing good practices for conducting and reporting AI-based research in medicine and radiology have been published, fewer efforts have focused on recommendations addressing the key questions to consider when critically assessing AI solutions before purchase. Commercial AI solutions are typically complicated software products, for the evaluation of which many factors are to be considered. In this work, authors from academia and industry have joined efforts to propose a practical framework that will help stakeholders evaluate commercial AI solutions in radiology (the ECLAIR guidelines) and reach an informed decision. Topics to consider in the evaluation include the relevance of the solution from the point of view of each stakeholder, issues regarding performance and validation, usability and integration, regulatory and legal aspects, and financial and support services. Key Points • Numerous commercial solutions based on artificial intelligence techniques are now available for sale, and radiology practices have to learn how to properly assess these tools. • We propose a framework focusing on practical points to consider when assessing an AI solution in medical imaging, allowing all stakeholders to conduct relevant discussions with manufacturers and reach an informed decision as to whether to purchase an AI commercial solution for imaging applications. • Topics to consider in the evaluation include the relevance of the solution from the point of view of each stakeholder, issues regarding performance and validation, usability and integration, regulatory and legal aspects, and financial and support services.
Supplementary Information The online version contains supplementary material available at 10.1007/s00330-020-07684-x.
Affiliation(s)
- Patrick Omoumi, Jonas Richiardi: Department of Radiology, Lausanne University Hospital and University of Lausanne, Rue du Bugnon 46, 1011, Lausanne, Switzerland
- Charles E Kahn: Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Tobias Kober: Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland

144
McCarthy N, Dahlan A, Cook TS, Hare NO, Ryan ML, St John B, Lawlor A, Curran KM. Enterprise imaging and big data: A review from a medical physics perspective. Phys Med 2021; 83:206-220. [DOI: 10.1016/j.ejmp.2021.04.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Revised: 03/24/2021] [Accepted: 04/06/2021] [Indexed: 02/04/2023] Open
145
Cadrin-Chênevert A. Toward a More Quantitative and Specific Representation of Normality. Radiol Artif Intell 2021; 3:e210005. [PMID: 33939776 PMCID: PMC8035574 DOI: 10.1148/ryai.2021210005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Revised: 01/08/2021] [Accepted: 01/11/2021] [Indexed: 11/11/2022]
Affiliation(s)
- Alexandre Cadrin-Chênevert: Department of Medical Imaging, CISSS Lanaudière, affiliated with Laval University, 1000 Blvd St-Anne, Saint-Charles-Borromée, QC, Canada J6E 6J2

146
Barragán-Montero A, Javaid U, Valdés G, Nguyen D, Desbordes P, Macq B, Willems S, Vandewinckele L, Holmström M, Löfman F, Michiels S, Souris K, Sterpin E, Lee JA. Artificial intelligence and machine learning for medical imaging: A technology review. Phys Med 2021; 83:242-256. [PMID: 33979715 PMCID: PMC8184621 DOI: 10.1016/j.ejmp.2021.04.016] [Citation(s) in RCA: 90] [Impact Index Per Article: 30.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Revised: 04/15/2021] [Accepted: 04/18/2021] [Indexed: 02/08/2023] Open
Abstract
Artificial intelligence (AI) has recently become a very popular buzzword, as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, like radiology, pathology or oncology, have seized the opportunity, and considerable efforts in research and development have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key for a safe and efficient use of clinical AI applications relies, in part, on informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with the state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss the new trends and future research directions. This will help the reader to understand how AI methods are now becoming a ubiquitous tool in any medical image analysis workflow and pave the way for the clinical implementation of AI-based solutions.
Affiliation(s)
- Ana Barragán-Montero, Umair Javaid, Steven Michiels, Kevin Souris, John A Lee: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Gilmer Valdés: Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, USA
- Dan Nguyen: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, USA
- Paul Desbordes, Benoit Macq: Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Siri Willems: ESAT/PSI, KU Leuven Belgium & MIRC, UZ Leuven, Belgium
- Edmond Sterpin: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium; KU Leuven, Department of Oncology, Laboratory of Experimental Radiotherapy, Belgium

147
Ankenbrand MJ, Shainberg L, Hock M, Lohr D, Schreiber LM. Sensitivity analysis for interpretation of machine learning based segmentation models in cardiac MRI. BMC Med Imaging 2021; 21:27. [PMID: 33588786 PMCID: PMC7885570 DOI: 10.1186/s12880-021-00551-1] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Accepted: 01/24/2021] [Indexed: 12/15/2022] Open
Abstract
BACKGROUND Image segmentation is a common task in medical imaging, e.g., for volumetry analysis in cardiac MRI. Artificial neural networks are used to automate this task with performance similar to manual operators. However, this performance is only achieved in the narrow tasks networks are trained on. Performance drops dramatically when data characteristics differ from the training set properties. Moreover, neural networks are commonly considered black boxes, because it is hard to understand how they make decisions and why they fail. Therefore, it is also hard to predict whether they will generalize and work well with new data. Here we present a generic method for segmentation model interpretation. Sensitivity analysis is an approach where model input is modified in a controlled manner and the effect of these modifications on the model output is evaluated. This method yields insights into the sensitivity of the model to these alterations and therefore to the importance of certain features on segmentation performance. RESULTS We present an open-source Python library (misas) that facilitates the use of sensitivity analysis with arbitrary data and models. We show that this method is a suitable approach to answer practical questions regarding use and functionality of segmentation models. We demonstrate this in two case studies on cardiac magnetic resonance imaging. The first case study explores the suitability of a published network for use on a public dataset the network has not been trained on. The second case study demonstrates how sensitivity analysis can be used to evaluate the robustness of a newly trained model. CONCLUSIONS Sensitivity analysis is a useful tool for deep learning developers as well as users such as clinicians. It extends their toolbox, enabling and improving interpretability of segmentation models. Enhancing our understanding of neural networks through sensitivity analysis also assists in decision making. Although demonstrated only on cardiac magnetic resonance images, this approach and software are much more broadly applicable.
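The sensitivity-analysis idea described above is model-agnostic: perturb the input in a controlled way and track how a segmentation metric responds. A toy sketch of that loop (a thresholding stand-in for a real network, illustrating the concept rather than the misas API):

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

def toy_model(image, threshold=0.5):
    """Stand-in 'segmentation model': a simple intensity threshold."""
    return image > threshold

def sensitivity_curve(image, truth, transform, strengths):
    """Apply a transform at increasing strengths and record the effect
    on segmentation quality (the core loop of sensitivity analysis)."""
    return [dice(toy_model(transform(image, s)), truth) for s in strengths]

# Synthetic image: bright disc on a dark background.
yy, xx = np.mgrid[:64, :64]
truth = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
image = np.where(truth, 0.9, 0.1)

darken = lambda img, s: img * (1.0 - s)  # brightness-reduction perturbation
scores = sensitivity_curve(image, truth, darken, [0.0, 0.3, 0.6])
print(scores)  # Dice degrades as the perturbation strength grows
```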
Affiliation(s)
- Markus J Ankenbrand, Liliia Shainberg, Michael Hock, David Lohr, Laura M Schreiber: Chair of Cellular and Molecular Imaging, Comprehensive Heart Failure Center (CHFC), University Hospital Würzburg, Am Schwarzenberg 15, 97078, Würzburg, Germany

148
Booth TC, Thompson G, Bulbeck H, Boele F, Buckley C, Cardoso J, Dos Santos Canas L, Jenkinson D, Ashkan K, Kreindler J, Huskens N, Luis A, McBain C, Mills SJ, Modat M, Morley N, Murphy C, Ourselin S, Pennington M, Powell J, Summers D, Waldman AD, Watts C, Williams M, Grant R, Jenkinson MD. A Position Statement on the Utility of Interval Imaging in Standard of Care Brain Tumour Management: Defining the Evidence Gap and Opportunities for Future Research. Front Oncol 2021; 11:620070. [PMID: 33634034 PMCID: PMC7900557 DOI: 10.3389/fonc.2021.620070] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2020] [Accepted: 01/06/2021] [Indexed: 12/19/2022] Open
Abstract
OBJECTIVE To summarise current evidence for the utility of interval imaging in monitoring disease in adult brain tumours, and to develop a position for future evidence gathering while incorporating the application of data science and health economics. METHODS Experts in 'interval imaging' (imaging at pre-planned time-points to assess tumour status), data science, health economics, trial management of adult brain tumours, and patient representatives convened in London, UK. The current evidence on the use of interval imaging for monitoring brain tumours was reviewed. To improve the evidence that interval imaging has a role in disease management, we discussed specific themes of data science, health economics, statistical considerations, patient and carer perspectives, and multi-centre study design. Suggestions for future studies aimed at filling knowledge gaps were discussed. RESULTS Meningioma and glioma were identified as priorities for interval imaging utility analysis. The "monitoring biomarkers" most commonly used in adult brain tumour patients were standard structural MRI features. Interval imaging was commonly scheduled to provide reported imaging prior to planned, regular clinic visits. There is limited evidence relating interval imaging in the absence of clinical deterioration to management change that alters morbidity, mortality, quality of life, or resource use. Progression-free survival is confounded as an outcome measure when using structural MRI in glioma. Uncertainty from imaging causes distress for some patients and their caregivers, while for others it provides an important indicator of disease activity. Any study design that changes imaging regimens should consider the potential for influencing current or planned therapeutic trials, ensure that opportunity costs are measured, and capture indirect benefits and added value. CONCLUSION Evidence for the value, and therefore utility, of regular interval imaging is currently lacking. Ongoing collaborative efforts will improve trial design and generate the evidence to optimise monitoring imaging biomarkers in standard of care brain tumour management.
Collapse
Affiliation(s)
- Thomas C. Booth
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, United Kingdom
- Department of Neuroradiology, King’s College Hospital NHS Foundation Trust, London, United Kingdom
| | - Gerard Thompson
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
| | - Florien Boele
- Leeds Institute of Medical Research at St James’s, St James’s University Hospital, Leeds, United Kingdom
- Faculty of Medicine and Health, Leeds Institute of Health Sciences, University of Leeds, Leeds, United Kingdom
| | - Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, United Kingdom
| | - Liane Dos Santos Canas
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, United Kingdom
| | - Keyoumars Ashkan
- Department of Neurosurgery, King’s College Hospital NHS Foundation Trust, London, United Kingdom
| | - Nicky Huskens
- The Tessa Jowell Brain Cancer Mission, London, United Kingdom
| | - Aysha Luis
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, United Kingdom
- Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Catherine McBain
- Department of Oncology, Christie Hospital NHS Foundation Trust, Manchester, United Kingdom
| | - Samantha J. Mills
- Department of Neuroradiology, The Walton Centre NHS Foundation Trust, Liverpool, United Kingdom
| | - Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, United Kingdom
| | - Nick Morley
- Department of Radiology, Wales Research and Diagnostic PET Imaging Centre, Cardiff University School of Medicine, Cardiff, United Kingdom
| | - Caroline Murphy
- King’s College Trials Unit, King’s College London, London, United Kingdom
| | - Sebastian Ourselin
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, United Kingdom
| | - Mark Pennington
- King’s Health Economics, King’s College London, London, United Kingdom
| | - James Powell
- Department of Oncology, Velindre Cancer Centre, Cardiff, United Kingdom
| | - David Summers
- Department of Neuroradiology, Western General Hospital, Edinburgh, United Kingdom
| | - Adam D. Waldman
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
| | - Colin Watts
- Birmingham Brain Cancer Program, University of Birmingham, Birmingham, United Kingdom
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom
| | - Matthew Williams
- Department of Neuro-oncology, Imperial College Healthcare NHS Trust, London, United Kingdom
| | - Robin Grant
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
| | - Michael D. Jenkinson
- Institute of Translational Medicine, University of Liverpool, Liverpool, United Kingdom
- Department of Neurosurgery, The Walton Centre NHS Foundation Trust, Liverpool, United Kingdom
| |
Collapse
|
149
|
Huff DT, Weisman AJ, Jeraj R. Interpretation and visualization techniques for deep learning models in medical imaging. Phys Med Biol 2021; 66:04TR01.
Abstract
Deep learning (DL) approaches to medical image analysis tasks have recently become popular; however, they suffer from a lack of human interpretability critical for both increasing understanding of the methods' operation and enabling clinical translation. This review summarizes currently available methods for performing image model interpretation and critically evaluates published uses of these methods for medical imaging applications. We divide model interpretation into two categories: (1) understanding model structure and function and (2) understanding model output. Understanding model structure and function summarizes ways to inspect the learned features of the model and how those features act on an image. We discuss techniques for reducing the dimensionality of high-dimensional data and cover autoencoders, both of which can also be leveraged for model interpretation. Understanding model output covers attribution-based methods, such as saliency maps and class activation maps, which produce heatmaps describing the importance of different parts of an image to the model prediction. We describe the mathematics behind these methods, give examples of their use in medical imaging, and compare them against one another. We summarize several published toolkits for model interpretation specific to medical imaging applications, cover limitations of current model interpretation methods, provide recommendations for DL practitioners looking to incorporate model interpretation into their task, and offer general discussion on the importance of model interpretation in medical imaging contexts.
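The attribution-based methods this abstract describes can be sketched in a few lines. Below is a minimal, hedged illustration of a vanilla-gradients saliency map, one of the simplest attribution methods the review covers: the heatmap is the absolute gradient of a class logit with respect to each input pixel. The two-layer ReLU network, its random weights, and the 8x8 toy "image" here are illustrative assumptions, not any model from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=64)                    # toy flattened 8x8 "image"
W1 = rng.normal(scale=0.1, size=(32, 64))  # hidden-layer weights
W2 = rng.normal(scale=0.1, size=(3, 32))   # 3-class output weights

def forward(x):
    """Two-layer ReLU network; returns (logits, pre-activation h)."""
    h = W1 @ x
    return W2 @ np.maximum(h, 0.0), h

def saliency(x, cls):
    """|d logit_cls / d x_i| per input pixel (vanilla gradients)."""
    _, h = forward(x)
    # Manual backprop through ReLU: grad = W1^T (relu'(h) * W2[cls])
    grad = W1.T @ ((h > 0).astype(float) * W2[cls])
    return np.abs(grad).reshape(8, 8)      # heatmap over the input grid

logits, _ = forward(x)
heatmap = saliency(x, int(np.argmax(logits)))  # attribution for top class
```

In a real framework the gradient would come from automatic differentiation (e.g. a backward pass to the input tensor) rather than this hand-derived expression, but the resulting heatmap has the same interpretation: larger values mark pixels whose perturbation most changes the predicted class score.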
Collapse
Affiliation(s)
- Daniel T. Huff
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI
| | - Amy J. Weisman
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI
| | - Robert Jeraj
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI
- Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia
| |
Collapse
|
150
|
Pankhania M. Artificial intelligence and radiology: Combating the COVID-19 conundrum. Indian J Radiol Imaging 2021; 31:S4-S10. [PMID: 33814755 PMCID: PMC7996687 DOI: 10.4103/ijri.ijri_618_20] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2020] [Revised: 08/27/2020] [Accepted: 09/21/2020] [Indexed: 12/24/2022] Open
Abstract
The COVID-19 pandemic has necessitated rapid testing and diagnosis to manage its spread. While reverse transcriptase polymerase chain reaction (RT-PCR) is being used as the gold standard method to diagnose COVID-19, many scientists and doctors have pointed out some challenges related to the variability, accuracy, and affordability of this technique. At the same time, radiological methods, which were being used to diagnose COVID-19 in the early phase of the pandemic in China, were sidelined by many primarily due to their low specificity and the difficulty in conducting a differential diagnosis. However, the utility of radiological methods cannot be neglected. Indeed, over the past few months, healthcare consultants and radiologists in India have been using or advising the use of high-resolution computed tomography (HRCT) of the chest for early diagnosis and tracking of COVID-19, particularly in preoperative and asymptomatic patients. At the same time, scientists have been trying to improve upon the radiological method of COVID-19 diagnosis and monitoring by using artificial intelligence (AI)-based interpretation models. This review compiles and compares these efforts. To this end, the latest scientific literature on the use of radiology and AI-assisted radiology for the diagnosis and monitoring of COVID-19 has been reviewed and presented, highlighting the strengths and limitations of such techniques.
Collapse
Affiliation(s)
- Mayur Pankhania
- Sahyog Imaging Centre, Department of Radiodiagnosis, PDU Medical College and Government Hospital, Rajkot, Gujarat, India
| |
Collapse
|