51. Gurevich E, El Hassan B, El Morr C. Equity within AI systems: What can health leaders expect? Healthc Manage Forum 2023;36:119-124. [PMID: 36226507; PMCID: PMC9976641; DOI: 10.1177/08404704221125368]
Abstract
Artificial Intelligence (AI) for health has great potential: it has already proven successful in enhancing patient outcomes, facilitating professional work, and benefiting administration. However, AI presents challenges related to health equity, defined as the opportunity for people to reach their fullest health potential. This article discusses the opportunities and challenges that AI presents in health and examines ways in which inequities related to AI can be mitigated.
Affiliation(s)
- Christo El Morr: York University, Toronto, Ontario, Canada
52. Nakagawa K, Moukheiber L, Celi LA, Patel M, Mahmood F, Gondim D, Hogarth M, Levenson R. AI in Pathology: What could possibly go wrong? Semin Diagn Pathol 2023;40:100-108. [PMID: 36882343; DOI: 10.1053/j.semdp.2023.02.006]
Abstract
The field of medicine is undergoing rapid digital transformation. Pathologists are now striving to digitize their data, workflows, and interpretations, assisted by the enabling development of whole-slide imaging. Going digital means that the analog process of human diagnosis can be augmented or even replaced by rapidly evolving AI approaches, which are just now entering clinical practice. But with such progress come challenges that reflect a variety of stressors, including the impact of unrepresentative training data with its accompanying implicit bias, data privacy concerns, and fragility of algorithm performance. Beyond such core digital aspects, considerations arise from the difficulties presented by changing disease presentations, diagnostic approaches, and therapeutic options. While some tools, such as data federation, can help broaden data diversity while preserving expertise and local control, they may not be the full answer to some of these issues. The impact of AI in pathology on the field's human practitioners is still very much unknown: the installation of unconscious bias and deference to AI guidance need to be understood and addressed. If AI is widely adopted, it may remove many inefficiencies in daily practice and compensate for staff shortages. It may also cause practitioner deskilling, dethrilling, and burnout. We discuss the technological, clinical, legal, and sociological factors that will influence the adoption of AI in pathology, and its eventual impact for good or ill.
Affiliation(s)
- Leo A Celi: Massachusetts Institute of Technology, Cambridge, MA
53. Attri-VAE: Attribute-based interpretable representations of medical images with variational autoencoders. Comput Med Imaging Graph 2023;104:102158. [PMID: 36638626; DOI: 10.1016/j.compmedimag.2022.102158]
Abstract
Deep learning (DL) methods in which interpretability is intrinsically considered as part of the model are required to better understand the relationship of clinical and imaging-based attributes with DL outcomes, thus facilitating their use in the reasoning behind medical decisions. Latent space representations built with variational autoencoders (VAE) do not ensure individual control of data attributes. Attribute-based methods enforcing attribute disentanglement have been proposed in the literature for classical computer vision tasks on benchmark data. In this paper, we propose a VAE approach, the Attri-VAE, that includes an attribute regularization term to associate clinical and medical imaging attributes with different regularized dimensions in the generated latent space, enabling a better-disentangled interpretation of the attributes. Furthermore, the generated attention maps explain the attribute encoding in the regularized latent space dimensions. Using the Attri-VAE approach, we analyzed healthy subjects and myocardial infarction patients with clinical, cardiac morphology, and radiomics attributes. The proposed model provided an excellent trade-off between reconstruction fidelity, disentanglement, and interpretability, outperforming state-of-the-art VAE approaches according to several quantitative metrics. The resulting latent space allowed the generation of realistic synthetic data along the trajectory between two distinct input samples or along a specific attribute dimension, to better interpret changes between different cardiac conditions.
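The attribute-regularization idea can be illustrated with a small sketch. This is an assumption on my part: the penalty below follows the common attribute-regularized-VAE formulation, in which pairwise differences along one latent dimension are pushed to agree in sign with pairwise differences of the target attribute; the function name and the `delta` scale are illustrative, not taken from the paper.

```python
import numpy as np

def attribute_regularization(z_dim, attr, delta=1.0):
    """Penalty pushing one latent dimension to vary monotonically with an attribute.

    z_dim: values of a single latent dimension for a batch of samples
    attr:  corresponding attribute values (e.g. a radiomics feature)
    """
    dz = z_dim[:, None] - z_dim[None, :]   # pairwise latent differences
    da = attr[:, None] - attr[None, :]     # pairwise attribute differences
    # Penalize disagreement between the soft sign of latent differences
    # and the hard sign of attribute differences.
    return float(np.mean((np.tanh(delta * dz) - np.sign(da)) ** 2))

z = np.array([0.2, 0.9, 1.7])
attr = np.array([10.0, 20.0, 30.0])  # attribute increases together with z
# An aligned attribute incurs a smaller penalty than a reversed one.
assert attribute_regularization(z, attr) < attribute_regularization(z, attr[::-1])
```

Minimizing such a term alongside the usual VAE reconstruction and KL losses is what ties a chosen attribute to a single, interpretable latent dimension.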
54. Mahapatra D, Poellinger A, Reyes M. Graph Node Based Interpretability Guided Sample Selection for Active Learning. IEEE Trans Med Imaging 2023;42:661-673. [PMID: 36240033; DOI: 10.1109/tmi.2022.3215017]
Abstract
While supervised learning techniques have demonstrated state-of-the-art performance in many medical image analysis tasks, sample selection plays an important role: choosing the most informative samples lets the system attain optimum performance with a minimum of labeled samples, which translates to fewer expert interventions and lower cost. Active Learning (AL) methods for informative sample selection are effective in boosting the performance of computer-aided diagnosis systems when limited labels are available. Conventional approaches to AL have mostly focused on the single-label setting, where a sample has only one disease label from the set of possible labels. These approaches do not perform optimally in the multi-label setting, where a sample can have multiple disease labels (e.g. in chest X-ray images). In this paper we propose a novel sample selection approach based on graph analysis to identify informative samples in a multi-label setting. For every analyzed sample, each class label is denoted as a separate node of a graph. Building on findings from the interpretability of deep learning models, edge interactions in this graph characterize similarity between the corresponding interpretability saliency map encodings. We explore different types of graph aggregation to identify informative samples for active learning. We apply our method to public chest X-ray and medical image datasets, and report improved results over state-of-the-art AL techniques in terms of model performance, learning rates, and robustness.
55. Bakrania A, Joshi N, Zhao X, Zheng G, Bhat M. Artificial intelligence in liver cancers: Decoding the impact of machine learning models in clinical diagnosis of primary liver cancers and liver cancer metastases. Pharmacol Res 2023;189:106706. [PMID: 36813095; DOI: 10.1016/j.phrs.2023.106706]
Abstract
Liver cancers are the fourth leading cause of cancer-related mortality worldwide. In the past decade, breakthroughs in the field of artificial intelligence (AI) have inspired the development of algorithms in the cancer setting. A growing body of recent studies has evaluated machine learning (ML) and deep learning (DL) algorithms for pre-screening, diagnosis, and management of liver cancer patients through diagnostic image analysis, biomarker discovery, and prediction of personalized clinical outcomes. Despite the promise of these early AI tools, there is a significant need to explain the 'black box' of AI and work towards deployment to enable ultimate clinical translatability. Certain emerging fields, such as RNA nanomedicine for targeted liver cancer therapy, may also benefit from the application of AI, specifically in nano-formulation research and development, given that they are still largely reliant on lengthy trial-and-error experiments. In this paper, we present the current landscape of AI in liver cancers along with the challenges of AI in liver cancer diagnosis and management. Finally, we discuss the future perspectives of AI application in liver cancer and how a multidisciplinary approach using AI in nanomedicine could accelerate the transition of personalized liver cancer medicine from the bench to the clinic.
Affiliation(s)
- Anita Bakrania: Toronto General Hospital Research Institute, Toronto, ON, Canada; Ajmera Transplant Program, University Health Network, Toronto, ON, Canada; Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada
- Xun Zhao: Toronto General Hospital Research Institute, Toronto, ON, Canada; Ajmera Transplant Program, University Health Network, Toronto, ON, Canada
- Gang Zheng: Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Mamatha Bhat: Toronto General Hospital Research Institute, Toronto, ON, Canada; Ajmera Transplant Program, University Health Network, Toronto, ON, Canada; Division of Gastroenterology, Department of Medicine, University Health Network and University of Toronto, Toronto, ON, Canada; Department of Medical Sciences, Toronto, ON, Canada
56. AI: Can It Make a Difference to the Predictive Value of Ultrasound Breast Biopsy? Diagnostics (Basel) 2023;13:811. [PMID: 36832299; PMCID: PMC9955683; DOI: 10.3390/diagnostics13040811]
Abstract
(1) Background: This study aims to compare the ground truth (pathology results) against the BI-RADS classification of images acquired during breast ultrasound diagnostic examinations that led to a biopsy, and against the result of processing the same images through the AI algorithm KOIOS DS TM (KOIOS). (2) Methods: All results of biopsies performed with ultrasound guidance during 2019 were retrieved from the pathology department. Readers selected the image that best represented the BI-RADS classification, confirmed its correlation to the biopsied image, and submitted it to the KOIOS AI software. The BI-RADS classification of the diagnostic study performed at our institution was set against the KOIOS classification, and both were compared to the pathology reports. (3) Results: 403 cases were included in this study. Pathology rendered 197 malignant and 206 benign reports. Four biopsies on BI-RADS 0 and two images are included. Of fifty BI-RADS 3 cases biopsied, only seven rendered cancers. All but one had positive or suspicious cytology; all were classified as suspicious by KOIOS. Using KOIOS, 17 BI-RADS 3 biopsies could have been avoided. Of 347 BI-RADS 4, 5, and 6 cases, 190 were malignant (54.7%). Because only KOIOS suspicious and probably-malignant categories should be biopsied, 312 biopsies would have yielded 187 malignant lesions (60%), but 10 cancers would have been missed. (4) Conclusions: KOIOS had a higher ratio of positive biopsies in this selected case study vis-à-vis the BI-RADS 4, 5, and 6 categories. A large number of biopsies in the BI-RADS 3 category could have been avoided.
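The yield percentages quoted in this abstract can be reproduced with two divisions (the figures are taken directly from the abstract; the reported 54.7% appears to be truncated rather than rounded):

```python
# Positive-biopsy ratio for BI-RADS 4, 5 and 6 cases: 190 malignant of 347 biopsied
birads_yield = 190 / 347
# Ratio if only KOIOS "suspicious"/"probably malignant" cases had been biopsied
koios_yield = 187 / 312

print(f"{birads_yield:.1%}")  # 54.8% (reported as 54.7%)
print(f"{koios_yield:.1%}")   # 59.9% (reported as 60%)
```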
57. Xu X, Lin L, Sun S, Wu S. A review of the application of three-dimensional convolutional neural networks for the diagnosis of Alzheimer's disease using neuroimaging. Rev Neurosci 2023 (epub ahead of print). [PMID: 36729918; DOI: 10.1515/revneuro-2022-0122]
Abstract
Alzheimer's disease (AD) is a degenerative disorder that leads to progressive, irreversible cognitive decline. To obtain an accurate and timely diagnosis and detect AD at an early stage, numerous approaches based on convolutional neural networks (CNNs) using neuroimaging data have been proposed. Because 3D CNNs can extract more spatial discrimination information than 2D CNNs, they have emerged as a promising research direction in the diagnosis of AD. The aim of this article is to present the current state of the art in the diagnosis of AD using 3D CNN models and neuroimaging modalities, focusing on the 3D CNN architectures and classification methods used, and to highlight potential future research topics. To give the reader a better overview, we briefly introduce the commonly used imaging datasets and the fundamentals of CNN architectures. We then carefully analyze the existing studies on AD diagnosis, divided into two levels according to their inputs: 3D subject-level CNNs and 3D patch-level CNNs, highlighting their contributions and significance in the field. Finally, we discuss the key findings and challenges from these studies, identify open research challenges, and point out future research directions as a roadmap for the field.
Affiliation(s)
- Xinze Xu, Lan Lin, Shen Sun, Shuicai Wu: Intelligent Physiological Measurement and Clinical Translation, Beijing International Platform for Scientific and Technological Cooperation, Department of Biomedical Engineering, Faculty of Environment and Life Sciences, Beijing University of Technology, Beijing 100124, China
58. Milam ME, Koo CW. The current status and future of FDA-approved artificial intelligence tools in chest radiology in the United States. Clin Radiol 2023;78:115-122. [PMID: 36180271; DOI: 10.1016/j.crad.2022.08.135]
Abstract
Artificial intelligence (AI) is becoming more widespread within radiology. Capabilities that AI algorithms currently provide include detection, segmentation, classification, and quantification of pathological findings. AI software has created challenges for the traditional United States Food and Drug Administration (FDA) approval process for medical devices, given its ability to evolve over time with incremental data input. Currently, there are 190 FDA-approved radiology AI-based software devices, 42 of which pertain specifically to thoracic radiology. The majority of these algorithms are approved for the detection and/or analysis of pulmonary nodules, monitoring the placement of endotracheal tubes and indwelling catheters, detection of emergent findings, and assessment of pulmonary parenchyma; however, as the technology evolves, many other potential applications can be explored, such as evaluation of non-idiopathic-pulmonary-fibrosis interstitial lung diseases, synthesis of imaging, clinical, and/or laboratory data to yield comprehensive diagnoses, and survival or prognosis prediction for certain pathologies. With increasing physician and developer engagement, transparency, and frequent communication between developers and regulatory agencies such as the FDA, AI medical devices will be able to provide a critical supplement to patient management and ultimately enhance physicians' ability to improve patient care.
Affiliation(s)
- M E Milam: Department of Radiology, Mayo Clinic, Rochester, MN, USA
- C W Koo: Department of Radiology, Mayo Clinic, Rochester, MN, USA
59. Hadjiiski L, Cha K, Chan HP, Drukker K, Morra L, Näppi JJ, Sahiner B, Yoshida H, Chen Q, Deserno TM, Greenspan H, Huisman H, Huo Z, Mazurchuk R, Petrick N, Regge D, Samala R, Summers RM, Suzuki K, Tourassi G, Vergara D, Armato SG. AAPM task group report 273: Recommendations on best practices for AI and machine learning for computer-aided diagnosis in medical imaging. Med Phys 2023;50:e1-e24. [PMID: 36565447; DOI: 10.1002/mp.16188]
Abstract
Rapid advances in artificial intelligence (AI) and machine learning, and specifically in deep learning (DL) techniques, have enabled broad application of these methods in health care. The promise of the DL approach has spurred further interest in computer-aided diagnosis (CAD) development and applications using both "traditional" machine learning methods and newer DL-based methods. We use the term CAD-AI to refer to this expanded clinical decision support environment that uses traditional and DL-based AI methods. Numerous studies have been published to date on the development of machine learning tools for computer-aided, or AI-assisted, clinical tasks. However, most of these machine learning models are not ready for clinical deployment. It is of paramount importance to ensure that a clinical decision support tool undergoes proper training and rigorous validation of its generalizability and robustness before adoption for patient care in the clinic. To address these important issues, the American Association of Physicists in Medicine (AAPM) Computer-Aided Image Analysis Subcommittee (CADSC) is charged, in part, to develop recommendations on practices and standards for the development and performance assessment of computer-aided decision support systems. The committee has previously published two opinion papers on the evaluation of CAD systems and issues associated with user training and quality assurance of these systems in the clinic. With machine learning techniques continuing to evolve and CAD applications expanding to new stages of the patient care process, the current task group report considers the broader issues common to the development of most, if not all, CAD-AI applications and their translation from the bench to the clinic. The goal is to bring attention to the proper training and validation of machine learning algorithms that may improve their generalizability and reliability and accelerate the adoption of CAD-AI systems for clinical decision support.
Affiliation(s)
- Lubomir Hadjiiski: Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- Kenny Cha: U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Heang-Ping Chan: Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- Karen Drukker: Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Lia Morra: Department of Control and Computer Engineering, Politecnico di Torino, Torino, Italy
- Janne J Näppi: 3D Imaging Research, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Berkman Sahiner: U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Hiroyuki Yoshida: 3D Imaging Research, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Quan Chen: Department of Radiation Medicine, University of Kentucky, Lexington, Kentucky, USA
- Thomas M Deserno: Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Braunschweig, Germany
- Hayit Greenspan: Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel & Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Henkjan Huisman: Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands
- Zhimin Huo: Tencent America, Palo Alto, California, USA
- Richard Mazurchuk: Division of Cancer Prevention, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA
- Daniele Regge: Radiology Unit, Candiolo Cancer Institute, FPO-IRCCS, Candiolo, Italy; Department of Surgical Sciences, University of Turin, Turin, Italy
- Ravi Samala: U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Ronald M Summers: Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Maryland, USA
- Kenji Suzuki: Institute of Innovative Research, Tokyo Institute of Technology, Tokyo, Japan
- Daniel Vergara: Department of Radiology, Yale New Haven Hospital, New Haven, Connecticut, USA
- Samuel G Armato: Department of Radiology, University of Chicago, Chicago, Illinois, USA
60. Artificial intelligence in breast pathology - dawn of a new era. NPJ Breast Cancer 2023;9:5. [PMID: 36720886; PMCID: PMC9889344; DOI: 10.1038/s41523-023-00507-4]
61. Shen Y, Heacock L, Elias J, Hentel KD, Reig B, Shih G, Moy L. ChatGPT and Other Large Language Models Are Double-edged Swords. Radiology 2023;307:e230163. [PMID: 36700838; DOI: 10.1148/radiol.230163]
Affiliation(s)
- Yiqiu Shen: New York University, Center for Data Science, 60 5th Ave, New York, NY 10011
- Laura Heacock: New York University School of Medicine, Department of Radiology, 160 E 34th St, New York, NY 10016
- Jonathan Elias: Weill Cornell Medicine, Department of Primary Care, 525 East 68th Street, New York, NY 10065
- Keith D Hentel: Weill Cornell Medicine, Department of Radiology, 525 East 68th Street, New York, NY 10065
- Beatriu Reig: New York University School of Medicine, Department of Radiology, 160 E 34th St, New York, NY 10016
- George Shih: Weill Cornell Medicine, Department of Radiology, 525 East 68th Street, New York, NY 10065
- Linda Moy: New York University School of Medicine, Department of Radiology, 160 E 34th St, New York, NY 10016
62. Blanes-Selva V, Asensio-Cuesta S, Doñate-Martínez A, Pereira Mesquita F, García-Gómez JM. User-centred design of a clinical decision support system for palliative care: Insights from healthcare professionals. Digit Health 2023;9:20552076221150735. [PMID: 36644661; PMCID: PMC9837281; DOI: 10.1177/20552076221150735]
Abstract
Objective: Although clinical decision support systems (CDSS) have many benefits for clinical practice, they also face several barriers to acceptance by professionals. Our objective in this study was to design and validate The Aleph palliative care (PC) CDSS through a user-centred method, considering the predictions of the artificial intelligence (AI) core, usability, and user experience (UX).
Methods: We performed two rounds of individual evaluation sessions with potential users. Each session included a model evaluation, a task test, and a usability and UX assessment.
Results: The machine learning (ML) predictive models outperformed the participants in the three predictive tasks. The System Usability Scale (SUS) scores were 62.7 ± 14.1 and 65 ± 26.2 on a 100-point scale for the two rounds, respectively, while the User Experience Questionnaire - Short Version (UEQ-S) scores were 1.42 and 1.5 on the -3 to 3 scale.
Conclusions: The think-aloud method and the inclusion of the UX dimension helped us identify most of the workflow implementation issues. The system has good UX hedonic qualities; participants were interested in the tool and responded positively to it. Performance regarding usability was modest but acceptable.
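For context on the 0-100 SUS figures reported above, the standard SUS scoring rule converts ten 1-5 Likert responses into that range. This is a generic sketch of the published SUS procedure, not code from the study:

```python
def sus_score(responses):
    """Standard System Usability Scale scoring: ten 1-5 Likert items -> 0-100.

    Odd-numbered items are positively worded (each contributes response - 1);
    even-numbered items are negatively worded (each contributes 5 - response).
    The summed contributions are scaled by 2.5 to reach the 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)
                     for i, r in enumerate(responses)]
    return sum(contributions) * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0 (best possible)
print(sus_score([3] * 10))                         # 50.0 (all-neutral responses)
```

Scores in the low-to-mid 60s, as reported here, sit slightly below the commonly cited average SUS benchmark of 68, consistent with the authors' "modest but acceptable" reading.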
Affiliation(s)
- Vicent Blanes-Selva: Biomedical Data Science Lab, Instituto Universitario de Tecnologías de la Información y Comunicaciones (ITACA), Universitat Politècnica de València, Valencia, Spain
- Sabina Asensio-Cuesta: Biomedical Data Science Lab, Instituto Universitario de Tecnologías de la Información y Comunicaciones (ITACA), Universitat Politècnica de València, Valencia, Spain
- Felipe Pereira Mesquita: Divisão de Hematologia, Departamento de Clínica Médica, Universidade Federal de Juiz de Fora, Minas Gerais, Brasil
- Juan M. García-Gómez: Biomedical Data Science Lab, Instituto Universitario de Tecnologías de la Información y Comunicaciones (ITACA), Universitat Politècnica de València, Valencia, Spain
63. Navidi Z, Sun J, Chan RH, Hanneman K, Al-Arnawoot A, Munim A, Rakowski H, Maron MS, Woo A, Wang B, Tsang W. Interpretable machine learning for automated left ventricular scar quantification in hypertrophic cardiomyopathy patients. PLOS Digit Health 2023;2:e0000159. [PMID: 36812626; PMCID: PMC9931226; DOI: 10.1371/journal.pdig.0000159]
Abstract
Scar quantification on cardiovascular magnetic resonance (CMR) late gadolinium enhancement (LGE) images is important in risk-stratifying patients with hypertrophic cardiomyopathy (HCM), given the importance of scar burden in predicting clinical outcomes. We aimed to develop a machine learning (ML) model that contours left ventricular (LV) endo- and epicardial borders and quantifies CMR LGE images from HCM patients. We retrospectively studied 2557 unprocessed images from 307 HCM patients followed at the University Health Network (Canada) and Tufts Medical Center (USA). LGE images were manually segmented by two experts using two different software packages. Using a 6SD LGE intensity cutoff as the gold standard, a 2-dimensional convolutional neural network (CNN) was trained on 80% of the data and tested on the remaining 20%. Model performance was evaluated using the Dice similarity coefficient (DSC), Bland-Altman analysis, and Pearson's correlation. The 6SD model DSC scores were good to excellent at 0.91 ± 0.04, 0.83 ± 0.03, and 0.64 ± 0.09 for LV endocardium, epicardium, and scar segmentation, respectively. The bias and limits of agreement for the percentage of LGE to LV mass were low (-0.53 ± 2.71%), and the correlation was high (r = 0.92). This fully automated, interpretable ML algorithm allows rapid and accurate scar quantification from CMR LGE images. The program does not require manual image pre-processing and was trained with multiple experts and software packages, increasing its generalizability.
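The Dice similarity coefficient used to evaluate these segmentations is the standard overlap measure 2|A∩B| / (|A| + |B|) between predicted and reference masks. A minimal sketch of the generic definition (not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy 2x2 masks with two pixels each, overlapping in one pixel -> DSC = 0.5
print(dice_coefficient([[1, 1], [0, 0]], [[1, 0], [1, 0]]))  # 0.5
```

A DSC of 1.0 means identical masks and 0.0 means no overlap, which is why the reported 0.91 (endocardium) reads as excellent while 0.64 (scar) is more moderate.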
Affiliation(s)
- Zeinab Navidi: Division of Cardiology, Peter Munk Cardiac Center, Toronto General Hospital, University Health Network, University of Toronto, Toronto, Canada; Department of Computer Science, University of Toronto, Toronto, Canada; Vector Institute, Toronto, Canada
- Jesse Sun: Division of Cardiology, Peter Munk Cardiac Center, Toronto General Hospital, University Health Network, University of Toronto, Toronto, Canada
- Raymond H. Chan: Division of Cardiology, Peter Munk Cardiac Center, Toronto General Hospital, University Health Network, University of Toronto, Toronto, Canada
- Kate Hanneman: Department of Radiology, University Health Network, University of Toronto, Toronto, Canada
- Amna Al-Arnawoot: Department of Radiology, University Health Network, University of Toronto, Toronto, Canada
- Harry Rakowski: Division of Cardiology, Peter Munk Cardiac Center, Toronto General Hospital, University Health Network, University of Toronto, Toronto, Canada
- Martin S. Maron: Division of Cardiology, Tufts Medical Center, Boston, United States of America
- Anna Woo: Division of Cardiology, Peter Munk Cardiac Center, Toronto General Hospital, University Health Network, University of Toronto, Toronto, Canada
- Bo Wang: Division of Cardiology, Peter Munk Cardiac Center, Toronto General Hospital, University Health Network, University of Toronto, Toronto, Canada; Department of Computer Science, University of Toronto, Toronto, Canada; Vector Institute, Toronto, Canada; Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada
- Wendy Tsang: Division of Cardiology, Peter Munk Cardiac Center, Toronto General Hospital, University Health Network, University of Toronto, Toronto, Canada
64. Saeed W, Omlin C. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110273]
65. Wei R, Xu X, Duan Y, Zhang N, Sun J, Li H, Li Y, Li Y, Zeng C, Han X, Zhou F, Huang M, Li R, Zhuo Z, Barkhof F, Cole JH, Liu Y. Brain age gap in neuromyelitis optica spectrum disorders and multiple sclerosis. J Neurol Neurosurg Psychiatry 2023;94:31-37. [PMID: 36216455; DOI: 10.1136/jnnp-2022-329680]
Abstract
OBJECTIVE To evaluate the clinical significance of deep learning-derived brain age prediction in neuromyelitis optica spectrum disorder (NMOSD) relative to relapsing-remitting multiple sclerosis (RRMS). METHODS This cohort study used data retrospectively collected from 6 tertiary neurological centres in China between 2009 and 2018. In total, 199 patients with NMOSD and 200 patients with RRMS were studied alongside 269 healthy controls. Clinical follow-up was available in 85 patients with NMOSD and 124 patients with RRMS (mean duration NMOSD=5.8±1.9 (1.9-9.9) years, RRMS=5.2±1.7 (1.5-9.2) years). Deep learning was used to learn 'brain age' from MRI scans in the healthy controls and estimate the brain age gap (BAG) in patients. RESULTS A significantly higher BAG was found in the NMOSD (5.4±8.2 years) and RRMS (13.0±14.7 years) groups compared with healthy controls. A higher baseline disability score and advanced brain volume loss were associated with increased BAG in both patient groups. A longer disease duration was associated with increased BAG in RRMS. BAG significantly predicted Expanded Disability Status Scale worsening in patients with NMOSD and RRMS. CONCLUSIONS There is a clear BAG in NMOSD, although smaller than in RRMS. The BAG is a clinically relevant MRI marker in NMOSD and RRMS.
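The brain age gap (BAG) reported above is a simple derived marker: a model trained on healthy controls predicts age from MRI, and the gap is the difference from chronological age. A minimal sketch of the computation (the ages below are invented for illustration and the regressor is assumed, not the paper's deep learning model):

```python
import numpy as np

def brain_age_gap(predicted_ages, chronological_ages):
    """BAG = model-predicted brain age minus chronological age.
    Positive values mean the brain 'looks older' than it is."""
    return np.asarray(predicted_ages, dtype=float) - np.asarray(chronological_ages, dtype=float)

# Hypothetical illustration: a patient group with advanced atrophy
# shows a positive mean BAG relative to its chronological ages.
predicted = [48.0, 61.5, 39.0]
chronological = [42.0, 55.0, 36.0]
gaps = brain_age_gap(predicted, chronological)
mean_gap = gaps.mean()  # mean gap in years
```

In the study this per-patient gap is then related to disability scores and brain volume loss.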
Collapse
Affiliation(s)
- Ren Wei
- Department of Radiology, Beijing Tiantan Hospital, Beijing, China
| | - Xiaolu Xu
- Department of Radiology, Beijing Tiantan Hospital, Beijing, China
| | - Yunyun Duan
- Department of Radiology, Beijing Tiantan Hospital, Beijing, China
| | - Ningnannan Zhang
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin, China
| | - Jie Sun
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin, China
| | - Haiqing Li
- Department of Radiology, Huashan Hospital Fudan University, Shanghai, China
| | - Yuxin Li
- Department of Radiology, Huashan Hospital Fudan University, Shanghai, China
| | - Yongmei Li
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
| | - Chun Zeng
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
| | - Xuemei Han
- Department of Neurology, China-Japan Union Hospital of Jilin University, Changchun, China
| | - Fuqing Zhou
- Department of Radiology, The First Affiliated Hospital of Nanchang University, Nanchang, China
| | - Muhua Huang
- Department of Radiology, The First Affiliated Hospital of Nanchang University, Nanchang, China
| | - Runzhi Li
- Department of Neurology, Beijing Tiantan Hospital, Beijing, China
| | - Zhizheng Zhuo
- Department of Radiology, Beijing Tiantan Hospital, Beijing, China
| | - Frederik Barkhof
- Department of Radiology and Nuclear Medicine, Neuroscience Campus Amsterdam, VU University Medical Centre Amsterdam, Amsterdam, The Netherlands
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
| | - James H Cole
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Dementia Research Centre, Queen Square Institute of Neurology, University College London, London, UK
| | - Yaou Liu
- Department of Radiology, Beijing Tiantan Hospital, Beijing, China
| |
Collapse
|
66
|
Kwak K, Stanford W, Dayan E. Identifying the regional substrates predictive of Alzheimer's disease progression through a convolutional neural network model and occlusion. Hum Brain Mapp 2022; 43:5509-5519. [PMID: 35904092 PMCID: PMC9704798 DOI: 10.1002/hbm.26026] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Revised: 06/02/2022] [Accepted: 07/08/2022] [Indexed: 01/15/2023] Open
Abstract
Progressive brain atrophy is a key neuropathological hallmark of Alzheimer's disease (AD) dementia. However, atrophy patterns along the progression of AD dementia are diffuse and variable and are often missed by univariate methods. Consequently, identifying the major regional atrophy patterns underlying AD dementia progression is challenging. In the current study, we propose a method that evaluates the degree to which specific regional atrophy patterns are predictive of AD dementia progression, while holding all other atrophy changes constant using a total sample of 334 subjects. We first trained a dense convolutional neural network model to differentiate individuals with mild cognitive impairment (MCI) who progress to AD dementia versus those with a stable MCI diagnosis. Then, we retested the model multiple times, each time occluding different regions of interest (ROIs) from the model's testing set's input. We also validated this approach by occluding ROIs based on Braak's staging scheme. We found that the hippocampus, fusiform, and inferior temporal gyri were the strongest predictors of AD dementia progression, in agreement with established staging models. We also found that occlusion of limbic ROIs defined according to Braak stage III had the largest impact on the performance of the model. Our predictive model reveals the major regional patterns of atrophy predictive of AD dementia progression. These results highlight the potential for early diagnosis and stratification of individuals with prodromal AD dementia based on patterns of cortical atrophy, prior to interventional clinical trials.
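The occlusion approach described above can be sketched in a few lines: re-score the model with each region zeroed out and treat the drop in output as that region's importance. The toy "model" below is a stand-in for the trained CNN, and the ROI mask is a placeholder for an anatomically defined region such as the hippocampus:

```python
import numpy as np

def occlusion_importance(model, image, rois):
    """Estimate how much each ROI contributes to a model's prediction
    by zeroing it out and measuring the drop in output score."""
    baseline = model(image)
    importance = {}
    for name, mask in rois.items():
        occluded = image.copy()
        occluded[mask] = 0.0          # occlude the region
        importance[name] = baseline - model(occluded)
    return importance

# Toy stand-in: the "model" scores the mean intensity of the image.
model = lambda img: float(img.mean())
image = np.ones((8, 8))
roi = np.zeros((8, 8), dtype=bool)
roi[:4, :4] = True                    # a 4x4 placeholder region
scores = occlusion_importance(model, image, {"roi": roi})
```

Regions whose occlusion causes the largest performance drop are read as the strongest predictors, which is how the study ranks the hippocampus and temporal ROIs.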
Collapse
Affiliation(s)
- Kichang Kwak
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| | - William Stanford
- Neuroscience Curriculum, Biological and Biomedical Sciences Program, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| | - Eran Dayan
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Neuroscience Curriculum, Biological and Biomedical Sciences Program, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| | | |
Collapse
|
67
|
Müller L, Kloeckner R, Mildenberger P, Pinto Dos Santos D. [Validation and implementation of artificial intelligence in radiology: Quo vadis in 2022?]. RADIOLOGIE (HEIDELBERG, GERMANY) 2022; 63:381-386. [PMID: 36510007 DOI: 10.1007/s00117-022-01097-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 11/17/2022] [Indexed: 12/14/2022]
Abstract
BACKGROUND The hype around artificial intelligence (AI) in radiology continues and the number of approved AI tools is growing steadily. Despite the great potential, integration into clinical routine in radiology remains limited. In addition, the large number of individual applications poses a challenge for clinical routine, as individual applications have to be selected for different questions and organ systems, which increases the complexity and time required. OBJECTIVES This review will discuss the current status of validation and implementation of AI tools in clinical routine, and identify possible approaches for an improved assessment of the generalizability of results of AI tools. MATERIALS AND METHODS A literature search in various literature and product databases as well as publications, position papers, and reports from various stakeholders was conducted for this review. RESULTS Scientific evidence and independent validation studies are available for only a few commercial AI tools and the generalizability of the results often remains questionable. CONCLUSIONS One challenge is the multitude of offerings for individual, specific application areas by a large number of manufacturers, making integration into the existing site-specific IT infrastructure more difficult. Furthermore, remuneration for the use of AI tools in clinical routine by health insurance companies in Germany is lacking. But in order for reimbursement to be granted, the clinical utility of new applications must first be proven. Such proof, however, is lacking for most applications.
Collapse
Affiliation(s)
- Lukas Müller
- Klinik und Poliklinik für Diagnostische und Interventionelle Radiologie, Universitätsmedizin Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
| | - Roman Kloeckner
- Institut für Interventionelle Radiologie, Universitätsklinikum Schleswig-Holstein - Campus Lübeck, Lübeck, Germany
| | - Peter Mildenberger
- Klinik und Poliklinik für Diagnostische und Interventionelle Radiologie, Universitätsmedizin Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
| | - Daniel Pinto Dos Santos
- Institut für Diagnostische und Interventionelle Radiologie, Uniklinik Köln, Köln, Germany
- Institut für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Frankfurt, Frankfurt am Main, Germany
| |
Collapse
|
68
|
Santos GNM, da Silva HEC, Figueiredo PTDS, Mesquita CRM, Melo NS, Stefani CM, Leite AF. The Introduction of Artificial Intelligence in Diagnostic Radiology Curricula: a Text and Opinion Systematic Review. INTERNATIONAL JOURNAL OF ARTIFICIAL INTELLIGENCE IN EDUCATION 2022. [DOI: 10.1007/s40593-022-00324-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
69
|
Walston SL, Matsumoto T, Miki Y, Ueda D. Artificial intelligence-based model for COVID-19 prognosis incorporating chest radiographs and clinical data: a retrospective model development and validation study. Br J Radiol 2022; 95:20220058. [PMID: 36193755 PMCID: PMC9733620 DOI: 10.1259/bjr.20220058] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Revised: 08/19/2022] [Accepted: 08/23/2022] [Indexed: 11/19/2022] Open
Abstract
OBJECTIVES The purpose of this study was to develop an artificial intelligence-based model to prognosticate COVID-19 patients at admission by combining clinical data and chest radiographs. METHODS This retrospective study used the Stony Brook University COVID-19 dataset of 1384 inpatients. After exclusions, 1356 patients were randomly divided into training (1083) and test datasets (273). We implemented three artificial intelligence models, which classified mortality, ICU admission, or ventilation risk. Each model had three submodels with different inputs: clinical data, chest radiographs, and both. We showed the importance of the variables using SHapley Additive exPlanations (SHAP) values. RESULTS The mortality prediction model was best overall with area under the curve, sensitivity, specificity, and accuracy of 0.79 (0.72-0.86), 0.74 (0.68-0.79), 0.77 (0.61-0.88), and 0.74 (0.69-0.79) for the clinical data-based model; 0.77 (0.69-0.85), 0.67 (0.61-0.73), 0.81 (0.67-0.92), 0.70 (0.64-0.75) for the image-based model, and 0.86 (0.81-0.91), 0.76 (0.70-0.81), 0.77 (0.61-0.88), 0.76 (0.70-0.81) for the mixed model. The mixed model had the best performance (p value < 0.05). The radiographs ranked fourth for prognostication overall, and first of the inpatient tests assessed. CONCLUSIONS These results suggest that prognosis models become more accurate if AI-derived chest radiograph features and clinical data are used together. ADVANCES IN KNOWLEDGE This AI model evaluates chest radiographs together with clinical data in order to classify patients as having high or low mortality risk. This work shows that chest radiographs taken at admission have significant COVID-19 prognostic information compared to clinical data other than age and sex.
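The SHAP values used above have a closed form for linear models, which makes the idea easy to see: each feature's attribution is its weight times its deviation from the background mean, and attributions sum to the difference between the prediction and the average prediction. A minimal sketch, with invented features and weights (the study itself applies SHAP to a more complex mixed clinical/imaging model):

```python
import numpy as np

def linear_shap(weights, x, background):
    """Exact SHAP values for a linear model f(x) = w·x + b:
    phi_i = w_i * (x_i - E[x_i]), with E[.] taken over background data."""
    return np.asarray(weights) * (np.asarray(x) - background.mean(axis=0))

# Invented example: two clinical features for one patient.
weights = np.array([2.0, -1.0])
background = np.array([[1.0, 1.0], [-1.0, -1.0]])   # background mean is [0, 0]
x = np.array([3.0, 2.0])
phi = linear_shap(weights, x, background)
# phi sums to f(x) - mean f(background): (2*3 - 1*2) - 0 = 4
```

Ranking features by the magnitude of such attributions is what lets the authors say radiographs ranked fourth overall among inputs.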
Collapse
Affiliation(s)
| | | | - Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, Japan
| | | |
Collapse
|
70
|
Gerussi A, Scaravaglio M, Cristoferi L, Verda D, Milani C, De Bernardi E, Ippolito D, Asselta R, Invernizzi P, Kather JN, Carbone M. Artificial intelligence for precision medicine in autoimmune liver disease. Front Immunol 2022; 13:966329. [PMID: 36439097 PMCID: PMC9691668 DOI: 10.3389/fimmu.2022.966329] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Accepted: 10/13/2022] [Indexed: 09/10/2023] Open
Abstract
Autoimmune liver diseases (AiLDs) are rare autoimmune conditions of the liver and the biliary tree with unknown etiology and limited treatment options. AiLDs are inherently characterized by a high degree of complexity, which poses great challenges in understanding their etiopathogenesis, developing novel biomarkers and risk-stratification tools, and, eventually, generating new drugs. Artificial intelligence (AI) is considered one of the best candidates to support researchers and clinicians in making sense of biological complexity. In this review, we offer a primer on AI and machine learning for clinicians, and discuss recent available literature on its applications in medicine and more specifically how it can help to tackle major unmet needs in AiLDs.
Collapse
Affiliation(s)
- Alessio Gerussi
- Division of Gastroenterology, Center for Autoimmune Liver Diseases, Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- European Reference Network on Hepatological Diseases (ERN RARE-LIVER), San Gerardo Hospital, Monza, Italy
| | - Miki Scaravaglio
- Division of Gastroenterology, Center for Autoimmune Liver Diseases, Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- European Reference Network on Hepatological Diseases (ERN RARE-LIVER), San Gerardo Hospital, Monza, Italy
| | - Laura Cristoferi
- Division of Gastroenterology, Center for Autoimmune Liver Diseases, Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- European Reference Network on Hepatological Diseases (ERN RARE-LIVER), San Gerardo Hospital, Monza, Italy
- Bicocca Bioinformatics Biostatistics and Bioimaging Centre - B4, School of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
| | | | - Chiara Milani
- Division of Gastroenterology, Center for Autoimmune Liver Diseases, Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- European Reference Network on Hepatological Diseases (ERN RARE-LIVER), San Gerardo Hospital, Monza, Italy
| | - Elisabetta De Bernardi
- Department of Medicine and Surgery and Tecnomed Foundation, University of Milano-Bicocca, Monza, Italy
| | | | - Rosanna Asselta
- Humanitas Clinical and Research Center, Rozzano, Milan, Italy
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Milan, Italy
| | - Pietro Invernizzi
- Division of Gastroenterology, Center for Autoimmune Liver Diseases, Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- European Reference Network on Hepatological Diseases (ERN RARE-LIVER), San Gerardo Hospital, Monza, Italy
| | - Jakob Nikolas Kather
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
| | - Marco Carbone
- Division of Gastroenterology, Center for Autoimmune Liver Diseases, Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- European Reference Network on Hepatological Diseases (ERN RARE-LIVER), San Gerardo Hospital, Monza, Italy
| |
Collapse
|
71
|
A Systematic Review on the Use of Explainability in Deep Learning Systems for Computer Aided Diagnosis in Radiology: Limited Use of Explainable AI? Eur J Radiol 2022; 157:110592. [DOI: 10.1016/j.ejrad.2022.110592] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 10/19/2022] [Accepted: 11/01/2022] [Indexed: 11/06/2022]
|
72
|
Combi C, Amico B, Bellazzi R, Holzinger A, Moore JH, Zitnik M, Holmes JH. A manifesto on explainability for artificial intelligence in medicine. Artif Intell Med 2022; 133:102423. [PMID: 36328669 DOI: 10.1016/j.artmed.2022.102423] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Revised: 10/04/2022] [Accepted: 10/04/2022] [Indexed: 12/13/2022]
Abstract
The rapid increase of interest in, and use of, artificial intelligence (AI) in computer applications has raised a parallel concern about its ability (or lack thereof) to provide understandable, or explainable, output to users. This concern is especially legitimate in biomedical contexts, where patient safety is of paramount importance. This position paper brings together seven researchers working in the field with different roles and perspectives, to explore in depth the concept of explainable AI, or XAI, offering a functional definition and conceptual framework or model that can be used when considering XAI. This is followed by a series of desiderata for attaining explainability in AI, each of which touches upon a key domain in biomedicine.
Collapse
Affiliation(s)
| | | | | | | | - Jason H Moore
- Cedars-Sinai Medical Center, West Hollywood, CA, USA
| | - Marinka Zitnik
- Harvard Medical School and Broad Institute of MIT & Harvard, MA, USA
| | - John H Holmes
- University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
| |
Collapse
|
73
|
Bahl M. Artificial Intelligence in Clinical Practice: Implementation Considerations and Barriers. JOURNAL OF BREAST IMAGING 2022; 4:632-639. [PMID: 36530476 PMCID: PMC9741727 DOI: 10.1093/jbi/wbac065] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2022] [Indexed: 09/06/2023]
Abstract
The rapid growth of artificial intelligence (AI) in radiology has led to Food and Drug Administration clearance of more than 20 AI algorithms for breast imaging. The steps involved in the clinical implementation of an AI product include identifying all stakeholders, selecting the appropriate product to purchase, evaluating it with a local data set, integrating it into the workflow, and monitoring its performance over time. Despite the potential benefits of improved quality and increased efficiency with AI, several barriers, such as high costs and liability concerns, may limit its widespread implementation. This article lists currently available AI products for breast imaging, describes the key elements of clinical implementation, and discusses barriers to clinical implementation.
Collapse
Affiliation(s)
- Manisha Bahl
- Massachusetts General Hospital, Department of Radiology, Boston, MA, USA
| |
Collapse
|
74
|
You S, Reyes M. Influence of contrast and texture based image modifications on the performance and attention shift of U-Net models for brain tissue segmentation. FRONTIERS IN NEUROIMAGING 2022; 1:1012639. [PMID: 37555149 PMCID: PMC10406260 DOI: 10.3389/fnimg.2022.1012639] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 10/12/2022] [Indexed: 08/10/2023]
Abstract
Contrast and texture modifications applied during training or test-time have recently shown promising results to enhance the generalization performance of deep learning segmentation methods in medical image analysis. However, this phenomenon has not yet been investigated in depth. In this study, we examined it in a controlled experimental setting, with datasets from the Human Connectome Project and a large set of simulated MR protocols, in order to mitigate data confounders and investigate possible explanations as to why model performance changes when applying different levels of contrast and texture-based modifications. Our experiments confirm previous findings regarding the improved performance of models subjected to contrast and texture modifications employed during training and/or testing time, but further show the interplay when these operations are combined, as well as the regimes of model improvement/worsening across scanning parameters. Furthermore, our findings demonstrate a spatial attention shift phenomenon of trained models, occurring for different levels of model performance, and varying in relation to the type of applied image modification.
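Contrast modifications of the kind studied here are typically simple intensity transforms applied during training or at test time. One common choice is a random gamma curve; the sketch below assumes intensities scaled to [0, 1] and is only illustrative of the class of perturbation the study varies, not its exact protocol:

```python
import numpy as np

def random_gamma(image, rng, low=0.7, high=1.5):
    """Contrast-style augmentation: raise a [0, 1]-scaled image to a
    random gamma. gamma < 1 brightens mid-tones, gamma > 1 darkens them."""
    gamma = rng.uniform(low, high)
    return np.clip(image, 0.0, 1.0) ** gamma

rng = np.random.default_rng(0)
image = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # synthetic intensity ramp
augmented = random_gamma(image, rng)
```

Applying such transforms at training time exposes the segmentation model to a wider range of effective MR contrasts; applying them at test time probes (or compensates for) contrast mismatch.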
Collapse
Affiliation(s)
- Suhang You
- Medical Image Analysis Group, ARTORG, Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
| | | |
Collapse
|
75
|
Mascagni P, Alapatt D, Sestini L, Altieri MS, Madani A, Watanabe Y, Alseidi A, Redan JA, Alfieri S, Costamagna G, Boškoski I, Padoy N, Hashimoto DA. Computer vision in surgery: from potential to clinical value. NPJ Digit Med 2022; 5:163. [PMID: 36307544 PMCID: PMC9616906 DOI: 10.1038/s41746-022-00707-5] [Citation(s) in RCA: 35] [Impact Index Per Article: 17.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Accepted: 10/10/2022] [Indexed: 11/09/2022] Open
Abstract
Hundreds of millions of operations are performed worldwide each year, and the rising uptake in minimally invasive surgery has enabled fiber optic cameras and robots to become both important tools to conduct surgery and sensors from which to capture information about surgery. Computer vision (CV), the application of algorithms to analyze and interpret visual data, has become a critical technology through which to study the intraoperative phase of care with the goals of augmenting surgeons' decision-making processes, supporting safer surgery, and expanding access to surgical care. While much work has been performed on potential use cases, there are currently no CV tools widely used for diagnostic or therapeutic applications in surgery. Using laparoscopic cholecystectomy as an example, we reviewed current CV techniques that have been applied to minimally invasive surgery and their clinical applications. Finally, we discuss the challenges and obstacles that remain to be overcome for broader implementation and adoption of CV in surgery.
Collapse
Affiliation(s)
- Pietro Mascagni
- Gemelli Hospital, Catholic University of the Sacred Heart, Rome, Italy
- IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France
- Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada
| | - Deepak Alapatt
- ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France
| | - Luca Sestini
- ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
| | - Maria S Altieri
- Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada
- Department of Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
| | - Amin Madani
- Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada
- Department of Surgery, University Health Network, Toronto, ON, Canada
| | - Yusuke Watanabe
- Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada
- Department of Surgery, University of Hokkaido, Hokkaido, Japan
| | - Adnan Alseidi
- Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada
- Department of Surgery, University of California San Francisco, San Francisco, CA, USA
| | - Jay A Redan
- Department of Surgery, AdventHealth-Celebration Health, Celebration, FL, USA
| | - Sergio Alfieri
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
| | - Guido Costamagna
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
| | - Ivo Boškoski
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
| | - Nicolas Padoy
- IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France
- ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France
| | - Daniel A Hashimoto
- Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada
- Department of Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
| |
Collapse
|
76
|
Khosravi B, Rouzrokh P, Faghani S, Moassefi M, Vahdati S, Mahmoudi E, Chalian H, Erickson BJ. Machine Learning and Deep Learning in Cardiothoracic Imaging: A Scoping Review. Diagnostics (Basel) 2022; 12:2512. [PMID: 36292201 PMCID: PMC9600598 DOI: 10.3390/diagnostics12102512] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2022] [Revised: 10/14/2022] [Accepted: 10/15/2022] [Indexed: 01/17/2023] Open
Abstract
Machine-learning (ML) and deep-learning (DL) algorithms are part of a group of modeling algorithms that grasp the hidden patterns in data based on a training process, enabling them to extract complex information from the input data. In the past decade, these algorithms have been increasingly used for image processing, specifically in the medical domain. Cardiothoracic imaging is one of the early adopters of ML/DL research, and the COVID-19 pandemic resulted in more research focus on the feasibility and applications of ML/DL in cardiothoracic imaging. In this scoping review, we systematically searched available peer-reviewed medical literature on cardiothoracic imaging and quantitatively extracted key data elements in order to get a big picture of how ML/DL have been used in the rapidly evolving cardiothoracic imaging field. During this report, we provide insights on different applications of ML/DL and some nuances pertaining to this specific field of research. Finally, we provide general suggestions on how researchers can make their research more than just a proof-of-concept and move toward clinical adoption.
Collapse
Affiliation(s)
- Bardia Khosravi
- Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN 55905, USA
| | - Pouria Rouzrokh
- Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN 55905, USA
| | - Shahriar Faghani
- Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
| | - Mana Moassefi
- Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
| | - Sanaz Vahdati
- Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
| | - Elham Mahmoudi
- Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
| | - Hamid Chalian
- Department of Radiology, Cardiothoracic Imaging, University of Washington, Seattle, WA 98195, USA
| | - Bradley J. Erickson
- Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
| |
Collapse
|
77
|
Watanabe A, Ketabi S, Namdar K, Khalvati F. Improving disease classification performance and explainability of deep learning models in radiology with heatmap generators. FRONTIERS IN RADIOLOGY 2022; 2:991683. [PMID: 37492678 PMCID: PMC10365129 DOI: 10.3389/fradi.2022.991683] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Accepted: 09/21/2022] [Indexed: 07/27/2023]
Abstract
As deep learning is widely used in the radiology field, the explainability of Artificial Intelligence (AI) models is becoming increasingly essential to gain clinicians' trust when using the models for diagnosis. In this research, three experiment sets were conducted with a U-Net architecture to improve the disease classification performance while enhancing the heatmaps corresponding to the model's focus through incorporating heatmap generators during training. All experiments used the dataset that contained chest radiographs, associated labels from one of the three conditions ["normal", "congestive heart failure (CHF)", and "pneumonia"], and numerical information regarding a radiologist's eye-gaze coordinates on the images. The paper that introduced this dataset developed a U-Net model, which was treated as the baseline model for this research, to show how the eye-gaze data can be used in multi-modal training for explainability improvement and disease classification. To compare the classification performances among this research's three experiment sets and the baseline model, the 95% confidence intervals (CI) of the area under the receiver operating characteristic curve (AUC) were measured. The best method achieved an AUC of 0.913 with a 95% CI of [0.860, 0.966]. "Pneumonia" and "CHF" classes, which the baseline model struggled the most to classify, had the greatest improvements, resulting in AUCs of 0.859 with a 95% CI of [0.732, 0.957] and 0.962 with a 95% CI of [0.933, 0.989], respectively. The decoder of the U-Net for the best-performing proposed method generated heatmaps that highlight the determining image parts in model classifications. These predicted heatmaps, which can be used for the explainability of the model, also improved to align well with the radiologist's eye-gaze data. 
Hence, this work showed that incorporating heatmap generators and eye-gaze information into training can simultaneously improve disease classification and provide explainable visuals that align well with how the radiologist viewed the chest radiographs when making diagnosis.
Collapse
Affiliation(s)
- Akino Watanabe
- Engineering Science, University of Toronto, Toronto, ON, Canada
| | - Sara Ketabi
- Department of Diagnostic Imaging, Neurosciences / Mental Health Research Program, The Hospital for Sick Children, Toronto, ON, Canada
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON, Canada
| | - Khashayar Namdar
- Department of Diagnostic Imaging, Neurosciences / Mental Health Research Program, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Vector Institute, Toronto, ON, Canada
| | - Farzad Khalvati
- Department of Diagnostic Imaging, Neurosciences / Mental Health Research Program, The Hospital for Sick Children, Toronto, ON, Canada
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON, Canada
- Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Vector Institute, Toronto, ON, Canada
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Computer Science, University of Toronto, Toronto, ON, Canada
| |
Collapse
|
78
|
Benchmarking saliency methods for chest X-ray interpretation. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00536-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Saliency methods, which produce heat maps that highlight the areas of the medical image that influence model prediction, are often presented to clinicians as an aid in diagnostic decision-making. However, rigorous investigation of the accuracy and reliability of these strategies is necessary before they are integrated into the clinical setting. In this work, we quantitatively evaluate seven saliency methods, including Grad-CAM, across multiple neural network architectures using two evaluation metrics. We establish the first human benchmark for chest X-ray segmentation in a multilabel classification set-up, and examine under what clinical conditions saliency maps might be more prone to failure in localizing important pathologies compared with a human expert benchmark. We find that (1) while Grad-CAM generally localized pathologies better than the other evaluated saliency methods, all seven performed significantly worse compared with the human benchmark, (2) the gap in localization performance between Grad-CAM and the human benchmark was largest for pathologies that were smaller in size and had shapes that were more complex, and (3) model confidence was positively correlated with Grad-CAM localization performance. Our work demonstrates that several important limitations of saliency methods must be addressed before we can rely on them for deep learning explainability in medical imaging.
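Benchmarks like the one above score a saliency map against an expert segmentation. A simplified sketch of one plausible localization metric (binarise the heatmap at its most salient pixels, then compute intersection-over-union with the expert mask) is shown below; this is illustrative only, not the paper's exact protocol or metrics:

```python
import numpy as np

def saliency_iou(heatmap, mask, top_frac=0.05):
    """Binarise a saliency heatmap at its top `top_frac` fraction of pixels
    and score overlap with an expert segmentation mask via IoU."""
    heatmap = np.asarray(heatmap, dtype=float).ravel()
    mask = np.asarray(mask, dtype=bool).ravel()
    k = max(1, int(round(top_frac * heatmap.size)))
    thresh = np.sort(heatmap)[-k]        # keep the k most salient pixels
    pred = heatmap >= thresh
    inter = np.logical_and(pred, mask).sum()
    union = np.logical_or(pred, mask).sum()
    return inter / union if union else 0.0
```

A metric of this form makes the paper's second finding concrete: for small, complex pathologies the expert mask covers few pixels, so even modest misplacement of the salient region collapses the intersection term.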
Collapse
|
79
|
Lipkova J, Chen RJ, Chen B, Lu MY, Barbieri M, Shao D, Vaidya AJ, Chen C, Zhuang L, Williamson DFK, Shaban M, Chen TY, Mahmood F. Artificial intelligence for multimodal data integration in oncology. Cancer Cell 2022; 40:1095-1110. [PMID: 36220072 PMCID: PMC10655164 DOI: 10.1016/j.ccell.2022.09.012] [Citation(s) in RCA: 87] [Impact Index Per Article: 43.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Revised: 07/12/2022] [Accepted: 09/15/2022] [Indexed: 02/07/2023]
Abstract
In oncology, the patient state is characterized by a whole spectrum of modalities, ranging from radiology, histology, and genomics to electronic health records. Current artificial intelligence (AI) models operate mainly in the realm of a single modality, neglecting the broader clinical context, which inevitably diminishes their potential. Integration of different data modalities provides opportunities to increase robustness and accuracy of diagnostic and prognostic models, bringing AI closer to clinical practice. AI models are also capable of discovering novel patterns within and across modalities suitable for explaining differences in patient outcomes or treatment resistance. The insights gleaned from such models can guide exploration studies and contribute to the discovery of novel biomarkers and therapeutic targets. To support these advances, here we present a synopsis of AI methods and strategies for multimodal data fusion and association discovery. We outline approaches for AI interpretability and directions for AI-driven exploration through multimodal data interconnections. We examine challenges in clinical adoption and discuss emerging solutions.
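The review above surveys strategies for multimodal data fusion. A toy sketch of the two simplest families — early (feature-level) fusion by concatenation and late (decision-level) fusion by weighted averaging — assuming per-modality embeddings or class probabilities have already been computed (function names are illustrative):

```python
import numpy as np

def early_fusion(*embeddings):
    """Early (feature-level) fusion: concatenate per-modality feature
    vectors into one joint representation for a downstream model."""
    return np.concatenate([np.asarray(e).ravel() for e in embeddings])

def late_fusion(*probs, weights=None):
    """Late (decision-level) fusion: weighted average of per-modality
    class-probability vectors."""
    probs = np.stack([np.asarray(p, dtype=float) for p in probs])
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))
    fused = np.average(probs, axis=0, weights=weights)
    return fused / fused.sum()   # renormalise to a probability vector
```

Intermediate fusion (learning cross-modal interactions inside the network), which the review also covers, sits between these two extremes and requires a trainable model rather than a fixed rule.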
Collapse
Affiliation(s)
- Jana Lipkova
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Richard J Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
| | - Bowen Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Department of Computer Science, Harvard University, Cambridge, MA, USA
| | - Ming Y Lu
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
| | - Matteo Barbieri
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Daniel Shao
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Harvard-MIT Health Sciences and Technology (HST), Cambridge, MA, USA
| | - Anurag J Vaidya
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Harvard-MIT Health Sciences and Technology (HST), Cambridge, MA, USA
| | - Chengkuan Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Luoting Zhuang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
| | - Drew F K Williamson
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Muhammad Shaban
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Tiffany Y Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA.
| |
Collapse
|
80
|
Alderden J, Kennerly SM, Wilson A, Dimas J, McFarland C, Yap DY, Zhao L, Yap TL. Explainable Artificial Intelligence for Predicting Hospital-Acquired Pressure Injuries in COVID-19-Positive Critical Care Patients. Comput Inform Nurs 2022; 40:659-665. [PMID: 36206146 PMCID: PMC9555852 DOI: 10.1097/cin.0000000000000943] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
|
81
|
Quality assessment of machine learning models for diagnostic imaging in orthopaedics: A systematic review. Artif Intell Med 2022; 132:102396. [DOI: 10.1016/j.artmed.2022.102396] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 08/30/2022] [Accepted: 08/30/2022] [Indexed: 01/17/2023]
|
82
|
Li X, Liu X, Deng X, Fan Y. Interplay between Artificial Intelligence and Biomechanics Modeling in the Cardiovascular Disease Prediction. Biomedicines 2022; 10:2157. [PMID: 36140258 PMCID: PMC9495955 DOI: 10.3390/biomedicines10092157] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 08/26/2022] [Accepted: 08/28/2022] [Indexed: 11/16/2022] Open
Abstract
Cardiovascular disease (CVD) is the most common cause of morbidity and mortality worldwide, and early, accurate diagnosis is key to improving and optimizing its prognosis. Recent progress in artificial intelligence (AI), especially machine learning (ML) technology, makes it possible to predict CVD. In this review, we first briefly introduce the development of artificial intelligence. We then summarize ML applications in cardiovascular disease, including ML-based models that directly predict CVD from risk factors or medical imaging findings, and ML-based hemodynamics with vascular geometries, equations, and methods for indirect assessment of CVD. We also discuss case studies in which ML serves as a surrogate for computational fluid dynamics in data-driven and physics-driven models; such surrogates can accelerate disease prediction and reduce manual intervention. Lastly, we briefly summarize the research difficulties and discuss prospects for the future development of AI technology in cardiovascular disease.
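The surrogate idea mentioned above can be illustrated in miniature: fit a cheap regression model to a handful of precomputed CFD results, then evaluate the fit instead of re-running the simulation. This toy version (not any specific model from the review) uses a polynomial in one hypothetical geometric parameter:

```python
import numpy as np

def fit_surrogate(params, cfd_outputs, degree=2):
    """Fit a polynomial surrogate mapping a geometric parameter (e.g. a
    stenosis-severity index) to a precomputed CFD quantity (e.g. pressure
    drop), so new geometries can be evaluated without new simulations."""
    coeffs = np.polyfit(params, cfd_outputs, degree)
    return lambda x: np.polyval(coeffs, x)
```

Real surrogates in the literature replace the polynomial with neural networks over full vascular geometries, but the workflow — offline simulations once, fast inference thereafter — is the same.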
Collapse
Affiliation(s)
- Xiaoyin Li
- Beijing Advanced Innovation Centre for Biomedical Engineering, Key Laboratory for Biomechanics and Mechanobiology of Chinese Education Ministry, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
| | - Xiao Liu
- Beijing Advanced Innovation Centre for Biomedical Engineering, Key Laboratory for Biomechanics and Mechanobiology of Chinese Education Ministry, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
| | - Xiaoyan Deng
- Beijing Advanced Innovation Centre for Biomedical Engineering, Key Laboratory for Biomechanics and Mechanobiology of Chinese Education Ministry, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
| | - Yubo Fan
- Beijing Advanced Innovation Centre for Biomedical Engineering, Key Laboratory for Biomechanics and Mechanobiology of Chinese Education Ministry, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- School of Engineering Medicine, Beihang University, Beijing 100083, China
| |
Collapse
|
83
|
Meedeniya D, Kumarasinghe H, Kolonne S, Fernando C, Díez IDLT, Marques G. Chest X-ray analysis empowered with deep learning: A systematic review. Appl Soft Comput 2022; 126:109319. [PMID: 36034154 PMCID: PMC9393235 DOI: 10.1016/j.asoc.2022.109319] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2021] [Revised: 03/16/2022] [Accepted: 07/12/2022] [Indexed: 11/12/2022]
Abstract
Chest radiographs are widely used in the medical domain, and chest X-ray imaging currently plays a particularly important role in the diagnosis of conditions such as pneumonia and COVID-19. Recent developments in deep learning techniques have led to promising performance in medical image classification and prediction tasks. With the availability of chest X-ray datasets and emerging trends in data engineering techniques, related publications have grown in recent years. Only a few survey papers have addressed chest X-ray classification using deep learning techniques, and they lack an analysis of the trends in recent studies. This systematic review explores and provides a comprehensive analysis of studies that have used deep learning techniques to analyze chest X-ray images. We present state-of-the-art deep learning based pneumonia and COVID-19 detection solutions, trends in recent studies, publicly available datasets, guidance for following a deep learning process, challenges, and potential future research directions in this domain. The findings and conclusions of the reviewed work are organized so that researchers and developers working in the same domain can use them to support decisions in their own research.
Collapse
|
84
|
Natural Language Processing in Radiology: Update on Clinical Applications. J Am Coll Radiol 2022; 19:1271-1285. [PMID: 36029890 DOI: 10.1016/j.jacr.2022.06.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2022] [Revised: 05/25/2022] [Accepted: 06/03/2022] [Indexed: 11/24/2022]
Abstract
Radiological reports are a valuable source of information used to guide clinical care and support research. Organizing and managing this content, however, frequently requires manual curation because the reports are typically unstructured, and manual review for clinical knowledge extraction is costly and time-consuming. Natural language processing (NLP) is a set of methods developed to extract structured meaning from a body of text and can be used to optimize the workflow of health care professionals. Specifically, NLP methods can support radiologists through decision support systems and improve the management of patients' medical data. In this study, we highlight the opportunities NLP offers in the field of radiology. We present a comprehensive review of the NLP methods most commonly used to extract information from radiological reports and of tools developed to improve radiological workflow using this information. Finally, we review the important limitations of these tools and discuss observations and trends in the application of NLP to radiology that could benefit the field in the future.
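The simplest family of report-extraction methods the NLP literature builds on is rule-based matching with negation handling. A toy sketch (far simpler than the systems reviewed; the finding terms and negation cues are illustrative):

```python
import re

# Illustrative negation cues; real systems (e.g. NegEx-style rules) use
# much richer trigger lists and scope handling.
NEGATIONS = re.compile(r"\b(no|without|negative for|free of)\b", re.IGNORECASE)

def extract_findings(report, targets=("pneumothorax", "effusion", "consolidation")):
    """Flag each target finding per sentence, honouring simple negation
    cues ('No pneumothorax.' -> absent)."""
    results = {t: "not mentioned" for t in targets}
    for sentence in re.split(r"[.\n]+", report):
        for t in targets:
            if t in sentence.lower():
                negated = NEGATIONS.search(sentence) is not None
                results[t] = "absent" if negated else "present"
    return results
```

Modern transformer-based extractors replace these hand-written rules, but rule-based baselines remain common in radiology NLP because they are transparent and auditable.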
Collapse
|
85
|
Li C, Li W, Liu C, Zheng H, Cai J, Wang S. Artificial intelligence in multi-parametric magnetic resonance imaging: A review. Med Phys 2022; 49:e1024-e1054. [PMID: 35980348 DOI: 10.1002/mp.15936] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 08/01/2022] [Accepted: 08/04/2022] [Indexed: 11/06/2022] Open
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) is an indispensable tool in the clinical workflow for the diagnosis and treatment planning of various diseases. Machine learning-based artificial intelligence (AI) methods, especially those adopting the deep learning technique, have been extensively employed to perform mpMRI image classification, segmentation, registration, detection, reconstruction, and super-resolution. The current availability of increasing computational power and fast-improving AI algorithms has empowered numerous computer-based systems for applying mpMRI to disease diagnosis, imaging-guided radiotherapy, patient risk and overall survival time prediction, and the development of advanced quantitative imaging technology for magnetic resonance fingerprinting. However, the wide application of these developed systems in the clinic is still limited by a number of factors, including robustness, reliability, and interpretability. This survey aims to provide an overview for new researchers in the field as well as radiologists, with the hope that they can understand the general concepts, main application scenarios, and remaining challenges of AI in mpMRI.
Collapse
Affiliation(s)
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
| | - Wen Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
| | - Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
| | - Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
| | - Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
| | - Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China.,Peng Cheng Laboratory, Shenzhen, 518066, China.,Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
| |
Collapse
|
86
|
Breast cancer patient characterisation and visualisation using deep learning and fisher information networks. Sci Rep 2022; 12:14004. [PMID: 35978031 PMCID: PMC9385866 DOI: 10.1038/s41598-022-17894-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Accepted: 08/02/2022] [Indexed: 11/18/2022] Open
Abstract
Breast cancer is the most commonly diagnosed female malignancy globally, with better survival rates when diagnosed early. Mammography is the gold standard in breast cancer screening programmes, but despite technological advances, high error rates are still reported. Machine learning techniques, and in particular deep learning (DL), have been successfully used for breast cancer detection and classification. However, the added complexity that makes DL models so successful reduces their ability to explain which features are relevant to the model, or whether the model is biased. The main aim of this study is to propose a novel visualisation to help characterise breast cancer patients, using Fisher Information Networks (FINs) on features extracted from mammograms by a DL model. In the proposed visualisation, patients are mapped out according to their similarities, and the map can be used to study new patients in a ‘patient-like-me’ approach. When applied to the CBIS-DDSM dataset, the methodology proved competitive and can (i) facilitate the analysis and decision-making process in breast cancer diagnosis with the assistance of the FIN visualisations and ‘patient-like-me’ analysis, and (ii) help improve diagnostic accuracy and reduce overdiagnosis by identifying the most likely diagnosis based on clinical similarities with neighbouring patients.
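Stripped of the Fisher-information machinery, the ‘patient-like-me’ idea is a nearest-neighbour lookup in an embedding space. This toy version uses plain Euclidean distance rather than the Fisher information metric the paper proposes, with illustrative function and label names:

```python
import numpy as np

def patient_like_me(query, cohort_features, cohort_labels, k=5):
    """Return the indices and labels of the k patients most similar to the
    query in feature space, plus the majority label among them."""
    d = np.linalg.norm(np.asarray(cohort_features, dtype=float)
                       - np.asarray(query, dtype=float), axis=1)
    nn = np.argsort(d)[:k]                       # k closest patients
    labels = [cohort_labels[i] for i in nn]
    majority = max(set(labels), key=labels.count)
    return nn, labels, majority
```

The paper's contribution is precisely in replacing the naive distance used here with one induced by the model, so that "similar" means "similar for the purpose of diagnosis".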
Collapse
|
87
|
Chamberlin JH, Aquino G, Nance S, Wortham A, Leaphart N, Paladugu N, Brady S, Baird H, Fiegel M, Fitzpatrick L, Kocher M, Ghesu F, Mansoor A, Hoelzer P, Zimmermann M, James WE, Dennis DJ, Houston BA, Kabakus IM, Baruah D, Schoepf UJ, Burt JR. Automated diagnosis and prognosis of COVID-19 pneumonia from initial ER chest X-rays using deep learning. BMC Infect Dis 2022; 22:637. [PMID: 35864468 PMCID: PMC9301895 DOI: 10.1186/s12879-022-07617-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2021] [Accepted: 07/14/2022] [Indexed: 11/10/2022] Open
Abstract
Background Airspace disease seen on chest X-rays is an important triage consideration for patients initially presenting to the emergency department (ED) with suspected COVID-19 infection. The purpose of this study is to evaluate a previously trained interpretable deep learning algorithm for the diagnosis and prognosis of COVID-19 pneumonia from chest X-rays obtained in the ED. Methods This retrospective study included 2456 adult patients (50% RT-PCR positive for COVID-19) who received both a chest X-ray and a SARS-CoV-2 RT-PCR test from January 2020 to March 2021 in the emergency department of a single U.S. institution. A total of 2000 patients were included as an additional training cohort and 456 patients formed the randomized internal holdout testing cohort for a previously trained Siemens AI-Radiology Companion deep learning convolutional neural network algorithm. Three cardiothoracic fellowship-trained radiologists systematically evaluated each chest X-ray and generated an airspace disease area-based severity score, which was compared against the same score produced by the artificial intelligence. Interobserver agreement, diagnostic accuracy, and predictive capability for inpatient outcomes were assessed. The principal statistical tests used in this study include univariate and multivariate logistic regression. Results Overall ICC was 0.820 (95% CI 0.790–0.840). The diagnostic AUC for SARS-CoV-2 RT-PCR positivity was 0.890 (95% CI 0.861–0.920) for the neural network and 0.936 (95% CI 0.918–0.960) for the radiologists. The airspace opacity score by AI alone predicted ICU admission (AUC = 0.870) and mortality (AUC = 0.829) in all patients. Adding age and BMI to a multivariate logistic model improved mortality prediction (AUC = 0.906). Conclusion The deep learning algorithm provides an accurate and interpretable assessment of the disease burden in COVID-19 pneumonia on chest radiographs.
The reported severity scores correlate with expert assessment and accurately predict important clinical outcomes. The algorithm contributes additional prognostic information not currently incorporated into patient management.
Supplementary Information The online version contains supplementary material available at 10.1186/s12879-022-07617-7.
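The univariate logistic regression underlying the outcome analysis above (severity score in, probability of ICU admission or death out) can be sketched in a minimal, self-contained form. This is plain gradient descent on the log-loss, not the authors' statistical software; names and data are illustrative:

```python
import numpy as np

def fit_logistic(x, y, lr=0.1, n_iter=5000):
    """Univariate logistic regression (e.g. severity score -> outcome)
    fitted by gradient descent on the mean log-loss; returns (b0, b1)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    b0, b1 = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))   # predicted probability
        g0 = np.mean(p - y)                         # gradient wrt intercept
        g1 = np.mean((p - y) * x)                   # gradient wrt slope
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

def predict_proba(x, b0, b1):
    """Predicted outcome probability for a given severity score."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * np.asarray(x, dtype=float))))
```

The multivariate version in the study adds covariates such as age and BMI as extra columns, which is why the mortality AUC improved from 0.829 to 0.906.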
Collapse
Affiliation(s)
- Jordan H Chamberlin
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Gilberto Aquino
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Sophia Nance
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Andrew Wortham
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Nathan Leaphart
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Namrata Paladugu
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Sean Brady
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Henry Baird
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Matthew Fiegel
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Logan Fitzpatrick
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Madison Kocher
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | | | | | | | | | - W Ennis James
- Department of Internal Medicine, Division of Pulmonary, Critical Care, Allergy & Sleep Medicine, Medical University of South Carolina, Charleston, SC, USA
| | - D Jameson Dennis
- Department of Internal Medicine, Division of Pulmonary, Critical Care, Allergy & Sleep Medicine, Medical University of South Carolina, Charleston, SC, USA
| | - Brian A Houston
- Department of Internal Medicine, Division of Cardiology, Medical University of South Carolina, Charleston, SC, USA
| | - Ismail M Kabakus
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Dhiraj Baruah
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - U Joseph Schoepf
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Jeremy R Burt
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA. .,MUSC-ART, Cardiothoracic Imaging, 25 Courtenay Drive, MSC 226, 2nd Floor, Rm 2256, Charleston, SC, 29425, USA.
| |
Collapse
|
88
|
Chang Y, Zhang J, Pham HA, Lyu J, Li Z. Interpretable Dimension Reduction for MRI Channel Suppression. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:1456-1459. [PMID: 36085960 DOI: 10.1109/embc48229.2022.9871474] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Channel suppression can reduce redundant information in multiple-channel receiver coils and accelerate reconstruction to meet real-time imaging requirements. Principal component analysis (PCA) has been used for channel suppression, but it is difficult to interpret because all channels contribute to the principal components. Furthermore, the importance of interpretability in machine learning has recently attracted increasing attention in radiology. To improve the interpretability of PCA-based channel suppression, a sparse PCA method is proposed that forces most coils' loadings to zero. Channel suppression is formulated as solving a nonlinear eigenvalue problem using the inverse power method instead of direct matrix decomposition. Experimental results on in vivo data show that sparse PCA-based channel suppression not only improves interpretability through sparse channels but also improves reconstruction quality compared to standard PCA-based reconstruction, with similar reconstruction time.
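The paper solves a nonlinear eigenvalue problem with the inverse power method; a simplified stand-in for the sparse-PCA idea is truncated power iteration, which alternates a covariance multiply with hard-thresholding all but a few loadings to zero. This sketch is illustrative, not the authors' algorithm:

```python
import numpy as np

def sparse_leading_pc(X, n_nonzero=2, n_iter=200, seed=0):
    """Leading sparse principal component of data X (samples x channels)
    via truncated power iteration: after each covariance multiply, keep
    only the `n_nonzero` largest-magnitude loadings."""
    X = np.asarray(X, dtype=float)
    X = X - X.mean(axis=0)
    C = X.T @ X / len(X)                      # sample covariance
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(C.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = C @ v
        keep = np.argsort(np.abs(v))[-n_nonzero:]
        mask = np.zeros_like(v)
        mask[keep] = 1.0
        v = v * mask                          # zero all other loadings
        v /= np.linalg.norm(v)
    return v
```

Because most loadings are exactly zero, the retained component reads directly as "these few coils matter", which is the interpretability gain the abstract describes.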
Collapse
|
89
|
Cruz-Bastida JP, Pearson E, Al-Hallaq H. Toward understanding deep learning classification of anatomic sites: lessons from the development of a CBCT projection classifier. J Med Imaging (Bellingham) 2022; 9:045002. [PMID: 35903414 PMCID: PMC9311487 DOI: 10.1117/1.jmi.9.4.045002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Accepted: 06/16/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: Deep learning (DL) applications depend strongly on the training dataset and convolutional neural network (CNN) architecture; however, it is unclear how to select such parameters objectively. We investigate the classification performance of different DL models and training schemes for the anatomic classification of cone-beam computed tomography (CBCT) projections. Approach: CBCT scans from 1055 patients were collected, manually classified into five anatomic classes, and used to develop DL models that predict the anatomic class from single x-ray projections. VGG-16, Xception, and Inception v3 architectures were trained with 75% of the data, and the remaining 25% was used for testing and evaluation. To study the dependence of classification performance on dataset size, the training data were downsampled to various sizes. Gradient-weighted class activation maps (Grad-CAM) were generated using the model with the highest classification performance to identify regions with strong influence on CNN decisions. Results: The highest precision and recall values were achieved with VGG-16. One of the best-performing combinations was VGG-16 trained with 90 deg projections (mean class precision = 0.87). The training dataset could be reduced to ∼50% of its initial size without compromising classification performance. For correctly classified cases, Grad-CAMs weighted anatomically relevant regions more heavily. Conclusions: We identified the dependencies with the greatest influence on the classification performance of DL models for the studied task, and Grad-CAM enabled the identification of possible sources of class confusion.
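The Grad-CAM step used above has a compact core: weight each convolutional feature-map channel by its spatially averaged gradient, sum the weighted maps, and apply ReLU to keep only positive evidence. A minimal sketch given precomputed activations and gradients (how those are obtained from the network is framework-specific and omitted here):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations and the gradients
    of the class score w.r.t. those activations.
    Both inputs have shape (channels, H, W)."""
    activations = np.asarray(activations, dtype=float)
    gradients = np.asarray(gradients, dtype=float)
    weights = gradients.mean(axis=(1, 2))            # alpha_k per channel
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                        # normalise to [0, 1]
    return cam
```

The resulting low-resolution map is upsampled to the input size and overlaid on the projection, which is how the study visually checked whether the classifier attends to anatomically relevant regions.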
Collapse
Affiliation(s)
- Juan P Cruz-Bastida
- University of Chicago, Department of Radiology, Chicago, Illinois, United States.,University of Chicago, Department of Radiation and Cellular Oncology, Chicago, Illinois, United States
| | - Erik Pearson
- University of Chicago, Department of Radiation and Cellular Oncology, Chicago, Illinois, United States
| | - Hania Al-Hallaq
- University of Chicago, Department of Radiation and Cellular Oncology, Chicago, Illinois, United States
| |
Collapse
|
90
|
Roemer FW, Guermazi A, Demehri S, Wirth W, Kijowski R. Imaging in Osteoarthritis. Osteoarthritis Cartilage 2022; 30:913-934. [PMID: 34560261 DOI: 10.1016/j.joca.2021.04.018] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Revised: 04/22/2021] [Accepted: 04/28/2021] [Indexed: 02/02/2023]
Abstract
Osteoarthritis (OA) is the most frequent form of arthritis, with major implications at both the individual and public health care levels. The field of joint imaging, and particularly magnetic resonance imaging (MRI), has evolved rapidly due to the application of technical advances to clinical research. This narrative review provides an introduction to the different aspects of OA imaging, aimed at an audience of scientists, clinicians, students, industry employees, and others who are interested in OA but do not necessarily focus on it. The current role of radiography and recent advances in measuring joint space width will be discussed. The status of cartilage morphology assessment and evaluation of cartilage biochemical composition will be presented. Advances in quantitative three-dimensional morphologic cartilage assessment and semi-quantitative whole-organ assessment of OA will be reviewed. Although MRI has evolved as the most important imaging method used in OA research, other modalities such as ultrasound, computed tomography, and metabolic imaging play a complementary role and will also be discussed.
Collapse
Affiliation(s)
- F W Roemer
- Quantitative Imaging Center, Department of Radiology, Boston University School of Medicine, FGH Building, 3rd Floor, 820 Harrison Ave, Boston, MA, 02118, USA; Department of Radiology, Friedrich-Alexander University Erlangen-Nürnberg (FAU) and Universitätsklinikum Erlangen, Maximiliansplatz 3, Erlangen, 91054, Germany.
| | - A Guermazi
- Quantitative Imaging Center, Department of Radiology, Boston University School of Medicine, FGH Building, 3rd Floor, 820 Harrison Ave, Boston, MA, 02118, USA; Department of Radiology, VA Boston Healthcare System, 1400 VFW Pkwy, Suite 1B105, West Roxbury, MA, 02132, USA
| | - S Demehri
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 600 N. Wolf Street, Park 311, Baltimore, MD, 21287, USA
| | - W Wirth
- Institute of Anatomy, Paracelsus Medical University, Salzburg, Austria and Nuremberg, Germany; Ludwig Boltzmann Institute for Arthritis and Rehabilitation, Paracelsus Medical University Salzburg, Strubergasse 21, 5020, Salzburg, Austria; Chondrometrics, GmbH, Freilassing, Germany
| | - R Kijowski
- Department of Radiology, New York University Grossman School of Medicine, 550 1st Avenue, 3rd Floor, New York, NY, 10016, USA
| |
Collapse
|
91
|
van der Velden BH, Kuijf HJ, Gilhuijs KG, Viergever MA. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med Image Anal 2022; 79:102470. [DOI: 10.1016/j.media.2022.102470] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 03/15/2022] [Accepted: 05/02/2022] [Indexed: 12/11/2022]
|
92
|
Choi S, Cho SI, Ma M, Park S, Pereira S, Aum BJ, Shin S, Paeng K, Yoo D, Jung W, Ock CY, Lee SH, Choi YL, Chung JH, Mok TS, Kim H, Kim S. Artificial intelligence–powered programmed death ligand 1 analyser reduces interobserver variation in tumour proportion score for non–small cell lung cancer with better prediction of immunotherapy response. Eur J Cancer 2022; 170:17-26. [DOI: 10.1016/j.ejca.2022.04.011] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Revised: 03/10/2022] [Accepted: 04/04/2022] [Indexed: 12/23/2022]
|
93
|
Teng Q, Liu Z, Song Y, Han K, Lu Y. A survey on the interpretability of deep learning in medical diagnosis. MULTIMEDIA SYSTEMS 2022; 28:2335-2355. [PMID: 35789785 PMCID: PMC9243744 DOI: 10.1007/s00530-022-00960-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Accepted: 05/29/2022] [Indexed: 06/15/2023]
Abstract
Deep learning has demonstrated remarkable performance in the medical domain, with accuracy that rivals or even exceeds that of human experts. However, a significant problem is that these models are "black-box" structures: opaque, non-intuitive, and difficult for people to understand. This lack of interpretability, trust, and transparency creates a barrier to the application of deep learning models in clinical practice. To overcome this problem, several studies on interpretability have been proposed. In this paper, we therefore comprehensively review the interpretability of deep learning in medical diagnosis based on the current literature, including common interpretability methods used in the medical domain, various applications of interpretability for disease diagnosis, prevalent evaluation metrics, and several disease datasets. In addition, the challenges of interpretability and future research directions are discussed. To the best of our knowledge, this is the first time that the various applications of interpretability methods for disease diagnosis have been summarized.
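As a purely illustrative sketch of one perturbation-based interpretability method of the kind such surveys cover (this is not code from the cited paper; the toy model, image, and patch size are all invented), occlusion sensitivity masks each image patch in turn and records how much the model's score drops:

```python
import numpy as np

def occlusion_map(model, image, patch=4, baseline=0.0):
    """Model-agnostic occlusion sensitivity: mask each patch and record
    how much the model's score drops. A larger drop marks a region the
    model relies on more heavily."""
    h, w = image.shape
    ref = model(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = ref - model(masked)
    return heat

# Hypothetical "diagnostic model": scores the mean intensity of a fixed
# 8x8 region, standing in for a lesion detector.
def toy_model(img):
    return img[4:12, 4:12].mean()

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0          # bright synthetic "lesion"
heat = occlusion_map(toy_model, img)
# heat peaks over the lesion patches and stays at zero elsewhere
```

Because it only needs forward passes, this kind of probe applies to any black-box classifier, which is why variants of it recur across the medical-imaging interpretability literature.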
Collapse
Affiliation(s)
- Qiaoying Teng
- School of Computer Science, Jilin Normal University, Siping, 136000 China
| | - Zhe Liu
- School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, 212013 China
| | - Yuqing Song
- School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, 212013 China
| | - Kai Han
- School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, 212013 China
| | - Yang Lu
- School of Computer Science, Jilin Normal University, Siping, 136000 China
| |
Collapse
|
94
|
Trimpl MJ, Primakov S, Lambin P, Stride EPJ, Vallis KA, Gooding MJ. Beyond automatic medical image segmentation-the spectrum between fully manual and fully automatic delineation. Phys Med Biol 2022; 67. [PMID: 35523158 DOI: 10.1088/1361-6560/ac6d9c] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2022] [Accepted: 05/06/2022] [Indexed: 12/19/2022]
Abstract
Semi-automatic and fully automatic contouring tools have emerged as an alternative to fully manual segmentation, reducing the time spent contouring and increasing contour quality and consistency. Fully automatic segmentation in particular has seen exceptional improvements through the use of deep learning in recent years. These fully automatic methods may not require user interaction, but the resulting contours are often not suitable for clinical use without review by a clinician. Furthermore, they need large amounts of labelled data for training. This review presents alternatives to manual or fully automatic segmentation methods along a spectrum of variable user interactivity and data availability. The challenge lies in determining how much user interaction is necessary and how that interaction can be used most effectively. While deep learning is already widely used for fully automatic tools, interactive methods are only beginning to be transformed by it. Interaction between clinician and machine, via artificial intelligence, can go both ways, and this review presents the avenues being pursued to improve medical image segmentation.
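As a minimal illustration of the semi-automatic end of that spectrum (a toy sketch, not from the cited review; the seed point, tolerance, and image are invented), seeded region growing lets a single clinician click drive the delineation:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=0.1):
    """Semi-automatic segmentation: grow a region from a user-supplied
    seed pixel, accepting 4-connected neighbours whose intensity lies
    within `tol` of the seed value."""
    h, w = image.shape
    seed_val = image[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(image[ny, nx] - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

img = np.full((8, 8), 0.2)
img[2:6, 2:6] = 0.9            # homogeneous synthetic "organ"
mask = region_grow(img, (3, 3))
# mask selects exactly the 4x4 bright block containing the seed
```

The amount of interaction is tunable: one seed per structure here, but the same loop generalizes to multiple seeds or iterative corrections, which is precisely the interactivity spectrum the review maps out.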
Collapse
Affiliation(s)
- Michael J Trimpl
- Mirada Medical Ltd, Oxford, United Kingdom
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Oxford Institute for Radiation Oncology, University of Oxford, Oxford, United Kingdom
| | - Sergey Primakov
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University, Maastricht, The Netherlands
| | - Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University, Maastricht, The Netherlands
| | - Eleanor P J Stride
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
| | - Katherine A Vallis
- Oxford Institute for Radiation Oncology, University of Oxford, Oxford, United Kingdom
| | | |
Collapse
|
95
|
Barragán-Montero A, Bibal A, Dastarac MH, Draguet C, Valdés G, Nguyen D, Willems S, Vandewinckele L, Holmström M, Löfman F, Souris K, Sterpin E, Lee JA. Towards a safe and efficient clinical implementation of machine learning in radiation oncology by exploring model interpretability, explainability and data-model dependency. Phys Med Biol 2022; 67:10.1088/1361-6560/ac678a. [PMID: 35421855 PMCID: PMC9870296 DOI: 10.1088/1361-6560/ac678a] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 04/14/2022] [Indexed: 01/26/2023]
Abstract
The interest in machine learning (ML) has grown tremendously in recent years, partly due to the performance leap brought about by new deep learning techniques, convolutional neural networks for images, increased computational power, and the wider availability of large datasets. Most fields of medicine follow this trend, and radiation oncology is notably at the forefront, with a long tradition of using digital images and fully computerized workflows. ML models are driven by data and, in contrast with many statistical or physical models, they can be very large and complex, with countless generic parameters. This inevitably raises two questions: the tight dependence between the models and the datasets that feed them, and the interpretability of the models, which degrades as their complexity grows. Any problems in the data used to train a model will later be reflected in its performance. This, together with the low interpretability of ML models, makes their implementation into the clinical workflow particularly difficult. Building tools for risk assessment and quality assurance of ML models must therefore address two main points: interpretability and data-model dependency. After a joint introduction to both radiation oncology and ML, this paper reviews the main risks and current solutions when applying the latter to workflows in the former. Risks associated with data and models, as well as their interaction, are detailed. Next, the core concepts of interpretability, explainability, and data-model dependency are formally defined and illustrated with examples. Afterwards, a broad discussion covers key applications of ML in radiation oncology workflows as well as vendors' perspectives on the clinical implementation of ML.
Collapse
Affiliation(s)
- Ana Barragán-Montero
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
| | - Adrien Bibal
- PReCISE, NaDI Institute, Faculty of Computer Science, UNamur and CENTAL, ILC, UCLouvain, Belgium
| | - Margerie Huet Dastarac
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
| | - Camille Draguet
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
| | - Gilmer Valdés
- Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, United States of America
| | - Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, United States of America
| | - Siri Willems
- ESAT/PSI, KU Leuven, Belgium & MIRC, UZ Leuven, Belgium
| | | | | | | | - Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
| | - Edmond Sterpin
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
| | - John A Lee
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
| |
Collapse
|
96
|
Scalco E, Rizzo G, Mastropietro A. The stability of oncologic MRI radiomic features and the potential role of deep learning: a review. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac60b9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Accepted: 03/24/2022] [Indexed: 11/11/2022]
Abstract
The use of MRI radiomic models for the diagnosis, prognosis, and treatment-response prediction of tumors has been increasingly reported in the literature. However, their widespread adoption in clinics is hampered by issues related to feature stability. In the MRI radiomic workflow, the main factors that affect radiomic feature computation are found in the image acquisition and reconstruction phase, in the image pre-processing steps, and in the segmentation of the region of interest from which radiomic indices are extracted. Deep Neural Networks (DNNs), having shown their potential in medical image processing and analysis, are an attractive strategy to partially overcome the issues related to radiomic stability and mitigate their impact. In fact, DNN approaches can be prospectively integrated into the MRI radiomic workflow to improve image quality, obtain accurate and reproducible segmentations, and generate standardized images. In this review, DNN methods that can be included in the image processing steps of the radiomic workflow are described and discussed, in the light of a detailed analysis of the literature on MRI radiomic reliability.
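To make the segmentation-dependence point concrete (a hypothetical toy sketch, not from the cited review; the feature, the perturbation, and the image are invented), one can recompute a simple first-order radiomic feature under a perturbed contour and measure its drift:

```python
import numpy as np

def intensity_mean(image, mask):
    """A minimal first-order radiomic feature: mean intensity in the ROI."""
    return image[mask].mean()

def stability_under_contour(image, mask, mask_perturbed):
    """Relative change of a feature when the segmentation is perturbed --
    a crude stand-in for test-retest or inter-observer stability indices."""
    a = intensity_mean(image, mask)
    b = intensity_mean(image, mask_perturbed)
    return abs(a - b) / abs(a)

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0                      # synthetic lesion
mask = np.zeros((8, 8), bool)
mask[2:6, 2:6] = True                    # exact contour
mask_dilated = np.zeros((8, 8), bool)
mask_dilated[1:7, 1:7] = True            # over-segmented contour
drift = stability_under_contour(img, mask, mask_dilated)
# a one-pixel over-segmentation already shifts this feature by >50%
```

Even this trivial example shows why reproducible (e.g., DNN-based) segmentation matters: a single-pixel contour change can move a feature value substantially.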
Collapse
|
97
|
Zhang C, Moeller S, Demirel OB, Uğurbil K, Akçakaya M. Residual RAKI: A hybrid linear and non-linear approach for scan-specific k-space deep learning. Neuroimage 2022; 256:119248. [PMID: 35487456 PMCID: PMC9179026 DOI: 10.1016/j.neuroimage.2022.119248] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Revised: 04/07/2022] [Accepted: 04/23/2022] [Indexed: 10/31/2022] Open
Abstract
Parallel imaging is the most clinically used acceleration technique for magnetic resonance imaging (MRI), in part due to its easy inclusion into routine acquisitions. In k-space based parallel imaging reconstruction, sub-sampled k-space data are interpolated using linear convolutions. At high acceleration rates these methods suffer inherent noise amplification and reduced image quality. On the other hand, non-linear deep learning methods provide improved image quality at high acceleration, but the availability of training databases for different scans, as well as their interpretability, hinder their adoption. In this work, we present an extension of Robust Artificial-neural-networks for k-space Interpolation (RAKI), called residual-RAKI (rRAKI), which achieves scan-specific machine learning reconstruction using a hybrid linear and non-linear methodology. In rRAKI, non-linear CNNs are trained jointly with a linear convolution implemented via a skip connection. In effect, the linear part provides a baseline reconstruction, while the non-linear CNN that runs in parallel further reduces the artifacts and noise arising from the linear part. The explicit split between the linear and non-linear aspects of the reconstruction also helps improve interpretability compared to purely non-linear methods. Experiments were conducted on the publicly available fastMRI datasets, as well as high-resolution anatomical imaging, comparing GRAPPA and its variants, compressed sensing, RAKI, Scan-Specific Artifact Reduction in K-space (SPARK), and the proposed rRAKI. Additionally, highly-accelerated simultaneous multi-slice (SMS) functional MRI reconstructions were performed, where the proposed rRAKI was compared to Read-out SENSE-GRAPPA and RAKI. Our results show that the proposed rRAKI method substantially improves image quality compared to conventional parallel imaging, and offers sharper images than SPARK and ℓ1-SPIRiT. Furthermore, rRAKI shows improved preservation of time-varying dynamics compared to both parallel imaging and RAKI in highly-accelerated SMS fMRI.
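The skip-connection decomposition described above can be sketched structurally (this is only a schematic, not the cited method: the real rRAKI operates on multi-coil complex k-space with trained CNNs, whereas here a 1-D nearest-neighbour average stands in for the linear branch and an untrained placeholder for the non-linear one):

```python
import numpy as np

def linear_interp(ksub):
    """Stand-in for the linear (GRAPPA-like) branch: fill missing k-space
    lines by averaging the acquired neighbours (acceleration factor 2,
    even-indexed lines acquired)."""
    out = ksub.copy()
    out[1:-1:2] = 0.5 * (ksub[:-2:2] + ksub[2::2])
    return out

def residual_branch(ksub, weight=0.0):
    """Stand-in for the non-linear CNN branch; untrained here (weight=0),
    so the skip connection alone reproduces the linear baseline."""
    return weight * np.tanh(ksub)

def rraki_sketch(ksub):
    # skip connection: linear baseline + non-linear residual correction
    return linear_interp(ksub) + residual_branch(ksub)

k_full = np.cos(np.linspace(0, 3, 9))
k_sub = k_full.copy()
k_sub[1::2] = 0.0              # simulate 2x undersampling
recon = rraki_sketch(k_sub)
```

The structural point survives the simplification: the linear branch guarantees a sensible baseline, and the residual branch only has to model the (small) remaining error, which is what makes the hybrid easier to interpret than a purely non-linear reconstruction.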
Collapse
Affiliation(s)
- Chi Zhang
- Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| | - Steen Moeller
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| | - Omer Burak Demirel
- Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| | - Kâmil Uğurbil
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| | - Mehmet Akçakaya
- Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA.
| |
Collapse
|
98
|
Yin XX, Sun L, Fu Y, Lu R, Zhang Y. U-Net-Based Medical Image Segmentation. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:4189781. [PMID: 35463660 PMCID: PMC9033381 DOI: 10.1155/2022/4189781] [Citation(s) in RCA: 40] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Revised: 03/02/2022] [Accepted: 03/23/2022] [Indexed: 11/17/2022]
Abstract
Deep learning has been extensively applied to segmentation in medical imaging. U-Net, proposed in 2015, offers accurate segmentation of small targets and a scalable network architecture. With the increasing performance requirements for segmentation in medical imaging in recent years, U-Net has been cited academically more than 2500 times, and many scholars have continued to develop its architecture. This paper summarizes medical image segmentation technologies based on U-Net structure variants with respect to their structure, innovation, and efficiency; reviews and categorizes the related methodology; and introduces the loss functions, evaluation parameters, and modules commonly applied to segmentation in medical imaging, providing a good reference for future research.
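The architectural idea common to all the U-Net variants surveyed can be shown in skeleton form (a hypothetical topology-only sketch, not from the cited paper: convolutions and learned weights are omitted, max-pooling and nearest-neighbour upsampling stand in for the real contracting and expanding paths):

```python
import numpy as np

def down(x):
    """Encoder step: 2x2 max-pooling halves the spatial resolution."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def up(x):
    """Decoder step: nearest-neighbour upsampling doubles the resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_skeleton(x):
    """One-level U-Net topology: the skip connection concatenates
    full-resolution encoder features with the upsampled decoder features,
    restoring fine detail lost in the bottleneck."""
    skip = x                           # features kept for the skip path
    bottleneck = down(x)               # contracting path
    upsampled = up(bottleneck)         # expanding path
    return np.stack([skip, upsampled])  # channel-wise concatenation

x = np.arange(16, dtype=float).reshape(4, 4)
feats = unet_skeleton(x)
# feats[0] is the full-resolution skip; feats[1] the coarse, upsampled map
```

This skip concatenation is why U-Net segments small targets well: the decoder sees both the coarse semantic context and the original fine-grained detail, and most of the surveyed variants modify exactly these three components (encoder, decoder, skip path).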
Collapse
Affiliation(s)
- Xiao-Xia Yin
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
- College of Engineering and Science, Victoria University, Melbourne, VIC 8001, Australia
| | - Le Sun
- Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, China
| | - Yuhan Fu
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
| | - Ruiliang Lu
- Department of Radiology, The First People's Hospital of Foshan, Foshan 528000, China
| | - Yanchun Zhang
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
| |
Collapse
|
99
|
Clever Hans effect found in a widely used brain tumour MRI dataset. Med Image Anal 2022; 77:102368. [DOI: 10.1016/j.media.2022.102368] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 12/19/2021] [Accepted: 01/10/2022] [Indexed: 12/11/2022]
|
100
|
What Is Needed for Artificial Intelligence to Be Trusted? Am J Med 2022; 135:421-423. [PMID: 34861193 DOI: 10.1016/j.amjmed.2021.11.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Accepted: 11/03/2021] [Indexed: 11/24/2022]
|