51. Hathaway QA, Hogg JP, Lakhani DA. Need for Medical Student Education in Emerging Technologies and Artificial Intelligence: Fostering Enthusiasm, Rather Than Flight, From Specialties Most Affected by Emerging Technologies. Acad Radiol 2023; 30:1770-1771. [PMID: 36464546] [DOI: 10.1016/j.acra.2022.11.018]
Affiliation(s)
- Quincy A Hathaway
- School of Medicine, West Virginia University, 1 Medical Center Drive, Morgantown, WV, USA
- Jeffery P Hogg
- School of Medicine, West Virginia University, 1 Medical Center Drive, Morgantown, WV, USA; Department of Radiology, West Virginia University, 1 Medical Center Drive, Morgantown, WV, USA
- Dhairya A Lakhani
- Department of Radiology, West Virginia University, 1 Medical Center Drive, Morgantown, WV, USA
52. Ueda D, Matsumoto T, Ehara S, Yamamoto A, Walston SL, Ito A, Shimono T, Shiba M, Takeshita T, Fukuda D, Miki Y. Artificial intelligence-based model to classify cardiac functions from chest radiographs: a multi-institutional, retrospective model development and validation study. Lancet Digit Health 2023:S2589-7500(23)00107-3. [PMID: 37422342] [DOI: 10.1016/s2589-7500(23)00107-3]
Abstract
BACKGROUND Chest radiography is a common and widely available examination. Although cardiovascular structures, such as cardiac shadows and vessels, are visible on chest radiographs, the ability of these radiographs to estimate cardiac function and valvular disease is poorly understood. Using datasets from multiple institutions, we aimed to develop and validate a deep-learning model to simultaneously detect valvular disease and cardiac functions from chest radiographs.
METHODS In this model development and validation study, we trained, validated, and externally tested a deep learning-based model to classify left ventricular ejection fraction, tricuspid regurgitant velocity, mitral regurgitation, aortic stenosis, aortic regurgitation, mitral stenosis, tricuspid regurgitation, pulmonary regurgitation, and inferior vena cava dilation from chest radiographs. The chest radiographs and associated echocardiograms were collected from four institutions between April 1, 2013, and December 31, 2021: we used data from three sites (Osaka Metropolitan University Hospital, Osaka, Japan; Habikino Medical Center, Habikino, Japan; and Morimoto Hospital, Osaka, Japan) for training, validation, and internal testing, and data from one site (Kashiwara Municipal Hospital, Kashiwara, Japan) for external testing. We evaluated the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy.
FINDINGS We included 22 551 radiographs associated with 22 551 echocardiograms obtained from 16 946 patients. The external test dataset featured 3311 radiographs from 2617 patients with a mean age of 72 years (SD 15), of whom 49·8% were male and 50·2% were female. The AUC, accuracy, sensitivity, and specificity for this dataset were 0·92 (95% CI 0·90-0·95), 86% (85-87), 82% (75-87), and 86% (85-88) for classifying the left ventricular ejection fraction at a 40% cutoff; 0·85 (0·83-0·87), 75% (73-76), 83% (80-87), and 73% (71-75) for classifying the tricuspid regurgitant velocity at a 2·8 m/s cutoff; 0·89 (0·86-0·92), 85% (84-86), 82% (76-87), and 85% (84-86) for classifying mitral regurgitation at the none-mild versus moderate-severe cutoff; 0·83 (0·78-0·88), 73% (71-74), 79% (69-87), and 72% (71-74) for classifying aortic stenosis; 0·83 (0·79-0·87), 68% (67-70), 88% (81-92), and 67% (66-69) for classifying aortic regurgitation; 0·86 (0·67-1·00), 90% (89-91), 83% (36-100), and 90% (89-91) for classifying mitral stenosis; 0·92 (0·89-0·94), 83% (82-85), 87% (83-91), and 83% (82-84) for classifying tricuspid regurgitation; 0·86 (0·82-0·90), 69% (68-71), 91% (84-95), and 68% (67-70) for classifying pulmonary regurgitation; and 0·85 (0·81-0·89), 86% (85-88), 73% (65-81), and 87% (86-88) for classifying inferior vena cava dilation.
INTERPRETATION The deep learning-based model can accurately classify cardiac functions and valvular heart diseases using information from digital chest radiographs. This model can classify values typically obtained from echocardiography in a fraction of the time, with low system requirements and the potential to be continuously available in areas where echocardiography specialists are scarce or absent.
FUNDING None.
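As a hypothetical illustration of how the operating-point metrics reported above fit together, the sketch below computes AUC, sensitivity, specificity, and accuracy from synthetic labels and model probabilities (the data and the 0.5 threshold are invented for illustration, not the study's):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical binary labels (e.g. "LVEF below cutoff") and model probabilities.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
y_prob = np.array([0.9, 0.2, 0.7, 0.45, 0.55, 0.4, 0.6, 0.1, 0.35, 0.85])

# AUC summarizes ranking quality over all possible thresholds.
auc = roc_auc_score(y_true, y_prob)

# Sensitivity/specificity/accuracy require choosing one operating point.
y_pred = (y_prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / len(y_true)
```

AUC is threshold-free, while the other three metrics depend on the chosen cutoff, which is why the abstract reports them per task and per cutoff.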
Affiliation(s)
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan; Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
- Toshimasa Matsumoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan; Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
- Shoichi Ehara
- Department of Intensive Care Medicine, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Akira Yamamoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Shannon L Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Asahiro Ito
- Department of Cardiovascular Medicine, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Taro Shimono
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Masatsugu Shiba
- Department of Biofunctional Analysis, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan; Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
- Tohru Takeshita
- Department of Radiology, Osaka Habikino Medical Center, Habikino, Japan
- Daiju Fukuda
- Department of Cardiovascular Medicine, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
53. Calimano-Ramirez LF, Virarkar MK, Hernandez M, Ozdemir S, Kumar S, Gopireddy DR, Lall C, Balaji KC, Mete M, Gumus KZ. MRI-based nomograms and radiomics in presurgical prediction of extraprostatic extension in prostate cancer: a systematic review. Abdom Radiol (NY) 2023; 48:2379-2400. [PMID: 37142824] [DOI: 10.1007/s00261-023-03924-y]
Abstract
PURPOSE Prediction of extraprostatic extension (EPE) is essential for accurate surgical planning in prostate cancer (PCa). Radiomics based on magnetic resonance imaging (MRI) has shown potential to predict EPE. We aimed to evaluate studies proposing MRI-based nomograms and radiomics for EPE prediction and to assess the quality of the current radiomics literature.
METHODS We searched the PubMed, EMBASE, and SCOPUS databases for related articles using synonyms for MRI radiomics and nomograms to predict EPE. Two co-authors scored the quality of the radiomics literature using the Radiomics Quality Score (RQS). Inter-rater agreement was measured using the intraclass correlation coefficient (ICC) on the total RQS scores. We analyzed the characteristics of the studies and used ANOVAs to associate the area under the curve (AUC) with sample size, clinical and imaging variables, and RQS scores.
RESULTS We identified 33 studies: 22 nomograms and 11 radiomics analyses. The mean AUC for nomogram articles was 0.783, and no significant associations were found between AUC and sample size, clinical variables, or number of imaging variables. For radiomics articles, there were significant associations between number of lesions and AUC (p < 0.013). The average RQS total score was 15.91/36 (44%). Across the radiomics workflow, region-of-interest segmentation, feature selection, and model building produced the widest range of results. The qualities the studies lacked most were phantom tests for scanner variabilities, temporal variability, external validation datasets, prospective designs, cost-effectiveness analysis, and open science.
CONCLUSION Utilizing MRI-based radiomics to predict EPE in PCa patients demonstrates promising outcomes. However, quality improvement and standardization of the radiomics workflow are needed.
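The RESULTS above single out segmentation, feature selection, and model building as the steps that drive variability across radiomics studies. A minimal sketch of such a workflow, with synthetic stand-in features and an assumed logistic-regression classifier (nothing here comes from the reviewed studies); keeping selection inside the pipeline ensures each cross-validation fold selects features on its training split only:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-in for radiomic features from segmented MRI lesions
# (rows = patients, columns = shape/texture features).
X = rng.normal(size=(120, 50))
# Synthetic EPE label driven by the first two features plus noise.
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=120) > 0).astype(int)

# Feature selection and model building happen inside the pipeline, so no
# information from validation folds leaks into the selection step.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),
    LogisticRegression(max_iter=1000),
)
mean_auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
```

Performing selection outside cross-validation is one of the leakage patterns that quality checklists such as the RQS are meant to catch.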
Affiliation(s)
- Luis F Calimano-Ramirez
- Department of Radiology, University of Florida College of Medicine Jacksonville, Jacksonville, FL, 32209, USA
- Mayur K Virarkar
- Department of Radiology, University of Florida College of Medicine Jacksonville, Jacksonville, FL, 32209, USA
- Mauricio Hernandez
- Department of Radiology, University of Florida College of Medicine Jacksonville, Jacksonville, FL, 32209, USA
- Savas Ozdemir
- Department of Radiology, University of Florida College of Medicine Jacksonville, Jacksonville, FL, 32209, USA
- Sindhu Kumar
- Department of Radiology, University of Florida College of Medicine Jacksonville, Jacksonville, FL, 32209, USA
- Dheeraj R Gopireddy
- Department of Radiology, University of Florida College of Medicine Jacksonville, Jacksonville, FL, 32209, USA
- Chandana Lall
- Department of Radiology, University of Florida College of Medicine Jacksonville, Jacksonville, FL, 32209, USA
- K C Balaji
- Department of Urology, University of Florida College of Medicine, Jacksonville, FL, 32209, USA
- Mutlu Mete
- Department of Computer Science and Information System, Texas A&M University-Commerce, Commerce, TX, 75428, USA
- Kazim Z Gumus
- Department of Radiology, University of Florida College of Medicine Jacksonville, Jacksonville, FL, 32209, USA
54. Schiebler ML, Glide-Hurst C. Synthetic Images Are Here to Stay. Radiology 2023; 308:e231098. [PMID: 37404147] [PMCID: PMC10374936] [DOI: 10.1148/radiol.231098]
Affiliation(s)
- Mark L. Schiebler
- From the Department of Radiology (M.L.S.) and Department of Human Oncology and Medical Physics (C.G.H.), University of Wisconsin, 600 Highland Ave, E3/378 Clinical Science Center, Madison, WI 53792
- Carri Glide-Hurst
- From the Department of Radiology (M.L.S.) and Department of Human Oncology and Medical Physics (C.G.H.), University of Wisconsin, 600 Highland Ave, E3/378 Clinical Science Center, Madison, WI 53792
55. Kazmierski M, Welch M, Kim S, McIntosh C, Rey-McIntyre K, Huang SH, Patel T, Tadic T, Milosevic M, Liu FF, Ryczkowski A, Kazmierska J, Ye Z, Plana D, Aerts HJ, Kann BH, Bratman SV, Hope AJ, Haibe-Kains B. Multi-institutional Prognostic Modeling in Head and Neck Cancer: Evaluating Impact and Generalizability of Deep Learning and Radiomics. Cancer Res Commun 2023; 3:1140-1151. [PMID: 37397861] [PMCID: PMC10309070] [DOI: 10.1158/2767-9764.crc-22-0152]
Abstract
Artificial intelligence (AI) and machine learning (ML) are becoming critical in developing and deploying personalized medicine and targeted clinical trials. Recent advances in ML have enabled the integration of wider ranges of data, including both medical records and imaging (radiomics). However, the development of prognostic models is complex, as no modeling strategy is universally superior to others, and validation of developed models requires large and diverse datasets to demonstrate that prognostic models developed (regardless of method) from one dataset are applicable to other datasets, both internally and externally. Using a retrospective dataset of 2,552 patients from a single institution and a strict evaluation framework that included external validation on three external patient cohorts (873 patients), we crowdsourced the development of ML models to predict overall survival in head and neck cancer (HNC) using electronic medical records (EMR) and pretreatment radiological images. To assess the relative contributions of radiomics in predicting HNC prognosis, we compared 12 different models using imaging and/or EMR data. The model with the highest accuracy used multitask learning on clinical data and tumor volume, achieving high prognostic accuracy for 2-year and lifetime survival prediction and outperforming models relying on clinical data only, engineered radiomics, or complex deep neural network architectures. However, when we attempted to extend the best performing models from this large training dataset to other institutions, we observed significant reductions in model performance in those datasets, highlighting the importance of detailed population-based reporting for AI/ML model utility and stronger validation frameworks.
We have developed highly prognostic models for overall survival in HNC using EMRs and pretreatment radiological images based on a large, retrospective dataset of 2,552 patients from our institution. Diverse ML approaches were used by independent investigators. The model with the highest accuracy used multitask learning on clinical data and tumor volume. External validation of the top three performing models on three datasets (873 patients) with significant differences in the distributions of clinical and demographic variables demonstrated significant decreases in model performance.
SIGNIFICANCE ML combined with simple prognostic factors outperformed multiple advanced CT radiomics and deep learning methods. ML models provided diverse solutions for prognosis of patients with HNC, but their prognostic value is affected by differences in patient populations and requires extensive validation.
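A toy sketch of the internal-versus-external validation gap described above, using synthetic cohorts in which an assumed feature-outcome relationship differs at the external site (all variables and numbers are illustrative, not the study's data or method):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def make_cohort(n, w0):
    # Two stand-in "clinical" features; w0 controls how the first feature
    # relates to outcome, mimicking an institution-specific difference.
    X = rng.normal(size=(n, 2))
    logits = w0 * X[:, 0] - X[:, 1]
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
    return X, y

X_train, y_train = make_cohort(2000, w0=1.5)
X_int, y_int = make_cohort(500, w0=1.5)    # internal test: same population
X_ext, y_ext = make_cohort(500, w0=-0.5)   # external cohort: shifted relationship

clf = LogisticRegression().fit(X_train, y_train)
auc_int = roc_auc_score(y_int, clf.predict_proba(X_int)[:, 1])
auc_ext = roc_auc_score(y_ext, clf.predict_proba(X_ext)[:, 1])
```

When the learned coefficients no longer match the external population, discrimination drops even though internal performance looks strong, which is the behavior the authors observed at scale.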
Affiliation(s)
- Michal Kazmierski
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Mattea Welch
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- TECHNA Institute, Toronto, Ontario, Canada
- Sejin Kim
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Chris McIntosh
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- TECHNA Institute, Toronto, Ontario, Canada
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Katrina Rey-McIntyre
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Shao Hui Huang
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Ontario, Canada
- Tirth Patel
- TECHNA Institute, Toronto, Ontario, Canada
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Tony Tadic
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Ontario, Canada
- Michael Milosevic
- TECHNA Institute, Toronto, Ontario, Canada
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Ontario, Canada
- Fei-Fei Liu
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Ontario, Canada
- Adam Ryczkowski
- Department of Medical Physics, Greater Poland Cancer Centre, Poznan, Poland
- Department of Electroradiology, University of Medical Sciences, Poznan, Poland
- Joanna Kazmierska
- Department of Electroradiology, University of Medical Sciences, Poznan, Poland
- Department of Radiotherapy II, Greater Poland Cancer Centre, Poznan, Poland
- Zezhong Ye
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, Massachusetts
- Department of Radiation Oncology, Dana-Farber Cancer Institute / Brigham and Women's Hospital, Boston, Massachusetts
- Deborah Plana
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, Massachusetts
- Department of Radiation Oncology, Dana-Farber Cancer Institute / Brigham and Women's Hospital, Boston, Massachusetts
- Hugo J.W.L. Aerts
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, Massachusetts
- Department of Radiation Oncology, Dana-Farber Cancer Institute / Brigham and Women's Hospital, Boston, Massachusetts
- Radiology and Nuclear Medicine, CARIM and GROW, Maastricht University, Maastricht, the Netherlands
- Benjamin H. Kann
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, Massachusetts
- Department of Radiation Oncology, Dana-Farber Cancer Institute / Brigham and Women's Hospital, Boston, Massachusetts
- Scott V. Bratman
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Ontario, Canada
- Andrew J. Hope
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Ontario, Canada
- Benjamin Haibe-Kains
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, Toronto, Ontario, Canada
56. Agrawal A, Khatri GD, Khurana B, Sodickson AD, Liang Y, Dreizin D. A survey of ASER members on artificial intelligence in emergency radiology: trends, perceptions, and expectations. Emerg Radiol 2023; 30:267-277. [PMID: 36913061] [PMCID: PMC10362990] [DOI: 10.1007/s10140-023-02121-0]
Abstract
PURPOSE There is a growing body of diagnostic performance studies for emergency radiology-related artificial intelligence/machine learning (AI/ML) tools; however, little is known about user preferences, concerns, experiences, expectations, and the degree of penetration of AI tools in emergency radiology. Our aim is to conduct a survey of the current trends, perceptions, and expectations regarding AI among American Society of Emergency Radiology (ASER) members.
METHODS An anonymous and voluntary online survey questionnaire was e-mailed to all ASER members, followed by two reminder e-mails. A descriptive analysis of the data was conducted, and results summarized.
RESULTS A total of 113 members responded (response rate 12%). The majority were attending radiologists (90%) with greater than 10 years' experience (80%) and from an academic practice (65%). Most (55%) reported use of commercial AI CAD tools in their practice. Workflow prioritization based on pathology detection, injury or disease severity grading and classification, quantitative visualization, and auto-population of structured reports were identified as high-value tasks. Respondents overwhelmingly indicated a need for explainable and verifiable tools (87%) and the need for transparency in the development process (80%). Most respondents did not feel that AI would reduce the need for emergency radiologists in the next two decades (72%) or diminish interest in fellowship programs (58%). Negative perceptions pertained to potential for automation bias (23%), over-diagnosis (16%), poor generalizability (15%), negative impact on training (11%), and impediments to workflow (10%).
CONCLUSION ASER member respondents are in general optimistic about the impact of AI on the practice of emergency radiology and on the popularity of emergency radiology as a subspecialty. The majority expect to see transparent and explainable AI models with the radiologist as the decision-maker.
Affiliation(s)
- Anjali Agrawal
- New Delhi operations, Teleradiology Solutions, Delhi, India
- Garvit D Khatri
- Nuclear Medicine, Department of Radiology, University of Washington School of Medicine, Seattle, WA, USA
- Bharti Khurana
- Emergency Radiology, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Aaron D Sodickson
- Emergency Radiology, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Yuanyuan Liang
- Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
- David Dreizin
- Trauma and Emergency Radiology, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA
57. Kalra S, Wen J, Cresswell JC, Volkovs M, Tizhoosh HR. Decentralized federated learning through proxy model sharing. Nat Commun 2023; 14:2899. [PMID: 37217476] [DOI: 10.1038/s41467-023-38569-4]
Abstract
Institutions in highly regulated domains such as finance and healthcare often have restrictive rules around data sharing. Federated learning is a distributed learning framework that enables multi-institutional collaborations on decentralized data with improved protection for each collaborator's data privacy. In this paper, we propose a communication-efficient scheme for decentralized federated learning called ProxyFL, or proxy-based federated learning. Each participant in ProxyFL maintains two models: a private model and a publicly shared proxy model designed to protect the participant's privacy. Proxy models allow efficient information exchange among participants without the need for a centralized server. The proposed method eliminates a significant limitation of canonical federated learning by allowing model heterogeneity; each participant can have a private model with any architecture. Furthermore, our protocol for communication by proxy leads to stronger privacy guarantees, as shown by differential privacy analysis. Experiments on popular image datasets and a cancer diagnostic problem using high-quality gigapixel histology whole slide images show that ProxyFL can outperform existing alternatives with much less communication overhead and stronger privacy.
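A minimal toy sketch of the proxy idea, under strong simplifying assumptions (vector "models", a hand-rolled distillation step, a fixed ring topology; none of this is the paper's implementation): each peer trains a private model that never leaves the institution and exchanges only its proxy with a neighbour, with no central server involved.

```python
import numpy as np

rng = np.random.default_rng(0)

class Participant:
    def __init__(self, dim):
        self.private = rng.normal(size=dim)  # stays local, any "architecture"
        self.proxy = rng.normal(size=dim)    # the only object ever shared

    def local_step(self, target):
        # Toy "training": move both models toward a local objective, then
        # distill knowledge from the private model into the proxy.
        self.private += 0.1 * (target - self.private)
        self.proxy += 0.1 * (target - self.proxy)
        self.proxy += 0.05 * (self.private - self.proxy)

def exchange(peers):
    # Decentralized ring exchange: each peer averages its proxy with the
    # proxy received from its neighbour; no server aggregates anything.
    received = [p.proxy.copy() for p in peers]
    for i, p in enumerate(peers):
        p.proxy = 0.5 * (p.proxy + received[i - 1])

peers = [Participant(dim=4) for _ in range(3)]
targets = [rng.normal(size=4) for _ in peers]  # each peer's local objective
for _ in range(60):
    for p, t in zip(peers, targets):
        p.local_step(t)
    exchange(peers)
```

After training, each private model fits its own local objective while the exchanged proxies have mixed toward one another, which is the separation (private capacity, shared communication channel) that the scheme relies on.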
Affiliation(s)
- Shivam Kalra
- Layer 6 AI, Toronto, ON, Canada
- Kimia Lab, University of Waterloo, Toronto, ON, Canada
- Vector Institute, Toronto, ON, Canada
- Junfeng Wen
- Carleton University, School of Computer Science, Ottawa, ON, Canada
- H R Tizhoosh
- Kimia Lab, University of Waterloo, Toronto, ON, Canada
- Vector Institute, Toronto, ON, Canada
- Rhazes Lab, Dept. of AI & Informatics, Mayo Clinic, Rochester, MN, USA
58. Krzywicki T, Brona P, Zbrzezny AM, Grzybowski AE. A Global Review of Publicly Available Datasets Containing Fundus Images: Characteristics, Barriers to Access, Usability, and Generalizability. J Clin Med 2023; 12:3587. [PMID: 37240693] [DOI: 10.3390/jcm12103587]
Abstract
This article provides a comprehensive and up-to-date overview of the repositories that contain color fundus images. We analyzed them regarding availability and legality, presented the datasets' characteristics, and identified labeled and unlabeled image sets. This study aimed to compile all publicly available color fundus image datasets into a central catalog.
Affiliation(s)
- Tomasz Krzywicki
- Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland
- Piotr Brona
- Department of Ophthalmology, Poznan City Hospital, 61-285 Poznań, Poland
- Agnieszka M Zbrzezny
- Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland
- Faculty of Design, SWPS University of Social Sciences and Humanities, Chodakowska 19/31, 03-815 Warsaw, Poland
- Andrzej E Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, 60-836 Poznań, Poland
59. Khosravi P, Schweitzer M. Artificial intelligence in neuroradiology: a scoping review of some ethical challenges. Front Radiol 2023; 3:1149461. [PMID: 37492387] [PMCID: PMC10365008] [DOI: 10.3389/fradi.2023.1149461]
Abstract
Artificial intelligence (AI) has great potential to increase accuracy and efficiency in many aspects of neuroradiology. It provides substantial opportunities for insight into brain pathophysiology, for developing models to inform treatment decisions, and for improving current prognostication and diagnostic algorithms. Concurrently, the autonomous use of AI models introduces ethical challenges regarding the scope of informed consent, risks associated with data privacy and protection, potential database biases, and the responsibility and liability that might arise. In this manuscript, we first provide a brief overview of AI methods used in neuroradiology and then segue into key methodological and ethical challenges. Specifically, we discuss the ethical principles affected by AI approaches to human neuroscience and the provisions that might be imposed in this domain to ensure that the benefits of AI frameworks remain in alignment with ethics in research and healthcare in the future.
Affiliation(s)
- Pegah Khosravi
- Department of Biological Sciences, New York City College of Technology, CUNY, New York City, NY, United States
- Mark Schweitzer
- Office of the Vice President for Health Affairs, Wayne State University, Detroit, MI, United States
60. Yamada A, Kamagata K, Hirata K, Ito R, Nakaura T, Ueda D, Fujita S, Fushimi Y, Fujima N, Matsui Y, Tatsugami F, Nozaki T, Fujioka T, Yanagawa M, Tsuboyama T, Kawamura M, Naganawa S. Clinical applications of artificial intelligence in liver imaging. Radiol Med 2023. [PMID: 37165151] [DOI: 10.1007/s11547-023-01638-1]
Abstract
This review outlines the current status and challenges of the clinical applications of artificial intelligence in liver imaging using computed tomography or magnetic resonance imaging, based on a topic analysis of PubMed search results using latent Dirichlet allocation (LDA). LDA revealed that "segmentation," "hepatocellular carcinoma and radiomics," "metastasis," "fibrosis," and "reconstruction" were the current main topic keywords. Automatic liver segmentation technology using deep learning is beginning to assume new clinical significance as part of whole-body composition analysis. It has also been applied to the screening of large populations and the acquisition of training data for machine learning models, and it has resulted in the development of imaging biomarkers that have a significant impact on important clinical issues, such as the estimation of liver fibrosis and of the recurrence and prognosis of malignant tumors. Deep learning reconstruction is expanding as a new clinical application of artificial intelligence and has shown results in reducing contrast and radiation doses. However, much evidence is still missing, such as external validation of machine learning models and evaluation of the diagnostic performance for specific diseases using deep learning reconstruction, suggesting that the clinical application of these technologies is still in development.
Affiliation(s)
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, Matsumoto, Nagano, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-Ku, Tokyo, Japan
- Kenji Hirata
- Department of Nuclear Medicine, Hokkaido University Hospital, Sapporo, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Chuo-Ku, Kumamoto, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Abeno-Ku, Osaka, Japan
- Shohei Fujita
- Department of Radiology, University of Tokyo, Tokyo, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Sakyoku, Kyoto, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Kita-Ku, Okayama, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, Minami-Ku, Hiroshima City, Hiroshima, Japan
- Taiki Nozaki
- Department of Radiology, St. Luke's International Hospital, Tokyo, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, Suita-City, Osaka, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, Suita-City, Osaka, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
61. Neri E, Aghakhanyan G, Zerunian M, Gandolfo N, Grassi R, Miele V, Giovagnoni A, Laghi A. Explainable AI in radiology: a white paper of the Italian Society of Medical and Interventional Radiology. Radiol Med 2023. [PMID: 37155000] [DOI: 10.1007/s11547-023-01634-5]
Abstract
The term Explainable Artificial Intelligence (xAI) groups together the body of scientific knowledge developed in the search for methods that explain the inner logic of an AI algorithm and its model inference on the basis of knowledge-based interpretability. xAI is now generally recognized as a core area of AI. A variety of xAI methods are currently available to researchers; nonetheless, a comprehensive classification of these methods is still lacking. In addition, there is no consensus among researchers on what exactly constitutes an explanation, or on which salient properties must be considered to make it understandable for every end user. The SIRM presents this xAI white paper, which is intended to help radiologists, medical practitioners, and scientists understand the emerging field of xAI, the black-box problem behind the success of AI, the xAI methods that turn the black box into a glass box, and the role and responsibilities of radiologists in the appropriate use of AI technology. Because AI is changing and evolving rapidly, a definitive conclusion or solution is far from being defined. However, one of our greatest responsibilities is to keep up with this change in a critical manner. In fact, ignoring and discrediting the advent of AI a priori will not curb its use but could result in its application without awareness. Learning about this important technological change will therefore allow us to put AI at our service, and at the service of patients, in a conscious way, pushing this paradigm shift as far as it benefits us.
Affiliation(s)
- Emanuele Neri: Academic Radiology, Department of Translational Research and of New Surgical and Medical Technology, University of Pisa, Pisa, Italy
- Gayane Aghakhanyan: Academic Radiology, Department of Translational Research and of New Surgical and Medical Technology, University of Pisa, Pisa, Italy
- Marta Zerunian: Medical-Surgical Sciences and Translational Medicine, Sapienza University of Rome, Sant'Andrea Hospital, Rome, Italy
- Nicoletta Gandolfo: Diagnostic Imaging Department, Villa Scassi Hospital-ASL 3, Corso Scassi 1, Genoa, Italy
- Roberto Grassi: Radiology Unit, Università degli Studi della Campania Luigi Vanvitelli, Naples, Italy
- Vittorio Miele: Department of Radiology, Careggi University Hospital, Florence, Italy
- Andrea Giovagnoni: Department of Radiological Sciences, Radiology Clinic, Azienda Ospedaliera Universitaria, Ospedali Riuniti di Ancona, Ancona, Italy
- Andrea Laghi: Medical-Surgical Sciences and Translational Medicine, Sapienza University of Rome, Sant'Andrea Hospital, Rome, Italy

62
Chapiro J. Explainable AI for Prostate MRI: Don't Trust, Verify. Radiology 2023; 307:e230574. [PMID: 37039689 PMCID: PMC10323286 DOI: 10.1148/radiol.230574]
Affiliation(s)
- Julius Chapiro: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 789 Howard Ave, CB363H, New Haven, CT 06519

63
Tanguay W, Acar P, Fine B, Abdolell M, Gong B, Cadrin-Chênevert A, Chartrand-Lefebvre C, Chalaoui J, Gorgos A, Chin ASL, Prénovault J, Guilbert F, Létourneau-Guillon L, Chong J, Tang A. Assessment of Radiology Artificial Intelligence Software: A Validation and Evaluation Framework. Can Assoc Radiol J 2023; 74:326-333. [PMID: 36341574 DOI: 10.1177/08465371221135760]
Abstract
Artificial intelligence (AI) software in radiology is becoming increasingly prevalent, and performance is improving rapidly, with new applications for given use cases being developed continuously, oftentimes with development and validation occurring in parallel. Several guidelines have provided reporting standards for publications of AI-based research in medicine and radiology. Yet there is an unmet need for recommendations on the assessment of AI software before adoption and after commercialization. As the radiology AI ecosystem continues to grow and mature, formalization of system assessment and evaluation is paramount to ensure patient safety, relevance and support to clinical workflows, and optimal allocation of limited AI development and validation resources before broader implementation into clinical practice. To fulfil these needs, we provide a glossary for AI software types, use cases, and roles within the clinical workflow; list healthcare needs, key performance indicators, and required information about software prior to assessment; and lay out examples of software performance metrics per software category. This conceptual framework is intended to streamline communication with the AI software industry and provide healthcare decision makers and radiologists with tools to assess the potential use of these software tools. The proposed software evaluation framework lays the foundation for a radiologist-led prospective validation network of radiology AI software. Learning points: The rapid expansion of AI applications in radiology requires standardization of AI software specification, classification, and evaluation. The Canadian Association of Radiologists' AI Tech & Apps Working Group proposes an AI specification document format and supports the implementation of a clinical expert evaluation process for radiology AI software.
Affiliation(s)
- William Tanguay: Centre hospitalier de l'Université de Montréal, Montréal, QC, Canada; Department of Radiology, Radiation Oncology and Nuclear Medicine, Université de Montréal, Montréal, QC, Canada
- Philippe Acar: Centre hospitalier de l'Université de Montréal, Montréal, QC, Canada; Department of Radiology, Radiation Oncology and Nuclear Medicine, Université de Montréal, Montréal, QC, Canada
- Benjamin Fine: Department of Diagnostic Imaging, Trillium Health Partners, Mississauga, ON, Canada; Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Mohamed Abdolell: Department of Radiology, Dalhousie University, Halifax, NS, Canada
- Bo Gong: Department of Radiology, Vancouver General Hospital, University of British Columbia, Vancouver, BC, Canada
- Carl Chartrand-Lefebvre: Centre hospitalier de l'Université de Montréal, Montréal, QC, Canada; Department of Radiology, Radiation Oncology and Nuclear Medicine, Université de Montréal, Montréal, QC, Canada
- Jean Chalaoui: Centre hospitalier de l'Université de Montréal, Montréal, QC, Canada; Department of Radiology, Radiation Oncology and Nuclear Medicine, Université de Montréal, Montréal, QC, Canada
- Andrei Gorgos: Centre hospitalier de l'Université de Montréal, Montréal, QC, Canada; Department of Radiology, Radiation Oncology and Nuclear Medicine, Université de Montréal, Montréal, QC, Canada
- Anne Shu-Lei Chin: Centre hospitalier de l'Université de Montréal, Montréal, QC, Canada; Department of Radiology, Radiation Oncology and Nuclear Medicine, Université de Montréal, Montréal, QC, Canada
- Julie Prénovault: Centre hospitalier de l'Université de Montréal, Montréal, QC, Canada; Department of Radiology, Radiation Oncology and Nuclear Medicine, Université de Montréal, Montréal, QC, Canada
- François Guilbert: Centre hospitalier de l'Université de Montréal, Montréal, QC, Canada; Department of Radiology, Radiation Oncology and Nuclear Medicine, Université de Montréal, Montréal, QC, Canada
- Laurent Létourneau-Guillon: Centre hospitalier de l'Université de Montréal, Montréal, QC, Canada; Department of Radiology, Radiation Oncology and Nuclear Medicine, Université de Montréal, Montréal, QC, Canada
- Jaron Chong: Department of Medical Imaging, Western University, London, ON, Canada
- An Tang: Centre hospitalier de l'Université de Montréal, Montréal, QC, Canada; Department of Radiology, Radiation Oncology and Nuclear Medicine, Université de Montréal, Montréal, QC, Canada

64
Pham N, Hill V, Rauschecker A, Lui Y, Niogi S, Fillipi CG, Chang P, Zaharchuk G, Wintermark M. Critical Appraisal of Artificial Intelligence-Enabled Imaging Tools Using the Levels of Evidence System. AJNR Am J Neuroradiol 2023; 44:E21-E28. [PMID: 37080722 PMCID: PMC10171388 DOI: 10.3174/ajnr.a7850]
Abstract
Clinical adoption of an artificial intelligence-enabled imaging tool requires critical appraisal of its life cycle from development to implementation by using a systematic, standardized, and objective approach that can verify both its technical and clinical efficacy. Toward this concerted effort, the ASFNR/ASNR Artificial Intelligence Workshop Technology Working Group is proposing a hierarchical evaluation system based on the quality, type, and amount of scientific evidence that the artificial intelligence-enabled tool can demonstrate for each component of its life cycle. The current proposal is modeled after the levels of evidence in medicine, with the uppermost level of the hierarchy showing the strongest evidence for potential impact on patient care and health care outcomes. The intended goal of establishing an evidence-based evaluation system is to encourage transparency, foster an understanding of the creation of artificial intelligence tools and the artificial intelligence decision-making process, and report the relevant data on the efficacy of the artificial intelligence tools that are developed. The proposed system is an essential step in working toward a more formalized, clinically validated, and regulated framework for the safe and effective deployment of artificial intelligence imaging applications that will be used in clinical practice.
Affiliation(s)
- N Pham: Department of Radiology, Stanford School of Medicine, Palo Alto, California
- V Hill: Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, Illinois
- A Rauschecker: Department of Radiology, University of California, San Francisco, San Francisco, California
- Y Lui: Department of Radiology, NYU Grossman School of Medicine, New York, New York
- S Niogi: Department of Radiology, Weill Cornell Medicine, New York, New York
- C G Fillipi: Department of Radiology, Tufts University School of Medicine, Boston, Massachusetts
- P Chang: Department of Radiology, University of California, Irvine, Irvine, California
- G Zaharchuk: Department of Radiology, Stanford School of Medicine, Palo Alto, California
- M Wintermark: Department of Neuroradiology, The University of Texas MD Anderson Cancer Center, Houston, Texas

65
Yoshida K, Tanabe Y, Nishiyama H, Matsuda T, Toritani H, Kitamura T, Sakai S, Watamori K, Takao M, Kimura E, Kido T. Feasibility of Bone Mineral Density and Bone Microarchitecture Assessment Using Deep Learning With a Convolutional Neural Network. J Comput Assist Tomogr 2023; 47:467-474. [PMID: 37185012 PMCID: PMC10184800 DOI: 10.1097/rct.0000000000001437]
Abstract
OBJECTIVES We evaluated the feasibility of using deep learning with a convolutional neural network for predicting bone mineral density (BMD) and bone microarchitecture from conventional computed tomography (CT) images acquired by multivendor scanners. METHODS We enrolled 402 patients who underwent noncontrast CT examinations, including the L1-L4 vertebrae, and dual-energy x-ray absorptiometry (DXA) examination. Among these, 280 patients (3360 sagittal vertebral images), 70 patients (280 sagittal vertebral images), and 52 patients (208 sagittal vertebral images) were assigned to the training data set for deep learning model development, the validation data set, and the test data set, respectively. Bone mineral density and the trabecular bone score (TBS), an index of bone microarchitecture, were assessed by DXA. Deep learning-predicted values (BMD-DL and TBS-DL) were obtained with a convolutional neural network (ResNet50). Pearson correlation tests assessed the correlation between BMD-DL and BMD, and between TBS-DL and TBS. The diagnostic performance of BMD-DL for osteopenia/osteoporosis and that of TBS-DL for bone microarchitecture impairment were evaluated using receiver operating characteristic curve analysis. RESULTS BMD-DL and BMD correlated strongly (r = 0.81, P < 0.01), whereas TBS-DL and TBS correlated moderately (r = 0.54, P < 0.01). The sensitivity and specificity of BMD-DL were 93% and 90% for identifying osteopenia, and 100% and 94% for identifying osteoporosis, respectively. The sensitivity and specificity of TBS-DL for identifying patients with bone microarchitecture impairment were both 73%. CONCLUSIONS BMD-DL and TBS-DL derived from conventional CT images could identify patients who should undergo DXA, and could serve as a gatekeeper tool for detecting latent osteoporosis/osteopenia or bone microarchitecture impairment.
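As an aside, the Pearson correlation coefficient used in studies like this one to compare deep learning-predicted values against the DXA reference is simple to compute directly. A minimal stdlib-only sketch; the BMD values below are made-up illustrative numbers, not data from the study:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical BMD values (g/cm^2): DXA ground truth vs. model prediction
dxa = [0.82, 0.95, 1.10, 0.78, 1.02]
pred = [0.80, 0.97, 1.05, 0.81, 1.00]
r = pearson_r(dxa, pred)
```

A strong positive r (close to 1) indicates the predictions track the reference measurements, which is the basis of the "strong correlation" claim in the abstract.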
Affiliation(s)
- Shinichiro Sakai: Orthopedic Surgery, Ehime University Graduate School of Medicine
- Masaki Takao: Orthopedic Surgery, Ehime University Graduate School of Medicine

66
Zhang X, Miao J, Yang J, Liu C, Huang J, Song J, Xie D, Yue C, Kong W, Hu J, Luo W, Liu S, Li F, Zi W. DWI-Based Radiomics Predicts the Functional Outcome of Endovascular Treatment in Acute Basilar Artery Occlusion. AJNR Am J Neuroradiol 2023; 44:536-542. [PMID: 37080720 PMCID: PMC10171394 DOI: 10.3174/ajnr.a7851]
Abstract
BACKGROUND AND PURPOSE Endovascular treatment is a reference treatment for acute basilar artery occlusion (ABAO). However, no established and specific methods are available for the preoperative screening of patients with ABAO suitable for endovascular treatment. This study explores the potential value of DWI-based radiomics in predicting the functional outcomes of endovascular treatment in ABAO. MATERIALS AND METHODS Patients with ABAO treated with endovascular treatment from the BASILAR registry (91 patients in the training cohort) and the hospitals in the Northwest of China (31 patients for the external testing cohort) were included in this study. The Mann-Whitney U test, random forests algorithm, and least absolute shrinkage and selection operator were used to reduce the feature dimension. A machine learning model was developed on the basis of the training cohort to predict the prognosis of endovascular treatment. The performance of the model was evaluated on the independent external testing cohort. RESULTS A subset of radiomics features (n = 6) was used to predict the functional outcomes in patients with ABAO. The areas under the receiver operating characteristic curve of the radiomics model were 0.870 and 0.781 in the training cohort and testing cohort, respectively. The accuracy of the radiomics model was 77.4%, with a sensitivity of 78.9%, specificity of 75%, positive predictive value of 83.3%, and negative predictive value of 69.2% in the testing cohort. CONCLUSIONS DWI-based radiomics can predict the prognosis of endovascular treatment in patients with ABAO, hence allowing a potentially better selection of patients who are most likely to benefit from this treatment.
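The accuracy, sensitivity, specificity, PPV, and NPV reported above all derive from a single 2x2 confusion matrix. A minimal sketch of those formulas; the counts below are hypothetical, chosen only to be consistent with a 31-patient external test cohort, and are not taken from the study:

```python
def confusion_metrics(tp, fp, tn, fn):
    """Derive standard diagnostic metrics from 2x2 confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for a 31-patient test cohort
m = confusion_metrics(tp=15, fp=3, tn=9, fn=4)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of good outcomes in the cohort, which is why external testing on a differently composed population is informative.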
Affiliation(s)
- X Zhang, J Miao, J Yang, C Liu, J Huang, J Song, D Xie, C Yue, W Kong, J Hu, W Luo, S Liu, F Li, W Zi: Department of Neurology, Xinqiao Hospital and The Second Affiliated Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- X Zhang (additional): Department of Neurology, The Affiliated Hospital of Northwest University Xi'an No. 3 Hospital, Xi'an, China
- J Miao (additional): Department of Neurology, Xianyang Hospital of Yan'an University, Xianyang, China

67
Lee SB, Hong Y, Cho YJ, Jeong D, Lee J, Yoon SH, Lee S, Choi YH, Cheon JE. Deep Learning-Based Computed Tomography Image Standardization to Improve Generalizability of Deep Learning-Based Hepatic Segmentation. Korean J Radiol 2023; 24:294-304. [PMID: 36907592 PMCID: PMC10067697 DOI: 10.3348/kjr.2022.0588]
Abstract
OBJECTIVE We aimed to investigate whether image standardization using deep learning-based computed tomography (CT) image conversion would improve the performance of deep learning-based automated hepatic segmentation across various reconstruction methods. MATERIALS AND METHODS We collected contrast-enhanced dual-energy CT of the abdomen that was obtained using various reconstruction methods, including filtered back projection, iterative reconstruction, optimum contrast, and monoenergetic images at 40, 60, and 80 keV. A deep learning-based image conversion algorithm was developed to standardize the CT images using 142 CT examinations (128 for training and 14 for tuning). A separate set of 43 CT examinations from 42 patients (mean age, 10.1 years) was used as the test data. A commercial software program (MEDIP PRO v2.0.0.0, MEDICALIP Co. Ltd.) based on 2D U-Net was used to create liver segmentation masks with liver volume. The original 80 keV images were used as the ground truth. We used the paired t-test to compare the segmentation performance, in terms of the Dice similarity coefficient (DSC) and the difference ratio of the liver volume relative to the ground-truth volume, before and after image standardization. The concordance correlation coefficient (CCC) was used to assess the agreement between the segmented liver volume and the ground-truth volume. RESULTS The original CT images showed variable and poor segmentation performance. The standardized images achieved significantly higher DSCs for liver segmentation than the original images (DSC [original, 5.40%-91.27%] vs. [standardized, 93.16%-96.74%], all P < 0.001). The difference ratio of liver volume also decreased significantly after image conversion (original, 9.84%-91.37% vs. standardized, 1.99%-4.41%). In all protocols, CCCs improved after image conversion (original, -0.006-0.964 vs. standardized, 0.990-0.998).
CONCLUSION Deep learning-based CT image standardization can improve the performance of automated hepatic segmentation using CT images reconstructed using various methods. Deep learning-based CT image conversion may have the potential to improve the generalizability of the segmentation network.
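The Dice similarity coefficient and volume difference ratio used to score the liver masks above are both one-liners over binary masks. A minimal sketch on toy flattened masks (the arrays and volumes are illustrative, not from the study):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (0/1 sequences)."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

def volume_difference_ratio(vol_pred, vol_gt):
    """Absolute segmented-volume difference relative to the ground-truth volume."""
    return abs(vol_pred - vol_gt) / vol_gt

# Toy flattened masks: predicted segmentation vs. ground truth
pred = [0, 1, 1, 1, 0, 0]
gt   = [0, 1, 1, 0, 0, 0]
dsc = dice_coefficient(pred, gt)  # 2 * 2 / (3 + 2) = 0.8
```

In practice the masks are 3-D voxel arrays and the volumes come from voxel counts times voxel size, but the metric definitions are exactly these.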
Affiliation(s)
- Seul Bi Lee: Department of Radiology, Seoul National University Hospital, Seoul, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Youngtaek Hong: CONNECT-AI R&D Center, Yonsei University College of Medicine, Seoul, Korea
- Yeon Jin Cho: Department of Radiology, Seoul National University Hospital, Seoul, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Dawun Jeong: CONNECT-AI R&D Center, Yonsei University College of Medicine, Seoul, Korea; Brain Korea 21 PLUS Project for Medical Science, Yonsei University, Seoul, Korea
- Jina Lee: CONNECT-AI R&D Center, Yonsei University College of Medicine, Seoul, Korea; Brain Korea 21 PLUS Project for Medical Science, Yonsei University, Seoul, Korea
- Soon Ho Yoon: Department of Radiology, Seoul National University Hospital, Seoul, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea; MEDICALIP Co. Ltd., Seoul, Korea
- Seunghyun Lee: Department of Radiology, Seoul National University Hospital, Seoul, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Young Hun Choi: Department of Radiology, Seoul National University Hospital, Seoul, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Jung-Eun Cheon: Department of Radiology, Seoul National University Hospital, Seoul, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Korea

68
Panico C, Avesani G, Zormpas-Petridis K, Rundo L, Nero C, Sala E. Radiomics and Radiogenomics of Ovarian Cancer. Radiol Clin North Am 2023; 61:749-760. [PMID: 37169435 DOI: 10.1016/j.rcl.2023.02.006]
Abstract
Ovarian cancer, one of the deadliest gynecologic malignancies, is characterized by high intra- and inter-site genomic and phenotypic heterogeneity. The traditional information provided by the conventional interpretation of diagnostic imaging studies cannot adequately represent this heterogeneity. Radiomics analyses can capture the complex patterns related to the microstructure of the tissues and provide quantitative information about them. This review outlines how radiomics and its integration with other quantitative biological information, like genomics and proteomics, can impact the clinical management of ovarian cancer.
69
Implementation of artificial intelligence in thoracic imaging-a what, how, and why guide from the European Society of Thoracic Imaging (ESTI). Eur Radiol 2023. [PMID: 36729173 PMCID: PMC9892666 DOI: 10.1007/s00330-023-09409-2]
Abstract
This statement from the European Society of Thoracic Imaging (ESTI) explains and summarises the essentials for understanding and implementing artificial intelligence (AI) in clinical practice in thoracic radiology departments. The document discusses the current AI scientific evidence in thoracic imaging, its potential clinical utility, implementation and costs, training requirements and validation, its effect on the training of new radiologists, post-implementation issues, and medico-legal and ethical issues. All of these issues have to be addressed and overcome for AI to become implemented clinically in thoracic radiology. KEY POINTS: • Assessing the datasets used for training and validation of the AI system is essential. • A departmental strategy and business plan, which includes continuing quality assurance of the AI system and a sustainable financial plan, is important for successful implementation. • Awareness of the negative effect on the training of new radiologists is vital.
70
Hsia CCW, Bates JHT, Driehuys B, Fain SB, Goldin JG, Hoffman EA, Hogg JC, Levin DL, Lynch DA, Ochs M, Parraga G, Prisk GK, Smith BM, Tawhai M, Vidal Melo MF, Woods JC, Hopkins SR. Quantitative Imaging Metrics for the Assessment of Pulmonary Pathophysiology: An Official American Thoracic Society and Fleischner Society Joint Workshop Report. Ann Am Thorac Soc 2023; 20:161-195. [PMID: 36723475 PMCID: PMC9989862 DOI: 10.1513/annalsats.202211-915st]
Abstract
Multiple thoracic imaging modalities have been developed to link structure to function in the diagnosis and monitoring of lung disease. Volumetric computed tomography (CT) renders three-dimensional maps of lung structures and may be combined with positron emission tomography (PET) to obtain dynamic physiological data. Magnetic resonance imaging (MRI) using ultrashort-echo time (UTE) sequences has improved signal detection from lung parenchyma; contrast agents are used to deduce airway function, ventilation-perfusion-diffusion, and mechanics. Proton MRI can measure regional ventilation-perfusion ratio. Quantitative imaging (QI)-derived endpoints have been developed to identify structure-function phenotypes, including air-blood-tissue volume partition, bronchovascular remodeling, emphysema, fibrosis, and textural patterns indicating architectural alteration. Coregistered landmarks on paired images obtained at different lung volumes are used to infer airway caliber, air trapping, gas and blood transport, compliance, and deformation. This document summarizes fundamental "good practice" stereological principles in QI study design and analysis; evaluates technical capabilities and limitations of common imaging modalities; and assesses major QI endpoints regarding underlying assumptions and limitations, the ability to detect and stratify heterogeneous, overlapping pathophysiology, and the ability to monitor disease progression and therapeutic response, correlated with, and complementary to, functional indices. The goal is to promote unbiased quantification and interpretation of in vivo imaging data, compare metrics obtained using different QI modalities to ensure accurate and reproducible metric derivation, and avoid misrepresentation of inferred physiological processes. The role of imaging-based computational modeling in advancing these goals is emphasized. Fundamental principles outlined herein are critical for all forms of QI irrespective of acquisition modality or disease entity.
71
[Methodological evaluation of systematic reviews based on the use of artificial intelligence systems in chest radiography]. Radiologia 2023. [DOI: 10.1016/j.rx.2023.01.007]
72
Mohaideen K, Negi A, Sennimalai K. Clarifications regarding convolutional neural networks-based automatic segmentation of pharyngeal airway sections. Am J Orthod Dentofacial Orthop 2023; 163:143. [PMID: 36710055 DOI: 10.1016/j.ajodo.2022.11.005]
73
Hadjiiski L, Cha K, Chan HP, Drukker K, Morra L, Näppi JJ, Sahiner B, Yoshida H, Chen Q, Deserno TM, Greenspan H, Huisman H, Huo Z, Mazurchuk R, Petrick N, Regge D, Samala R, Summers RM, Suzuki K, Tourassi G, Vergara D, Armato SG. AAPM task group report 273: Recommendations on best practices for AI and machine learning for computer-aided diagnosis in medical imaging. Med Phys 2023; 50:e1-e24. [PMID: 36565447 DOI: 10.1002/mp.16188]
Abstract
Rapid advances in artificial intelligence (AI) and machine learning, and specifically in deep learning (DL) techniques, have enabled broad application of these methods in health care. The promise of the DL approach has spurred further interest in computer-aided diagnosis (CAD) development and applications using both "traditional" machine learning methods and newer DL-based methods. We use the term CAD-AI to refer to this expanded clinical decision support environment that uses traditional and DL-based AI methods. Numerous studies have been published to date on the development of machine learning tools for computer-aided, or AI-assisted, clinical tasks. However, most of these machine learning models are not ready for clinical deployment. It is of paramount importance to ensure that a clinical decision support tool undergoes proper training and rigorous validation of its generalizability and robustness before adoption for patient care in the clinic. To address these important issues, the American Association of Physicists in Medicine (AAPM) Computer-Aided Image Analysis Subcommittee (CADSC) is charged, in part, to develop recommendations on practices and standards for the development and performance assessment of computer-aided decision support systems. The committee has previously published two opinion papers on the evaluation of CAD systems and issues associated with user training and quality assurance of these systems in the clinic. With machine learning techniques continuing to evolve and CAD applications expanding to new stages of the patient care process, the current task group report considers the broader issues common to the development of most, if not all, CAD-AI applications and their translation from the bench to the clinic. The goal is to bring attention to the proper training and validation of machine learning algorithms that may improve their generalizability and reliability and accelerate the adoption of CAD-AI systems for clinical decision support.
Affiliation(s)
- Lubomir Hadjiiski: Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- Kenny Cha: U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Heang-Ping Chan: Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- Karen Drukker: Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Lia Morra: Department of Control and Computer Engineering, Politecnico di Torino, Torino, Italy
- Janne J Näppi: 3D Imaging Research, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Berkman Sahiner: U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Hiroyuki Yoshida: 3D Imaging Research, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Quan Chen: Department of Radiation Medicine, University of Kentucky, Lexington, Kentucky, USA
- Thomas M Deserno: Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Braunschweig, Germany
- Hayit Greenspan: Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel; Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Henkjan Huisman: Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands
- Zhimin Huo: Tencent America, Palo Alto, California, USA
- Richard Mazurchuk: Division of Cancer Prevention, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA
- Daniele Regge: Radiology Unit, Candiolo Cancer Institute, FPO-IRCCS, Candiolo, Italy; Department of Surgical Sciences, University of Turin, Turin, Italy
- Ravi Samala: U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Ronald M Summers: Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Maryland, USA
- Kenji Suzuki: Institute of Innovative Research, Tokyo Institute of Technology, Tokyo, Japan
- Daniel Vergara: Department of Radiology, Yale New Haven Hospital, New Haven, Connecticut, USA
- Samuel G Armato: Department of Radiology, University of Chicago, Chicago, Illinois, USA
74
Park SH, Han K, Jang HY, Park JE, Lee JG, Kim DW, Choi J. Methods for Clinical Evaluation of Artificial Intelligence Algorithms for Medical Diagnosis. Radiology 2023; 306:20-31. [PMID: 36346314] [DOI: 10.1148/radiol.220182]
Abstract
Adequate clinical evaluation of artificial intelligence (AI) algorithms before adoption in practice is critical. Clinical evaluation aims to confirm acceptable AI performance through adequate external testing and confirm the benefits of AI-assisted care compared with conventional care through appropriately designed and conducted studies, for which prospective studies are desirable. This article explains some of the fundamental methodological points that should be considered when designing and appraising the clinical evaluation of AI algorithms for medical diagnosis. The specific topics addressed include the following: (a) the importance of external testing of AI algorithms and strategies for conducting the external testing effectively, (b) the various metrics and graphical methods for evaluating the AI performance as well as essential methodological points to note in using and interpreting them, (c) paired study designs primarily for comparative performance evaluation of conventional and AI-assisted diagnoses, (d) parallel study designs primarily for evaluating the effect of AI intervention with an emphasis on randomized clinical trials, and (e) up-to-date guidelines for reporting clinical studies on AI, with an emphasis on guidelines registered in the EQUATOR Network library. Sound methodological knowledge of these topics will aid the design, execution, reporting, and appraisal of clinical evaluation of AI.
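As a concrete illustration of the performance metrics this article discusses (this is not code from the article; the labels and scores are invented for demonstration), the AUC and operating-point sensitivity/specificity can be computed from scratch:

```python
# Illustrative sketch only: AUC plus sensitivity/specificity at a fixed
# decision threshold, computed from scratch on made-up labels and scores.

def auc(labels, scores):
    """AUC as the probability that a random positive case scores higher
    than a random negative one, counting ties as one half
    (the Mann-Whitney U formulation)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(labels, scores, threshold):
    """Operating-point metrics at one decision threshold."""
    tp = sum(y == 1 and s >= threshold for y, s in zip(labels, scores))
    tn = sum(y == 0 and s < threshold for y, s in zip(labels, scores))
    return tp / labels.count(1), tn / labels.count(0)

labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]
print(round(auc(labels, scores), 3))  # 0.917
sens, spec = sensitivity_specificity(labels, scores, 0.5)
print(round(sens, 3), round(spec, 3))  # 0.667 0.75
```

Note how the AUC is threshold-free while sensitivity and specificity depend on the chosen operating point, which is exactly why the article treats them as complementary measures.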
Affiliation(s)
- Seong Ho Park, Kyunghwa Han, Hye Young Jang, Ji Eun Park, June-Goo Lee, Dong Wook Kim, Jaesoon Choi
- From the Department of Radiology and Research Institute of Radiology (S.H.P., J.E.P., D.W.K.) and Department of Biomedical Engineering (J.C.), Asan Medical Center, University of Ulsan College of Medicine, 88, Olympic-ro 43-gil, Songpa-gu, Seoul 05505, South Korea; Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, South Korea (K.H.); Department of Radiology, National Cancer Center, Goyang, South Korea (H.Y.J.); and Biomedical Engineering Research Center, Asan Institute for Life Sciences, University of Ulsan College of Medicine, Seoul, South Korea (J.G.L.)
75
Boeken T, Feydy J, Lecler A, Soyer P, Feydy A, Barat M, Duron L. Artificial intelligence in diagnostic and interventional radiology: Where are we now? Diagn Interv Imaging 2023; 104:1-5. [PMID: 36494290] [DOI: 10.1016/j.diii.2022.11.004]
Abstract
The emergence of massively parallel yet affordable computing devices has been a game changer for research in the field of artificial intelligence (AI). In addition, dramatic investment from the web giants has fostered the development of a high-quality software stack. Going forward, the combination of faster computers, dedicated software libraries, and the widespread availability of data has opened the door to more flexibility in the design of AI models. Radiomics is a process used to discover new imaging biomarkers; it has multiple applications in radiology and can be used in conjunction with AI. AI can be applied throughout the various processes of diagnostic imaging, including data acquisition, reconstruction, analysis, and reporting. Today, the concept of the "AI-augmented" radiologist is preferred, in many indications, over the idea that AI will replace radiologists. Current evidence supports the assumption that AI-assisted radiologists work better and faster. Interventional radiology is becoming a data-rich specialty in which the entire procedure is fully recorded in a standardized DICOM format and accessible via standard picture archiving and communication systems. No other interventional specialty can boast such readiness. In this setting, interventional radiology could lead the development of AI-powered applications in the broader interventional community. This article provides an update on the current status of radiomics and AI research, analyzes upcoming challenges, and discusses the main applications of AI in interventional radiology to help radiologists better understand and critically appraise articles reporting AI in medical imaging.
Affiliation(s)
- Tom Boeken: Université Paris Cité, Faculté de Médecine, Paris 75006, France; Department of Vascular and Oncological Interventional Radiology, Hôpital Européen Georges Pompidou, APHP, Paris 75015, France; HeKA team, INRIA, Paris 75012, France
- Augustin Lecler: Université Paris Cité, Faculté de Médecine, Paris 75006, France; Department of Radiology, Rothschild Foundation Hospital, Paris 75019, France
- Philippe Soyer: Université Paris Cité, Faculté de Médecine, Paris 75006, France; Department of Radiology, Hôpital Cochin, APHP, Paris 75014, France
- Antoine Feydy: Université Paris Cité, Faculté de Médecine, Paris 75006, France; Department of Radiology, Hôpital Cochin, APHP, Paris 75014, France
- Maxime Barat: Université Paris Cité, Faculté de Médecine, Paris 75006, France; Department of Radiology, Hôpital Cochin, APHP, Paris 75014, France
- Loïc Duron: Université Paris Cité, Faculté de Médecine, Paris 75006, France; Department of Radiology, Rothschild Foundation Hospital, Paris 75019, France
76
Park SH. Looking Back at 2022 and ahead to 2023 for the Korean Journal of Radiology. Korean J Radiol 2023; 24:15-18. [PMID: 36606615] [PMCID: PMC9830144] [DOI: 10.3348/kjr.2022.0963]
Affiliation(s)
- Seong Ho Park: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
77
Radiomics and Deep Learning for Disease Detection in Musculoskeletal Radiology: An Overview of Novel MRI- and CT-Based Approaches. Invest Radiol 2023; 58:3-13. [PMID: 36070548] [DOI: 10.1097/rli.0000000000000907]
Abstract
Radiomics and machine learning-based methods offer exciting opportunities for improving diagnostic performance and efficiency in musculoskeletal radiology for various tasks, including acute injuries, chronic conditions, spinal abnormalities, and neoplasms. While early radiomics-based methods were often limited to a smaller number of higher-order image feature extractions, applying machine learning-based analytic models, multifactorial correlations, and classifiers now permits big data processing and testing thousands of features to identify relevant markers. A growing number of novel deep learning-based methods describe magnetic resonance imaging- and computed tomography-based algorithms for diagnosing anterior cruciate ligament tears, meniscus tears, articular cartilage defects, rotator cuff tears, fractures, metastatic skeletal disease, and soft tissue tumors. Initial radiomics and deep learning techniques have focused on binary detection tasks, such as determining the presence or absence of a single abnormality and differentiation of benign versus malignant. Newer-generation algorithms aim to include practically relevant multiclass characterization of detected abnormalities, such as typing and malignancy grading of neoplasms. So-called delta-radiomics assess tumor features before and after treatment, with temporal changes of radiomics features serving as surrogate markers for tumor responses to treatment. New approaches also predict treatment success rates, surgical resection completeness, and recurrence risk. Practice-relevant goals for the next generation of algorithms include diagnostic whole-organ and advanced classification capabilities. Important research objectives to fill current knowledge gaps include well-designed research studies to understand how diagnostic performances and suggested efficiency gains of isolated research settings translate into routine daily clinical practice. This article summarizes current radiomics- and machine learning-based magnetic resonance imaging and computed tomography approaches for musculoskeletal disease detection and offers a perspective on future goals and objectives.
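The delta-radiomics idea described above (tracking how each feature changes between timepoints as a surrogate response marker) reduces to a simple per-feature transformation. A minimal sketch, with invented feature names and values purely for illustration:

```python
def delta_features(pre, post):
    """Delta-radiomics sketch: signed relative change of each radiomic
    feature between a pre-treatment and a post-treatment scan."""
    return {name: (post[name] - pre[name]) / pre[name] for name in pre}

# Invented feature values for one lesion at two timepoints
pre = {"volume_mm3": 1200.0, "glcm_entropy": 4.0}
post = {"volume_mm3": 900.0, "glcm_entropy": 4.4}
print(delta_features(pre, post))  # volume shrinks by 25%, entropy rises by 10%
```

In practice the resulting delta values, rather than the raw features, would feed the downstream classifier or survival model.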
78
Wang Q, Xu J, Wang A, Chen Y, Wang T, Chen D, Zhang J, Brismar TB. Systematic review of machine learning-based radiomics approach for predicting microsatellite instability status in colorectal cancer. Radiol Med 2023; 128:136-148. [PMID: 36648615] [PMCID: PMC9938810] [DOI: 10.1007/s11547-023-01593-x]
Abstract
This study aimed to systematically summarize the performance of machine learning-based radiomics models in the prediction of microsatellite instability (MSI) in patients with colorectal cancer (CRC). It was conducted according to the preferred reporting items for a systematic review and meta-analysis of diagnostic test accuracy studies (PRISMA-DTA) guideline and was registered at the PROSPERO website under identifier CRD42022295787. Systematic literature searching was conducted in the PubMed, Embase, Web of Science, and Cochrane Library databases up to November 10, 2022. Research that applied radiomics analysis to preoperative CT/MRI/PET-CT images for predicting MSI status in CRC patients with no history of anti-tumor therapies was eligible. The radiomics quality score (RQS) and Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) were applied to evaluate the research quality (full score 100%). Twelve studies with 4,320 patients were included. All studies were retrospective, and only four had an external validation cohort. The median incidence of MSI was 19% (range 8-34%). The area under the receiver operating characteristic curve of the models ranged from 0.78 to 0.96 (median 0.83) in the external validation cohort. The median sensitivity was 0.76 (range 0.32-1.00), and the median specificity was 0.87 (range 0.69-1.00). The median RQS score was 38% (range 14-50%), and half of the studies showed high risk in patient selection as evaluated by QUADAS-2. In conclusion, while radiomics based on pretreatment imaging showed high performance in the prediction of MSI status in CRC, it does not yet appear ready for clinical use due to insufficient methodological quality.
Affiliation(s)
- Qiang Wang: Division of Medical Imaging and Technology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden; Department of Radiology, Karolinska University Hospital Huddinge, Room 601, Novum PI 6, Hiss F, Hälsovägen 7, 141 86 Huddinge, Stockholm, Sweden
- Jianhua Xu: Department of General Surgery, Songshan Hospital, Chongqing, China
- Anrong Wang: Department of Vascular Surgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China; Department of Interventional Therapy, People’s Hospital of Dianjiang County, Chongqing, China
- Yi Chen: Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- Tian Wang: Department of Gastroenterology, Chongqing General Hospital, Chongqing, China
- Danyu Chen: Department of Gastroenterology and Hepatology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China
- Jiaxing Zhang: Department of Pharmacy, Guizhou Provincial People’s Hospital, Guiyang, China
- Torkel B. Brismar: Division of Medical Imaging and Technology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden; Department of Radiology, Karolinska University Hospital Huddinge, Room 601, Novum PI 6, Hiss F, Hälsovägen 7, 141 86 Huddinge, Stockholm, Sweden
79
Barat M, Gaillard M, Cottereau AS, Fishman EK, Assié G, Jouinot A, Hoeffel C, Soyer P, Dohan A. Artificial intelligence in adrenal imaging: A critical review of current applications. Diagn Interv Imaging 2023; 104:37-42. [PMID: 36163169] [DOI: 10.1016/j.diii.2022.09.003]
Abstract
In the elective field of adrenal imaging, artificial intelligence (AI) can be used for adrenal lesion detection, characterization, hypersecreting syndrome management, and patient follow-up. Although a perfect AI tool covering all required steps from detection to analysis does not yet exist, multiple AI algorithms have been developed and tested with encouraging results. However, AI in this setting is still at an early stage: most published studies about AI in adrenal gland imaging report preliminary results that do not yet have daily applications in clinical practice. In this review, recent developments and current results of AI in the field of adrenal imaging are presented, and the limitations and future perspectives of AI are discussed.
Affiliation(s)
- Maxime Barat: Department of Radiology, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, Paris 75014, France; Université Paris Cité, Faculté de Médecine, Paris 75006, France
- Martin Gaillard: Université Paris Cité, Faculté de Médecine, Paris 75006, France; Department of Digestive, Hepatobiliary and Pancreatic Surgery, Hôpital Cochin, AP-HP, Paris 75014, France
- Anne-Ségolène Cottereau: Université Paris Cité, Faculté de Médecine, Paris 75006, France; Department of Nuclear Medicine, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, Paris 75014, France
- Elliot K Fishman: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Guillaume Assié: Université Paris Cité, Faculté de Médecine, Paris 75006, France; Department of Endocrinology, Center for Rare Adrenal Diseases, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, Paris 75014, France
- Anne Jouinot: Université Paris Cité, Faculté de Médecine, Paris 75006, France; Department of Endocrinology, Center for Rare Adrenal Diseases, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, Paris 75014, France
- Philippe Soyer: Department of Radiology, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, Paris 75014, France; Université Paris Cité, Faculté de Médecine, Paris 75006, France
- Anthony Dohan: Department of Radiology, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, Paris 75014, France; Université Paris Cité, Faculté de Médecine, Paris 75006, France
80
Aaltonen HL, O'Reilly MK, Linnau KF, Dong Q, Johnston SK, Jarvik JG, Cross NM. m2ABQ-a proposed refinement of the modified algorithm-based qualitative classification of osteoporotic vertebral fractures. Osteoporos Int 2023; 34:137-145. [PMID: 36336755] [PMCID: PMC10246552] [DOI: 10.1007/s00198-022-06546-0]
Abstract
Currently, there is no reproducible, widely accepted gold standard to classify osteoporotic vertebral body fractures (OVFs). The purpose of this study is to refine a method with clear rules to classify OVFs for machine learning purposes. The method was found to have moderate interobserver agreement that improved with training. INTRODUCTION The current methods to classify osteoporotic vertebral body fractures are considered ambiguous; there is no reproducible, accepted gold standard. The purpose of this study is to refine classification methodology by introducing clear, unambiguous rules and a refined flowchart to allow consistent classification of osteoporotic vertebral body fractures. METHODS We developed a set of rules and refinements, called m2ABQ, to classify vertebrae into five categories. A fracture-enriched database of thoracic and lumbar spine radiographs of patients 65 years of age and older was retrospectively obtained from clinical institutional radiology records using natural language processing. Five raters independently classified each vertebral body using the m2ABQ system. After each annotation round, consensus sessions that included all raters were held to discuss and finalize a consensus annotation for each vertebral body where individual raters' evaluations differed. This process led to further refinement and development of the rules. RESULTS Each annotation round showed an increase in Fleiss kappa both for presence vs absence of fracture, from 0.62 (0.56-0.68) to 0.70 (0.65-0.75), and for the whole m2ABQ scale, from 0.29 (0.25-0.33) to 0.54 (0.51-0.58). CONCLUSION The m2ABQ system demonstrates moderate interobserver agreement and practical feasibility for classifying osteoporotic vertebral body fractures. Future studies comparing the method with existing classifications are warranted, as well as further development of its use for machine learning applications.
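For readers unfamiliar with the agreement statistic reported in this abstract, Fleiss' kappa for multiple raters can be sketched in a few lines. This is illustrative code, not from the study; the demo matrix is the classic textbook dataset of 14 raters assigning 10 subjects to 5 categories:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa. counts[i][j] = number of raters who assigned
    subject i to category j; every subject needs the same rater total."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    # observed agreement: mean per-subject proportion of agreeing rater pairs
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_subjects
    # chance agreement from the marginal category proportions
    col_totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p_e = sum((t / (n_subjects * n_raters)) ** 2 for t in col_totals)
    return (p_bar - p_e) / (1 - p_e)

# Classic worked example: 14 raters, 10 subjects, 5 categories
ratings = [
    [0, 0, 0, 0, 14], [0, 2, 6, 4, 2], [0, 0, 3, 5, 6], [0, 3, 9, 2, 0],
    [2, 2, 8, 1, 1], [7, 7, 0, 0, 0], [3, 2, 6, 3, 0], [2, 5, 3, 2, 2],
    [6, 5, 2, 1, 0], [0, 2, 2, 3, 7],
]
print(round(fleiss_kappa(ratings), 3))  # 0.21
```

Values around 0.3 (as reported for the full m2ABQ scale before training) indicate only fair agreement, which is why the consensus-and-retraining rounds mattered.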
Affiliation(s)
- H L Aaltonen: Department of Radiology, University of Washington, Seattle, WA, USA; Department of Medical Imaging and Physiology, Lund University, Malmo, Sweden
- M K O'Reilly: Department of Radiology, University of Washington, Seattle, WA, USA; Department of Radiology, University of Limerick Hospital Group, Limerick, Ireland; Clinical Learning, Evidence, And Research [CLEAR] Center for Musculoskeletal Disorders, Seattle, USA
- K F Linnau: Department of Radiology, University of Washington, Seattle, WA, USA
- Q Dong: Clinical Learning, Evidence, And Research [CLEAR] Center for Musculoskeletal Disorders, Seattle, USA; Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, USA
- S K Johnston: Department of Radiology, University of Washington, Seattle, WA, USA; Clinical Learning, Evidence, And Research [CLEAR] Center for Musculoskeletal Disorders, Seattle, USA
- J G Jarvik: Department of Radiology, University of Washington, Seattle, WA, USA; Clinical Learning, Evidence, And Research [CLEAR] Center for Musculoskeletal Disorders, Seattle, USA; Department of Neurological Surgery, University of Washington, Seattle, WA, USA
- N M Cross: Department of Radiology, University of Washington, Seattle, WA, USA; Clinical Learning, Evidence, And Research [CLEAR] Center for Musculoskeletal Disorders, Seattle, USA
81
Woznicki P, Siedek F, van Gastel MD, dos Santos DP, Arjune S, Karner LA, Meyer F, Caldeira LL, Persigehl T, Gansevoort RT, Grundmann F, Baessler B, Müller RU. Automated Kidney and Liver Segmentation in MR Images in Patients with Autosomal Dominant Polycystic Kidney Disease: A Multicenter Study. Kidney360 2022; 3:2048-2058. [PMID: 36591351] [PMCID: PMC9802567] [DOI: 10.34067/kid.0003192022]
Abstract
Background Imaging-based total kidney volume (TKV) and total liver volume (TLV) are major prognostic factors in autosomal dominant polycystic kidney disease (ADPKD) and end points for clinical trials. However, volumetry is time consuming and reader dependent in clinical practice. Our aim was to develop a fully automated method for joint kidney and liver segmentation in magnetic resonance imaging (MRI) and to evaluate its performance in a multisequence, multicenter setting. Methods The convolutional neural network was trained on a large multicenter dataset consisting of 992 MRI scans of 327 patients. Manual segmentation delivered ground-truth labels. The model's performance was evaluated in a separate test dataset of 93 patients (350 MRI scans) as well as a heterogeneous external dataset of 831 MRI scans from 323 patients. Results The segmentation model yielded excellent performance, achieving a median per study Dice coefficient of 0.92-0.97 for the kidneys and 0.96 for the liver. Automatically computed TKV correlated highly with manual measurements (intraclass correlation coefficient [ICC]: 0.996-0.999) with low bias and high precision (-0.2%±4% for axial images and 0.5%±4% for coronal images). TLV estimation showed an ICC of 0.999 and bias/precision of -0.5%±3%. For the external dataset, the automated TKV demonstrated bias and precision of -1%±7%. Conclusions Our deep learning model enabled accurate segmentation of kidneys and liver and objective assessment of TKV and TLV. Importantly, this approach was validated with axial and coronal MRI scans from 40 different scanners, making implementation in clinical routine care feasible. Clinical trial registry name and registration number: The German ADPKD Tolvaptan Treatment Registry (AD[H]PKD), NCT02497521.
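The two headline metrics in this abstract, segmentation overlap (Dice) and percent volume bias, can be sketched on toy voxel sets. This is an illustrative example with invented masks, not the study's code:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    each given as a set of foreground voxel coordinates."""
    total = len(mask_a) + len(mask_b)
    return 2.0 * len(mask_a & mask_b) / total if total else 1.0

def volume_bias_percent(auto_voxels, manual_voxels, voxel_volume_ml=1.0):
    """Signed percent error of the automated volume vs the manual one."""
    auto = len(auto_voxels) * voxel_volume_ml
    manual = len(manual_voxels) * voxel_volume_ml
    return 100.0 * (auto - manual) / manual

manual = {(1, 1), (1, 2), (2, 1), (2, 2)}   # 4 foreground voxels
auto = manual | {(1, 3), (2, 3)}            # over-segments by 2 voxels
print(dice(auto, manual))                   # 0.8
print(volume_bias_percent(auto, manual))    # 50.0
```

A high Dice with near-zero volume bias, as reported above, means the masks overlap well and the derived TKV/TLV end points are unbiased, which is the combination that matters for trial use.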
Affiliation(s)
- Piotr Woznicki: Institute of Diagnostic and Interventional Radiology, University of Cologne, University Hospital Cologne, Cologne, Germany; Department of Diagnostic and Interventional Radiology, University Hospital Wuerzburg, Wuerzburg, Germany
- Florian Siedek: Institute of Diagnostic and Interventional Radiology, University of Cologne, University Hospital Cologne, Cologne, Germany
- Maatje D.A. van Gastel: Department of Nephrology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Daniel Pinto dos Santos: Institute of Diagnostic and Interventional Radiology, University of Cologne, University Hospital Cologne, Cologne, Germany; Institute of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt, Germany
- Sita Arjune: Department II of Internal Medicine and Center for Molecular Medicine Cologne, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne, Germany
- Larina A. Karner: Department II of Internal Medicine and Center for Molecular Medicine Cologne, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne, Germany
- Franziska Meyer: Institute of Diagnostic and Interventional Radiology, University of Cologne, University Hospital Cologne, Cologne, Germany
- Liliana Lourenco Caldeira: Institute of Diagnostic and Interventional Radiology, University of Cologne, University Hospital Cologne, Cologne, Germany
- Thorsten Persigehl: Institute of Diagnostic and Interventional Radiology, University of Cologne, University Hospital Cologne, Cologne, Germany
- Ron T. Gansevoort: Department of Nephrology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Franziska Grundmann: Department II of Internal Medicine and Center for Molecular Medicine Cologne, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne, Germany
- Bettina Baessler: Institute of Diagnostic and Interventional Radiology, University of Cologne, University Hospital Cologne, Cologne, Germany; Department of Diagnostic and Interventional Radiology, University Hospital Wuerzburg, Wuerzburg, Germany
- Roman-Ulrich Müller: Cologne Excellence Cluster on Cellular Stress Responses in Aging-Associated Diseases (CECAD), University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne, Germany
82
Suter Y, Knecht U, Valenzuela W, Notter M, Hewer E, Schucht P, Wiest R, Reyes M. The LUMIERE dataset: Longitudinal Glioblastoma MRI with expert RANO evaluation. Sci Data 2022; 9:768. [PMID: 36522344] [PMCID: PMC9755255] [DOI: 10.1038/s41597-022-01881-7]
Abstract
Publicly available Glioblastoma (GBM) datasets predominantly include pre-operative Magnetic Resonance Imaging (MRI) or contain few follow-up images for each patient. Access to fully longitudinal datasets is critical to advance the refinement of treatment response assessment. We release a single-center longitudinal GBM MRI dataset with expert ratings of selected follow-up studies according to the response assessment in neuro-oncology criteria (RANO). The expert rating includes details about the rationale of the ratings. For a subset of patients, we provide pathology information regarding methylation of the O6-methylguanine-DNA methyltransferase (MGMT) promoter status and isocitrate dehydrogenase 1 (IDH1), as well as the overall survival time. The data includes T1-weighted pre- and post-contrast, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) MRI. Segmentations from state-of-the-art automated segmentation tools, as well as radiomic features, complement the data. Possible applications of this dataset are radiomics research, the development and validation of automated segmentation methods, and studies on response assessment. This collection includes MRI data of 91 GBM patients with a total of 638 study dates and 2487 images.
Affiliation(s)
- Yannick Suter: ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Urspeter Knecht: Radiology Department, Spital Emmental, Burgdorf, Switzerland
- Waldo Valenzuela: Support Center for Advanced Neuroimaging, Inselspital, Bern, Switzerland
- Ekkehard Hewer: Institute of Pathology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Roland Wiest: Support Center for Advanced Neuroimaging, Inselspital, Bern, Switzerland
- Mauricio Reyes: ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
83
Rouvière O, Jaouen T, Baseilhac P, Benomar ML, Escande R, Crouzet S, Souchon R. Artificial intelligence algorithms aimed at characterizing or detecting prostate cancer on MRI: How accurate are they when tested on independent cohorts? – A systematic review. Diagn Interv Imaging 2022; 104:221-234. [PMID: 36517398 DOI: 10.1016/j.diii.2022.11.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/20/2022] [Accepted: 11/22/2022] [Indexed: 12/14/2022]
Abstract
PURPOSE The purpose of this study was to perform a systematic review of the literature on the diagnostic performance, in independent test cohorts, of artificial intelligence (AI)-based algorithms aimed at characterizing/detecting prostate cancer on magnetic resonance imaging (MRI). MATERIALS AND METHODS Medline, Embase and Web of Science were searched for studies published between January 2018 and September 2022, using a histological reference standard, and assessing prostate cancer characterization/detection by AI-based MRI algorithms in test cohorts composed of more than 40 patients and with at least one of the following independency criteria as compared to the training cohort: different institution, different population type, different MRI vendor, different magnetic field strength or strict temporal splitting. RESULTS Thirty-five studies were selected. The overall risk of bias was low. However, 23 studies did not use predefined diagnostic thresholds, which may have optimistically biased the results. Test cohorts fulfilled one to three of the five independency criteria. The diagnostic performance of the algorithms used as standalones was good, challenging that of human reading. In the 12 studies with predefined diagnostic thresholds, radiomics-based computer-aided diagnosis systems (assessing regions-of-interest drawn by the radiologist) tended to provide more robust results than deep learning-based computer-aided detection systems (providing probability maps). Two of the six studies comparing unassisted and assisted reading showed significant improvement due to the algorithm, mostly by reducing false positive findings. CONCLUSION Prostate MRI AI-based algorithms showed promising results, especially for the relatively simple task of characterizing predefined lesions. The best management of discrepancies between human reading and algorithm findings still needs to be defined.
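The review's warning about predefined thresholds can be made concrete: choosing the operating point on the test data itself (for example, by maximising Youden's J) inflates apparent sensitivity and specificity relative to a threshold fixed before testing. A minimal Python sketch; the scores and labels are invented for illustration:

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity when 'score >= threshold' is called positive."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

def best_youden_threshold(scores, labels):
    """Post-hoc threshold maximising Youden's J = sensitivity + specificity - 1."""
    return max(sorted(set(scores)),
               key=lambda t: sum(sens_spec(scores, labels, t)) - 1)

# invented algorithm outputs and ground truth
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0, 1, 0]

fixed = sens_spec(scores, labels, 0.5)  # threshold prespecified before testing
tuned = sens_spec(scores, labels, best_youden_threshold(scores, labels))  # post hoc
```

On this toy data the post-hoc threshold appears to gain specificity for free; on genuinely new data that gain would not be expected to hold, which is the optimistic bias the review describes.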
Affiliation(s)
- Olivier Rouvière: Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France; Université Lyon 1, Faculté de médecine Lyon Est, Lyon 69003, France; LabTAU, INSERM, U1032, Lyon 69003, France
- Pierre Baseilhac: Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France
- Mohammed Lamine Benomar: LabTAU, INSERM, U1032, Lyon 69003, France; University of Ain Temouchent, Faculty of Science and Technology, Algeria
- Raphael Escande: Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France
- Sébastien Crouzet: Université Lyon 1, Faculté de médecine Lyon Est, Lyon 69003, France; LabTAU, INSERM, U1032, Lyon 69003, France; Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Urology, Lyon 69003, France
84
Liu X, Elbanan MG, Luna A, Haider MA, Smith AD, Sabottke CF, Spieler BM, Turkbey B, Fuentes D, Moawad A, Kamel S, Horvat N, Elsayes KM. Radiomics in Abdominopelvic Solid-Organ Oncologic Imaging: Current Status. AJR Am J Roentgenol 2022; 219:985-995. [PMID: 35766531 PMCID: PMC10616929 DOI: 10.2214/ajr.22.27695] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Indexed: 12/14/2022]
Abstract
Radiomics is the process of extraction of high-throughput quantitative imaging features from medical images. These features represent noninvasive quantitative biomarkers that go beyond the traditional imaging features visible to the human eye. This article first reviews the steps of the radiomics pipeline, including image acquisition, ROI selection and image segmentation, image preprocessing, feature extraction, feature selection, and model development and application. Current evidence for the application of radiomics in abdominopelvic solid-organ cancers is then reviewed. Applications including diagnosis, subtype determination, treatment response assessment, and outcome prediction are explored within the context of hepatobiliary and pancreatic cancer, renal cell carcinoma, prostate cancer, gynecologic cancer, and adrenal masses. This literature review focuses on the strongest available evidence, including systematic reviews, meta-analyses, and large multicenter studies. Limitations of the available literature are highlighted, including marked heterogeneity in radiomics methodology, frequent use of small sample sizes with high risk of overfitting, and lack of prospective design, external validation, and standardized radiomics workflow. Thus, although studies have laid a foundation that supports continued investigation into radiomics models, stronger evidence is needed before clinical adoption.
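The pipeline steps above (ROI selection, preprocessing, feature extraction) can be made tangible with a toy feature-extraction stage. The three features and the 4-bin entropy below are illustrative choices, not IBSI-standard definitions, and the ROI intensities are invented:

```python
import math

def first_order_features(roi_values):
    """Toy first-order 'radiomic' features from the intensities of a segmented ROI."""
    n = len(roi_values)
    mean = sum(roi_values) / n
    variance = sum((v - mean) ** 2 for v in roi_values) / n
    # Shannon entropy over a coarse 4-bin intensity histogram
    lo, hi = min(roi_values), max(roi_values)
    width = (hi - lo) / 4 or 1.0  # guard against a perfectly flat ROI
    counts = [0, 0, 0, 0]
    for v in roi_values:
        counts[min(int((v - lo) / width), 3)] += 1
    entropy = -sum(c / n * math.log2(c / n) for c in counts if c)
    return {"mean": mean, "variance": variance, "entropy": entropy}

roi = [10, 12, 11, 30, 32, 31, 50, 52]  # invented voxel intensities inside one ROI
feats = first_order_features(roi)
```

In a real pipeline, hundreds of such features per ROI would then pass through feature selection before model development, which is exactly where the overfitting risk discussed in the article arises.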
Affiliation(s)
- Xiaoyang Liu: Joint Department of Medical Imaging, Division of Abdominal Imaging, University Health Network, University of Toronto, ON, Canada
- Mohamed G Elbanan: Department of Radiology, Yale New Haven Health, Bridgeport Hospital, Bridgeport, CT
- Masoom A Haider: Lunenfeld-Tanenbaum Research Institute, Sinai Health System, Toronto, ON, Canada; Joint Department of Medical Imaging, University Health Network, Sinai Health System and University of Toronto, Toronto, ON, Canada
- Andrew D Smith: Department of Radiology, University of Alabama at Birmingham, Birmingham, AL
- Carl F Sabottke: Department of Medical Imaging, University of Arizona College of Medicine, Tucson, AZ
- Bradley M Spieler: Department of Radiology, University Medical Center, Louisiana State University Health Sciences Center, New Orleans, LA
- Baris Turkbey: Molecular Imaging Program, National Cancer Institute, NIH, Bethesda, MD
- David Fuentes: Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX
- Ahmed Moawad: Department of Diagnostic and Interventional Radiology, Mercy Catholic Medical Center, Darby, PA
- Serageldin Kamel: Department of Lymphoma, University of Texas MD Anderson Cancer Center, Houston, TX
- Natally Horvat: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY
- Khaled M Elsayes: Department of Abdominal Imaging, University of Texas MD Anderson Cancer Center, 1400 Pressler St, Houston, TX 77030
85
Chalkidou A, Shokraneh F, Kijauskaite G, Taylor-Phillips S, Halligan S, Wilkinson L, Glocker B, Garrett P, Denniston AK, Mackie A, Seedat F. Recommendations for the development and use of imaging test sets to investigate the test performance of artificial intelligence in health screening. Lancet Digit Health 2022; 4:e899-e905. [PMID: 36427951 DOI: 10.1016/s2589-7500(22)00186-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Received: 09/14/2021] [Revised: 08/11/2022] [Accepted: 09/09/2022] [Indexed: 11/24/2022]
Abstract
Rigorous evaluation of artificial intelligence (AI) systems for image classification is essential before deployment into health-care settings, such as screening programmes, so that adoption is effective and safe. A key step in the evaluation process is the external validation of diagnostic performance using a test set of images. We conducted a rapid literature review on methods to develop test sets, published from 2012 to 2020, in English. Using thematic analysis, we mapped themes and coded the principles using the Population, Intervention, and Comparator or Reference standard, Outcome, and Study design framework. A group of screening and AI experts assessed the evidence-based principles for completeness and provided further considerations. From the final 15 principles recommended here, five affect population, one intervention, two comparator, one reference standard, and one both reference standard and comparator. Finally, four are applicable to outcome and one to study design. Principles from the literature were useful to address biases from AI; however, they did not account for screening-specific biases, which we now incorporate. The principles set out here should be used to support the development and use of test sets for studies that assess the accuracy of AI within screening programmes, to ensure they are fit for purpose and minimise bias.
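One way to operationalise the population-related principles is stratified sampling, so the test set covers each subgroup of the screened population at a controlled size. A sketch in Python; the helper name, the `age_band` field, and the stratum size are all hypothetical:

```python
import random

def stratified_sample(cases, stratum_of, per_stratum, seed=0):
    """Draw a fixed number of cases from every stratum of the screened population."""
    rng = random.Random(seed)
    strata = {}
    for case in cases:
        strata.setdefault(stratum_of(case), []).append(case)
    sample = []
    for key in sorted(strata):
        group = strata[key]
        if len(group) < per_stratum:
            raise ValueError(f"stratum {key!r} has only {len(group)} cases")
        sample.extend(rng.sample(group, per_stratum))
    return sample

# invented screening cases carrying an age-band subgroup label
cases = [{"id": i, "age_band": "50-59" if i % 2 else "60-69"} for i in range(40)]
test_set = stratified_sample(cases, lambda c: c["age_band"], per_stratum=5)
```

A fixed seed keeps the draw reproducible, which matters when the same test set must be reused across AI systems being compared.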
Affiliation(s)
- Farhad Shokraneh: King's Technology Evaluation Centre, King's College London, London, UK
- Goda Kijauskaite: UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
- Steve Halligan: Centre for Medical Imaging, Division of Medicine, University College London, London, UK
- Ben Glocker: Department of Computing, Imperial College London, London, UK
- Peter Garrett: Department of Chemical Engineering and Analytical Science, University of Manchester, Manchester, UK
- Alastair K Denniston: Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Anne Mackie: UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
- Farah Seedat: UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
86
Kelly BS, Judge C, Bollard SM, Clifford SM, Healy GM, Aziz A, Mathur P, Islam S, Yeom KW, Lawlor A, Killeen RP. Radiology artificial intelligence: a systematic review and evaluation of methods (RAISE). Eur Radiol 2022; 32:7998-8007. [PMID: 35420305 PMCID: PMC9668941 DOI: 10.1007/s00330-022-08784-6] [Citation(s) in RCA: 54] [Impact Index Per Article: 27.0] [Received: 11/11/2021] [Revised: 03/17/2022] [Accepted: 03/26/2022] [Indexed: 01/07/2023]
Abstract
OBJECTIVE There has been a large amount of research in the field of artificial intelligence (AI) as applied to clinical radiology. However, these studies vary in design and quality, and systematic reviews of the entire field are lacking. This systematic review aimed to identify all papers that used deep learning in radiology, to survey the literature, and to evaluate their methods. We aimed to identify the key questions being addressed in the literature and to identify the most effective methods employed. METHODS We followed the PRISMA guidelines and performed a systematic review of studies of AI in radiology published from 2015 to 2019. Our published protocol was prospectively registered. RESULTS Our search yielded 11,083 results. Seven hundred sixty-seven full texts were reviewed, and 535 articles were included. Ninety-eight percent were retrospective cohort studies. The median number of patients included was 460. Most studies involved MRI (37%). Neuroradiology was the most common subspecialty. Eighty-eight percent used supervised learning. The majority of studies undertook a segmentation task (39%). Performance comparison was with a state-of-the-art model in 37%. The most used established architecture was U-Net (14%). The median performance for the most utilised evaluation metrics was a Dice of 0.89 (range 0.49-0.99), an AUC of 0.903 (range 0.61-1.00), and an accuracy of 89.4% (range 70.2-100%). Of the 77 studies that externally validated their results and allowed for direct comparison, performance on average decreased by 6% at external validation (range: 4% increase to 44% decrease). CONCLUSION This systematic review has surveyed the major advances in AI as applied to clinical radiology. KEY POINTS • While there are many papers reporting expert-level results by using deep learning in radiology, most apply only a narrow range of techniques to a narrow selection of use cases.
• The literature is dominated by retrospective cohort studies with limited external validation and a high potential for bias. • The recent advent of AI extensions to systematic reporting guidelines and prospective trial registration, along with a focus on external validation and explanation, shows potential for translating the hype surrounding AI from code to clinic.
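The most utilised metric in the review, the Dice coefficient, rewards overlap between predicted and reference segmentations. A minimal set-based version, with masks represented as sets of voxel indices:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks,
    represented here as sets of voxel indices; 1.0 = perfect overlap."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2 * len(pred & truth) / (len(pred) + len(truth))

overlap = dice({1, 2, 3, 4}, {3, 4, 5, 6})  # half the voxels agree -> 0.5
```

Note that Dice weights the intersection twice, so it is more forgiving of boundary disagreement than plain intersection-over-union.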
Affiliation(s)
- Brendan S Kelly: St Vincent's University Hospital, Dublin, Ireland; Insight Centre for Data Analytics, UCD, Dublin, Ireland; Wellcome Trust - HRB, Irish Clinical Academic Training, Dublin, Ireland; School of Medicine, University College Dublin, Dublin, Ireland; HRB-Clinical Research Facility, NUI Galway, Galway, Ireland
- Conor Judge: Wellcome Trust - HRB, Irish Clinical Academic Training, Dublin, Ireland; Lucile Packard Children's Hospital at Stanford, Stanford, CA, USA
- Stephanie M Bollard: Wellcome Trust - HRB, Irish Clinical Academic Training, Dublin, Ireland; School of Medicine, University College Dublin, Dublin, Ireland
- Awsam Aziz: School of Medicine, University College Dublin, Dublin, Ireland
- Shah Islam: Division of Brain Sciences, Imperial College London, GN1 Commonwealth Building, Hammersmith Hospital, Du Cane Road, London, W12 0HS, UK
- Kristen W Yeom: HRB-Clinical Research Facility, NUI Galway, Galway, Ireland
- Ronan P Killeen: St Vincent's University Hospital, Dublin, Ireland; School of Medicine, University College Dublin, Dublin, Ireland
87
Sivanesan U, Wu K, McInnes MDF, Dhindsa K, Salehi F, van der Pol CB. Checklist for Artificial Intelligence in Medical Imaging Reporting Adherence in Peer-Reviewed and Preprint Manuscripts With the Highest Altmetric Attention Scores: A Meta-Research Study. Can Assoc Radiol J 2022; 74:334-342. [PMID: 36301600 DOI: 10.1177/08465371221134056] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Indexed: 11/15/2022]
Abstract
Purpose: To establish reporting adherence to the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) in diagnostic accuracy AI studies with the highest Altmetric Attention Scores (AAS), and to compare completeness of reporting between peer-reviewed manuscripts and preprints. Methods: MEDLINE, EMBASE, arXiv, bioRxiv, and medRxiv were retrospectively searched for 100 diagnostic accuracy medical imaging AI studies in peer-reviewed journals and preprint platforms with the highest AAS since the release of CLAIM to June 24, 2021. Studies were evaluated for adherence to the 42-item CLAIM checklist with comparison between peer-reviewed manuscripts and preprints. The impact of additional factors was explored, including body region, models for COVID-19 diagnosis, and journal impact factor. Results: Median CLAIM adherence was 48% (20/42). The median CLAIM score of manuscripts published in peer-reviewed journals was higher than that of preprints, 57% (24/42) vs 40% (16/42), P < .0001. Chest radiology was the body region with the least complete reporting (P = .0352), with manuscripts on COVID-19 less complete than others (43% vs 54%, P = .0002). For studies published in peer-reviewed journals with an impact factor, the CLAIM score correlated with impact factor, rho = 0.43, P = .0040. Completeness of reporting based on CLAIM score had a positive correlation with a study's AAS, rho = 0.68, P < .0001. Conclusions: Overall reporting adherence to CLAIM is low in imaging diagnostic accuracy AI studies with the highest AAS, with preprints reporting fewer study details than peer-reviewed manuscripts. Improved CLAIM adherence could promote adoption of AI into clinical practice and facilitate investigators building upon prior work.
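The adherence-versus-impact correlations above are Spearman rank correlations. A compact standard-library implementation (average ranks for ties, no further tie correction):

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation computed on average ranks."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        result = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average rank assigned to a tied block
            for k in range(i, j + 1):
                result[order[k]] = avg
            i = j + 1
        return result

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# hypothetical paired CLAIM scores and attention scores, not the study's data
rho = spearman_rho([57, 40, 48, 52], [24, 16, 20, 22])
```

Because it works on ranks, rho is insensitive to the scale of the AAS, which is heavily skewed in practice.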
Affiliation(s)
- Umaseh Sivanesan: Department of Diagnostic Radiology, Kingston Health Sciences Centre, Kingston General Hospital, Kingston, ON, Canada
- Kay Wu: Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Matthew D. F. McInnes: Department of Radiology and Epidemiology, University of Ottawa, Ottawa, ON, Canada; Clinical Epidemiology Program, Ottawa Hospital Research Institute, Rm c-159, Department of Medical Imaging, The Ottawa Hospital - Civic Campus, Ottawa, ON, Canada
- Kiret Dhindsa: Berlin Institute of Health at Charité – Universitätsmedizin Berlin, Berlin, Germany; Department of Neurology with Experimental Neurology, Brain Simulation Section, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Fateme Salehi: Department of Radiology, Scarborough Health Network, Scarborough, ON, Canada
- Christian B. van der Pol: Department of Diagnostic Imaging, Juravinski Hospital and Cancer Centre, Hamilton Health Sciences, McMaster University, Hamilton, ON, Canada
88
Momtazmanesh S, Nowroozi A, Rezaei N. Artificial Intelligence in Rheumatoid Arthritis: Current Status and Future Perspectives: A State-of-the-Art Review. Rheumatol Ther 2022; 9:1249-1304. [PMID: 35849321 PMCID: PMC9510088 DOI: 10.1007/s40744-022-00475-4] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Received: 06/01/2022] [Accepted: 06/24/2022] [Indexed: 11/23/2022]
Abstract
Investigation of the potential applications of artificial intelligence (AI), including machine learning (ML) and deep learning (DL) techniques, is an exponentially growing field in medicine and healthcare. These methods can be critical in providing high-quality care to patients with chronic rheumatological diseases that lack an optimal treatment, such as rheumatoid arthritis (RA), which is the second most prevalent autoimmune disease. Herein, having reviewed the basic concepts of AI, we summarize the advances in its applications in RA clinical practice and research. We provide directions for future investigations in this field after reviewing the current knowledge gaps and technical and ethical challenges in applying AI. Automated models have been largely used to improve RA diagnosis since the early 2000s, and they have used a wide variety of techniques, e.g., support vector machines, random forests, and artificial neural networks. AI algorithms can facilitate screening and identification of susceptible groups, diagnosis using omics, imaging, clinical, and sensor data, patient detection within electronic health records (EHR), i.e., phenotyping, treatment response assessment, monitoring disease course, determining prognosis, novel drug discovery, and enhancing basic science research. They can also aid in risk assessment for incidence of comorbidities, e.g., cardiovascular diseases, in patients with RA. However, the proposed models may vary significantly in their performance and reliability. Despite the promising results achieved by AI models in enhancing early diagnosis and management of patients with RA, they are not fully ready to be incorporated into clinical practice. Future investigations are required to ensure development of reliable and generalizable algorithms while carefully screening for any potential source of bias or misconduct.
We showed that a growing body of evidence supports the potential role of AI in revolutionizing screening, diagnosis, and management of patients with RA. However, multiple obstacles hinder clinical applications of AI models. Incorporating the machine and/or deep learning algorithms into real-world settings would be a key step in the progress of AI in medicine.
Affiliation(s)
- Sara Momtazmanesh: School of Medicine, Tehran University of Medical Sciences, Tehran, Iran; Network of Immunity in Infection, Malignancy and Autoimmunity (NIIMA), Universal Scientific Education and Research Network (USERN), Tehran, Iran; Research Center for Immunodeficiencies, Pediatrics Center of Excellence, Children's Medical Center, Tehran University of Medical Sciences, Dr. Gharib St, Keshavarz Blvd, Tehran, Iran
- Ali Nowroozi: School of Medicine, Tehran University of Medical Sciences, Tehran, Iran; Network of Immunity in Infection, Malignancy and Autoimmunity (NIIMA), Universal Scientific Education and Research Network (USERN), Tehran, Iran
- Nima Rezaei: Network of Immunity in Infection, Malignancy and Autoimmunity (NIIMA), Universal Scientific Education and Research Network (USERN), Tehran, Iran; Research Center for Immunodeficiencies, Pediatrics Center of Excellence, Children's Medical Center, Tehran University of Medical Sciences, Dr. Gharib St, Keshavarz Blvd, Tehran, Iran; Department of Immunology, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
89
Pareek A, Lungren MP, Halabi SS. The requirements for performing artificial-intelligence-related research and model development. Pediatr Radiol 2022; 52:2094-2100. [PMID: 35996023 DOI: 10.1007/s00247-022-05483-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/25/2022] [Revised: 07/06/2022] [Accepted: 08/09/2022] [Indexed: 11/25/2022]
Abstract
Artificial intelligence research in health care has undergone tremendous growth in the last several years thanks to the explosion of digital health care data and systems that can leverage large amounts of data to learn patterns that can be applied to clinical tasks. In addition, given broad acceleration in machine learning across industries like transportation, media and commerce, there has been a significant growth in demand for machine-learning practitioners such as engineers and data scientists, who have skill sets that can be applied to health care use cases but who simultaneously lack important health care domain expertise. The purpose of this paper is to discuss the requirements of building an artificial-intelligence research enterprise including the research team, technical software/hardware, and procurement and curation of health care data.
Affiliation(s)
- Anuj Pareek: Stanford AIMI Center, Stanford University, 1701 Page Mill Road, Palo Alto, CA, 94304, USA
- Matthew P Lungren: Stanford AIMI Center, Stanford University, 1701 Page Mill Road, Palo Alto, CA, 94304, USA
- Safwan S Halabi: Department of Medical Imaging, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, IL, USA
90
Taloni A, Farrelly FA, Pontillo G, Petsas N, Giannì C, Ruggieri S, Petracca M, Brunetti A, Pozzilli C, Pantano P, Tommasin S. Evaluation of Disability Progression in Multiple Sclerosis via Magnetic-Resonance-Based Deep Learning Techniques. Int J Mol Sci 2022; 23:10651. [PMID: 36142563 PMCID: PMC9505100 DOI: 10.3390/ijms231810651] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/06/2022] [Revised: 09/02/2022] [Accepted: 09/05/2022] [Indexed: 11/16/2022]
Abstract
Short-term disability progression was predicted from a baseline evaluation in patients with multiple sclerosis (MS) using their three-dimensional T1-weighted (3DT1) magnetic resonance images (MRI). One-hundred-and-eighty-one subjects diagnosed with MS underwent 3T-MRI and were followed up for two to six years at two sites, with disability progression defined according to the Expanded Disability Status Scale (EDSS) increment at the follow-up. The patients' 3DT1 images were bias-corrected, brain-extracted, registered onto MNI space, and divided into slices along coronal, sagittal, and axial projections. Deep learning image classification models, devised as ResNet50 adaptations, were applied to the slices and fine-tuned first on a large independent dataset and then on the study sample. The final classifiers' performance was evaluated via the area under the receiver operating characteristic curve (AUC). Each model was also tested against its null model, obtained by reshuffling patients' labels in the training set. Informative areas were found by intersecting slices corresponding to models fulfilling the disability progression prediction criteria. At follow-up, 34% of patients had disability progression. Five coronal and five sagittal slices had one classifier surviving the AUC evaluation and null test and predicted disability progression (AUC > 0.72 and AUC > 0.81, respectively). Likewise, fifteen combinations of classifiers and axial slices predicted disability progression in patients (AUC > 0.69). Informative areas were the frontal areas, mainly within the grey matter. Briefly, 3DT1 images may give hints about disability progression in MS patients, exploiting the information hidden in the MRI of specific areas of the brain.
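The null-model check used in the study — reshuffling labels and re-evaluating — is a permutation test. A standard-library sketch for AUC; the scores, labels, and permutation count below are illustrative, not the study's data:

```python
import random

def auc(scores, labels):
    """Empirical AUC: chance a random positive outscores a random negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def permutation_pvalue(scores, labels, n_perm=1000, seed=0):
    """Share of label-reshuffled 'null models' whose AUC reaches the observed AUC."""
    rng = random.Random(seed)
    observed = auc(scores, labels)
    shuffled = list(labels)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if auc(scores, shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0
```

A small p-value means the classifier's AUC is unlikely under random labeling, which is the criterion a slice-level model had to survive in the study's design.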
Affiliation(s)
- Alessandro Taloni: Institute for Complex Systems, National Research Council (ISC-CNR), 00185 Rome, Italy
- Giuseppe Pontillo: Department of Advanced Biomedical Sciences, Federico II University of Naples, 80131 Naples, Italy; Department of Electrical Engineering and Information Technology, Federico II University of Naples, 80125 Naples, Italy
- Nikolaos Petsas: Department of Radiology, IRCCS NEUROMED, 86077 Pozzilli, Italy
- Costanza Giannì: Department of Human Neurosciences, Sapienza University of Rome, 00185 Rome, Italy
- Serena Ruggieri: Department of Human Neurosciences, Sapienza University of Rome, 00185 Rome, Italy; Neuroimmunology Unit, IRCSS Fondazione Santa Lucia, 00179 Rome, Italy
- Maria Petracca: Department of Human Neurosciences, Sapienza University of Rome, 00185 Rome, Italy; Department of Neuroscience, Reproductive Sciences and Odontostomatology, Federico II University of Naples, 80131 Naples, Italy
- Arturo Brunetti: Department of Advanced Biomedical Sciences, Federico II University of Naples, 80131 Naples, Italy
- Carlo Pozzilli: Department of Human Neurosciences, Sapienza University of Rome, 00185 Rome, Italy
- Patrizia Pantano: Department of Radiology, IRCCS NEUROMED, 86077 Pozzilli, Italy; Department of Human Neurosciences, Sapienza University of Rome, 00185 Rome, Italy
- Silvia Tommasin: Department of Human Neurosciences, Sapienza University of Rome, 00185 Rome, Italy
91
Pang H, Yu Z, Yu H, Chang M, Cao J, Li Y, Guo M, Liu Y, Cao K, Fan G. Multimodal striatal neuromarkers in distinguishing parkinsonian variant of multiple system atrophy from idiopathic Parkinson's disease. CNS Neurosci Ther 2022; 28:2172-2182. [PMID: 36047435 PMCID: PMC9627351 DOI: 10.1111/cns.13959] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/19/2022] [Revised: 08/10/2022] [Accepted: 08/11/2022] [Indexed: 02/06/2023]
Abstract
AIMS To develop an automatic method of classification for the parkinsonian variant of multiple system atrophy (MSA-P) and idiopathic Parkinson's disease (IPD) in early to moderately advanced stages based on multimodal striatal alterations, and to identify the striatal neuromarkers for distinction. METHODS 77 IPD and 75 MSA-P patients underwent 3.0 T multimodal MRI comprising susceptibility-weighted imaging, resting-state functional magnetic resonance imaging, T1-weighted imaging, and diffusion tensor imaging. Iron-radiomic features, volumes, and functional and diffusion scalars of bilateral 10 striatal subregions were calculated and provided to the support vector machine for classification. RESULTS A combination of iron-radiomic features, function, diffusion, and volumetric measures optimally distinguished IPD and MSA-P in the testing dataset (accuracy 0.911 and area under the receiver operating characteristic curve [AUC] 0.927). The diagnostic performance further improved when incorporating clinical variables into the multimodal model (accuracy 0.934 and AUC 0.953). The most crucial factor for classification was the functional activity of the left dorsolateral putamen. CONCLUSION The machine learning algorithm applied to multimodal striatal dysfunction depicted dorsal striatum and supervening prefrontal lobe and cerebellar dysfunction through the frontostriatal and cerebello-striatal connections and facilitated accurate classification between IPD and MSA-P. The dorsolateral putamen was the most valuable neuromarker for the classification.
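The combine-modalities-then-classify pattern can be sketched briefly. The study used a support vector machine; as a dependency-free stand-in, the snippet below uses a nearest-centroid classifier, and every feature value is invented:

```python
def concat_features(*modalities):
    """Concatenate per-modality feature vectors (e.g. iron-radiomics,
    volume, function) into one multimodal vector."""
    return [v for modality in modalities for v in modality]

def fit_centroids(X, y):
    """Per-class mean vectors from training data."""
    centroids = {}
    for label in set(y):
        rows = [x for x, yl in zip(X, y) if yl == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    """Assign the class whose centroid is nearest in squared Euclidean distance."""
    return min(centroids, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

# invented two-subject-per-class training data: (iron features, volume, function)
X = [concat_features([0.2, 0.1], [3.1], [0.8]),
     concat_features([0.3, 0.2], [2.9], [0.9]),
     concat_features([0.9, 0.8], [1.5], [0.2]),
     concat_features([1.0, 0.7], [1.4], [0.3])]
y = ["IPD", "IPD", "MSA-P", "MSA-P"]
model = fit_centroids(X, y)
```

The design point survives the simplification: each modality contributes columns to one feature vector, so the classifier can exploit complementary striatal signals that no single sequence captures alone.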
Affiliation(s)
- Huize Pang: Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, China
- Ziyang Yu: School of Medicine, Xiamen University, Xiamen, China
- Hongmei Yu: Department of Neurology, The First Affiliated Hospital of China Medical University, Shenyang, China
- Miao Chang: Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, China
- Jibin Cao: Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, China
- Yingmei Li: Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, China
- Miaoran Guo: Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, China
- Yu Liu: Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, China
- Kaiqiang Cao: Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, China
- Guoguang Fan: Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, China
92
Venkatesh K, Santomartino SM, Sulam J, Yi PH. Code and Data Sharing Practices in the Radiology Artificial Intelligence Literature: A Meta-Research Study. Radiol Artif Intell 2022; 4:e220081. [PMID: 36204536 PMCID: PMC9530751 DOI: 10.1148/ryai.220081] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Received: 04/25/2022] [Revised: 07/25/2022] [Accepted: 08/02/2022] [Indexed: 06/16/2023]
Abstract
PURPOSE To evaluate code and data sharing practices in original artificial intelligence (AI) scientific manuscripts published in the Radiological Society of North America (RSNA) journals suite from 2017 through 2021. MATERIALS AND METHODS A retrospective meta-research study was conducted of articles published in the RSNA journals suite from January 1, 2017, through December 31, 2021. A total of 218 articles were included and evaluated for code sharing practices, reproducibility of shared code, and data sharing practices. Categorical comparisons were conducted using Fisher exact tests with respect to year and journal of publication, author affiliation(s), and type of algorithm used. RESULTS Of the 218 included articles, 73 (34%) shared code, with 24 (33% of code sharing articles and 11% of all articles) sharing reproducible code. Radiology and Radiology: Artificial Intelligence published the most code sharing articles (48 [66%] and 21 [29%], respectively). Twenty-nine articles (13%) shared data, and 12 of these articles (41% of data sharing articles) shared complete experimental data by using only public domain datasets. Four of the 218 articles (2%) shared both code and complete experimental data. Code sharing rates were statistically higher in 2020 and 2021 compared with earlier years (P < .01) and were higher in Radiology and Radiology: Artificial Intelligence compared with other journals (P < .01). CONCLUSION Original AI scientific articles in the RSNA journals suite had low rates of code and data sharing, emphasizing the need for open-source code and data to achieve transparent and reproducible science. Keywords: Meta-Analysis, AI in Education, Machine Learning. Supplemental material is available for this article. © RSNA, 2022.
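The categorical comparisons in this study rely on Fisher exact tests. For a 2×2 table, the two-sided p-value can be computed from the hypergeometric distribution using only the standard library (a sketch, not a substitute for a vetted statistics package such as scipy.stats):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins that are
    no more likely than the observed table."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(x):  # hypergeometric probability of x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    observed = prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= observed * (1 + 1e-9))
```

The small multiplicative tolerance guards against floating-point noise when deciding which tables count as "no more likely" than the observed one.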
|
93
|
Machine learning prediction of hematoma expansion in acute intracerebral hemorrhage. Sci Rep 2022; 12:12452. [PMID: 35864139 PMCID: PMC9304401 DOI: 10.1038/s41598-022-15400-6] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Accepted: 06/23/2022] [Indexed: 12/28/2022] Open
Abstract
To examine whether a machine learning (ML) approach can predict hematoma expansion in acute intracerebral hemorrhage (ICH) with accuracy and widespread applicability, we applied ML algorithms to multicenter clinical data and CT findings on admission. Patients with acute ICH from three hospitals (n = 351) and those from another hospital (n = 71) were retrospectively assigned to the development and validation cohorts, respectively. To develop the ML predictive models, the k-nearest neighbors (k-NN) algorithm, logistic regression, support vector machines (SVMs), random forests, and XGBoost were applied to the patient data in the development cohort. The models were evaluated on the patient data in the validation cohort and compared with previous scoring methods: the BAT, BRAIN, and 9-point scores. The k-NN algorithm achieved the highest area under the receiver operating characteristic curve (AUC) of 0.790 among all ML models, with a sensitivity, specificity, and accuracy of 0.846, 0.733, and 0.775, respectively. The BRAIN score achieved the highest AUC of 0.676 among the previous scoring methods, which was lower than that of the k-NN algorithm (p = 0.016). We developed and validated ML predictive models of hematoma expansion in acute ICH. The models demonstrated good predictive ability, performing better than the previous scoring methods.
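The model comparison described above (k-NN, logistic regression, SVM, and random forest evaluated by AUC on a held-out cohort) can be sketched with scikit-learn. The synthetic features, the split, and the default hyperparameters below are assumptions, not the study's setup; XGBoost is omitted to keep the sketch to one library.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the admission clinical and CT features;
# the real predictors are not listed in the abstract.
X, y = make_classification(n_samples=422, n_features=10, random_state=0)
# 351 development / 71 validation patients, mirroring the cohort sizes.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=71, random_state=0)

models = {
    "k-NN": KNeighborsClassifier(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),          # probability=True enables predict_proba
    "random forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_dev, y_dev)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

On real data each model's AUC on the validation cohort would be compared as in the abstract; here the synthetic task only illustrates the evaluation loop.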
|
94
|
Kido A, Nishio M. MRI-based Radiomics Models for Pretreatment Risk Stratification of Endometrial Cancer. Radiology 2022; 305:387-389. [PMID: 35819329 DOI: 10.1148/radiol.221398] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Affiliation(s)
- Aki Kido
- From the Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan
| | - Mizuho Nishio
- From the Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan
| |
|
95
|
Wang X, Fan Y, Zhang N, Li J, Duan Y, Yang B. Performance of Machine Learning for Tissue Outcome Prediction in Acute Ischemic Stroke: A Systematic Review and Meta-Analysis. Front Neurol 2022; 13:910259. [PMID: 35873778 PMCID: PMC9305175 DOI: 10.3389/fneur.2022.910259] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Accepted: 06/20/2022] [Indexed: 12/03/2022] Open
Abstract
Machine learning (ML) has been proposed for lesion segmentation in acute ischemic stroke (AIS). This study aimed to provide a systematic review and meta-analysis of the overall performance of current ML algorithms for final infarct prediction from baseline imaging. We conducted a comprehensive literature search for eligible studies developing ML models for core infarcted tissue estimation on admission CT or MRI in AIS patients. Eleven studies meeting the inclusion criteria were included in the quantitative analysis. Study characteristics, model methodology, and predictive performance of the included studies were extracted. A meta-analysis of the dice similarity coefficient (DSC) score was conducted using a random-effects model to assess overall predictive performance. Study heterogeneity was assessed with the Cochran Q and Higgins I2 tests. The pooled DSC score of the included ML models was 0.50 (95% CI 0.39–0.61), with high heterogeneity across studies (I2 96.5%, p < 0.001). Sensitivity analyses using the one-study-removed method showed that the adjusted overall DSC score ranged from 0.47 to 0.52. Subgroup analyses indicated that deep learning (DL)-based models outperformed conventional ML classifiers, with the best performance observed for DL algorithms combined with CT data. Despite the heterogeneity, current ML-based approaches for final infarct prediction showed moderate but promising performance. Before they can be integrated into the clinical stroke workflow, future investigations should train ML models on large-scale, multi-vendor data, validate them on external cohorts, and adopt formalized reporting standards to improve model accuracy and robustness.
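The pooling step described above (a random-effects model of the DSC with Q and I2 heterogeneity statistics) can be sketched with the standard DerSimonian-Laird estimator. The per-study DSC values and variances below are hypothetical, not taken from the eleven included studies.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of effect sizes."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                              # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)           # Cochran Q statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # Higgins I2 (%)
    w_star = 1.0 / (variances + tau2)                # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, i2

# Hypothetical per-study DSC scores and variances (not the included studies' values)
dsc = [0.62, 0.35, 0.55, 0.48, 0.41]
var = [0.0010, 0.0015, 0.0008, 0.0012, 0.0020]
pooled, ci_lo, ci_hi, i2 = random_effects_pool(dsc, var)
print(f"pooled DSC = {pooled:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f}), I2 = {i2:.1f}%")
```

With widely spread hypothetical DSC values, the estimator returns a pooled score near their center with a wide interval and a high I2, the same qualitative picture the abstract reports.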
Affiliation(s)
- Xinrui Wang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
| | - Yiming Fan
- Department of Orthopedics, Chinese PLA General Hospital, Beijing, China
| | - Nan Zhang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
| | - Jing Li
- Department of Radiology, Changhai Hospital, Shanghai, China
| | - Yang Duan
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
| | - Benqiang Yang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- *Correspondence: Benqiang Yang
| |
|
96
|
Liu Y, Duan H, Dong D, Chen J, Zhong L, Zhang L, Cao R, Fan H, Cui Z, Liu P, Kang S, Zhan X, Wang S, Zhao X, Chen C, Tian J. Development of a deep learning-based nomogram for predicting lymph node metastasis in cervical cancer: A multicenter study. Clin Transl Med 2022; 12:e938. [PMID: 35839331 PMCID: PMC9286523 DOI: 10.1002/ctm2.938] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Revised: 05/28/2022] [Accepted: 06/05/2022] [Indexed: 11/23/2022] Open
Affiliation(s)
- Yujia Liu
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; CAS Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Hui Duan
- Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Di Dong
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; CAS Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Molecular Imaging, Beijing, China
| | - Jiaming Chen
- Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, Guangzhou, China; Huizhou Municipal Central Hospital, Huizhou, China
| | - Lianzhen Zhong
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; CAS Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Liwen Zhang
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; CAS Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Runnan Cao
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; CAS Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Huijian Fan
- Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Zhumei Cui
- The Affiliated Hospital of Qingdao University, Qingdao, China
| | - Ping Liu
- Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Shan Kang
- Department of Gynecology, Fourth Hospital of Hebei Medical University, Shijiazhuang, China
| | | | - Shaoguang Wang
- Department of Gynecology, Yantai Yuhuangding Hospital, Yantai, China
| | - Xun Zhao
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; CAS Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Chunlin Chen
- Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Jie Tian
- CAS Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Molecular Imaging, Beijing, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, China; Zhuhai Precision Medical Center, Zhuhai People's Hospital (Affiliated with Jinan University), Zhuhai, China
| |
|
97
|
Sun R, Henry T, Laville A, Carré A, Hamaoui A, Bockel S, Chaffai I, Levy A, Chargari C, Robert C, Deutsch E. Imaging approaches and radiomics: toward a new era of ultraprecision radioimmunotherapy? J Immunother Cancer 2022; 10:e004848. [PMID: 35793875 PMCID: PMC9260846 DOI: 10.1136/jitc-2022-004848] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/21/2022] [Indexed: 11/17/2022] Open
Abstract
Strong rationale and a growing number of preclinical and clinical studies support combining radiotherapy and immunotherapy to improve patient outcomes. However, several critical questions remain, such as the identification of patients who will benefit from immunotherapy and of the treatment modalities that best optimize patient response. Imaging biomarkers and radiomics have recently emerged as promising tools for non-invasive assessment of the patient's whole disease, allowing comprehensive analysis of the tumor microenvironment, the spatial heterogeneity of the disease, and its temporal changes. This review presents the potential applications of medical imaging and the challenges to be addressed in order to help clinicians choose the optimal modalities of both radiotherapy and immunotherapy, predict patients' outcomes, and assess response to these promising combinations.
Affiliation(s)
- Roger Sun
- Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Radiothérapie Moléculaire et Innovation Thérapeutique, Université Paris-Saclay, Institut Gustave Roussy, Inserm, Villejuif, France
| | - Théophraste Henry
- Radiothérapie Moléculaire et Innovation Thérapeutique, Université Paris-Saclay, Institut Gustave Roussy, Inserm, Villejuif, France
- Department of Nuclear Medicine, Gustave Roussy, Villejuif, France
| | - Adrien Laville
- Radiothérapie Moléculaire et Innovation Thérapeutique, Université Paris-Saclay, Institut Gustave Roussy, Inserm, Villejuif, France
| | - Alexandre Carré
- Radiothérapie Moléculaire et Innovation Thérapeutique, Université Paris-Saclay, Institut Gustave Roussy, Inserm, Villejuif, France
| | - Anthony Hamaoui
- Radiothérapie Moléculaire et Innovation Thérapeutique, Université Paris-Saclay, Institut Gustave Roussy, Inserm, Villejuif, France
| | - Sophie Bockel
- Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Radiothérapie Moléculaire et Innovation Thérapeutique, Université Paris-Saclay, Institut Gustave Roussy, Inserm, Villejuif, France
| | - Ines Chaffai
- Radiothérapie Moléculaire et Innovation Thérapeutique, Université Paris-Saclay, Institut Gustave Roussy, Inserm, Villejuif, France
| | - Antonin Levy
- Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Radiothérapie Moléculaire et Innovation Thérapeutique, Université Paris-Saclay, Institut Gustave Roussy, Inserm, Villejuif, France
| | - Cyrus Chargari
- Radiothérapie Moléculaire et Innovation Thérapeutique, Université Paris-Saclay, Institut Gustave Roussy, Inserm, Villejuif, France
- Department of Radiation Oncology, Brachytherapy Unit, Gustave Roussy, Villejuif, France
| | - Charlotte Robert
- Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Radiothérapie Moléculaire et Innovation Thérapeutique, Université Paris-Saclay, Institut Gustave Roussy, Inserm, Villejuif, France
| | - Eric Deutsch
- Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Radiothérapie Moléculaire et Innovation Thérapeutique, Université Paris-Saclay, Institut Gustave Roussy, Inserm, Villejuif, France
- INSERM U1030, Gustave Roussy, Villejuif, France
| |
|
98
|
Alzubaidi M, Agus M, Alyafei K, Althelaya KA, Shah U, Abd-Alrazaq AA, Anbar M, Makhlouf M, Househ M. Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via Ultrasound Images. iScience 2022; 25:104713. [PMID: 35856024 PMCID: PMC9287600 DOI: 10.1016/j.isci.2022.104713] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 06/09/2022] [Accepted: 06/28/2022] [Indexed: 11/26/2022] Open
Abstract
Several reviews have been conducted on artificial intelligence (AI) techniques to improve pregnancy outcomes, but none has focused on ultrasound images. This survey aims to explore how AI can assist with fetal growth monitoring via ultrasound images. We reported our findings following the PRISMA guidelines. We conducted a comprehensive search of eight bibliographic databases; of 1269 studies, 107 were included. We found that 2D ultrasound images were more popular (88) than 3D and 4D ultrasound images (19). Classification was the most used method (42), followed by segmentation (31), classification integrated with segmentation (16), and other miscellaneous methods such as object detection, regression, and reinforcement learning (18). The areas that gained the most traction within the pregnancy domain were the fetus head (43), fetus body (31), fetus heart (13), fetus abdomen (10), and fetus face (10). This survey will promote the development of improved AI models for fetal clinical applications.
Highlights:
- Artificial intelligence studies to monitor fetal development via ultrasound images
- Fetal issues categorized into four categories: general, head, heart, face, abdomen
- The most used AI techniques are classification, segmentation, object detection, and reinforcement learning
- The research and practical implications are included
|
99
|
Borrelli P, Góngora JLL, Kaboteh R, Enqvist O, Edenbrandt L. Automated Classification of PET‐CT Lesions in Lung Cancer: An Independent Validation Study. Clin Physiol Funct Imaging 2022; 42:327-332. [PMID: 35760559 PMCID: PMC9540653 DOI: 10.1111/cpf.12773] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Revised: 06/15/2022] [Accepted: 06/22/2022] [Indexed: 12/04/2022]
Abstract
Introduction Recently, a tool called the positron emission tomography (PET)‐assisted reporting system (PARS) was developed and presented to classify lesions in PET/computed tomography (CT) studies in patients with lung cancer or lymphoma. The aim of this study was to validate PARS with an independent group of lung‐cancer patients using manual lesion segmentations as a reference standard, as well as to evaluate the association between PARS‐based measurements and overall survival (OS). Methods This study retrospectively included 115 patients who had undergone clinically indicated (18F)‐fluorodeoxyglucose (FDG) PET/CT due to suspected or known lung cancer. The patients had a median age of 66 years (interquartile range [IQR]: 61–72 years). Segmentations were made manually by visual inspection in a consensus reading by two nuclear medicine specialists and used as a reference. The research prototype PARS was used to automatically analyse all the PET/CT studies. The PET foci classified as suspicious by PARS were compared with the manual segmentations. No manual corrections were applied. Total lesion glycolysis (TLG) was calculated based on the manual and PARS‐based lung‐tumour segmentations. Associations between TLG and OS were investigated using Cox analysis. Results PARS showed sensitivities for lung tumours of 55.6% per lesion and 80.2% per patient. Both manual and PARS TLG were significantly associated with OS. Conclusion Automatically calculated TLG by PARS contains prognostic information comparable to manually measured TLG in patients with known or suspected lung cancer. The low sensitivity at both the lesion and patient levels makes the present version of PARS less useful to support clinical reading, reporting and staging.
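The distinction this abstract draws between per-lesion sensitivity (55.6%) and per-patient sensitivity (80.2%) can be illustrated with a minimal sketch; the detection flags below are hypothetical, not the study's data.

```python
# Detection flags per reference lesion, grouped by patient. These flags are
# hypothetical illustrations, not the study's data.
detections = {
    "patient 1": [True, False, True],
    "patient 2": [False, False],
    "patient 3": [True],
    "patient 4": [True, True, False],
}

# Per-lesion: every reference lesion counts once, detected or not.
all_flags = [flag for flags in detections.values() for flag in flags]
per_lesion = sum(all_flags) / len(all_flags)

# Per-patient: a patient counts as detected if any of their lesions is found.
per_patient = sum(any(flags) for flags in detections.values()) / len(detections)

print(f"per-lesion sensitivity = {per_lesion:.1%}")
print(f"per-patient sensitivity = {per_patient:.1%}")
```

Per-patient sensitivity is always at least as high as per-lesion sensitivity, since detecting any one lesion is enough to count the patient as detected.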
Affiliation(s)
- Pablo Borrelli
- Region Västra Götaland, Sahlgrenska University Hospital, Department of Clinical Physiology, Gothenburg, Sweden
| | - José Luis Loaiza Góngora
- Region Västra Götaland, Sahlgrenska University Hospital, Department of Clinical Physiology, Gothenburg, Sweden
| | - Reza Kaboteh
- Region Västra Götaland, Sahlgrenska University Hospital, Department of Clinical Physiology, Gothenburg, Sweden
| | | | - Lars Edenbrandt
- Region Västra Götaland, Sahlgrenska University Hospital, Department of Clinical Physiology, Gothenburg, Sweden
- Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
| |
|
100
|
Combination of Whole-Body Baseline CT Radiomics and Clinical Parameters to Predict Response and Survival in a Stage-IV Melanoma Cohort Undergoing Immunotherapy. Cancers (Basel) 2022; 14:cancers14122992. [PMID: 35740659 PMCID: PMC9221470 DOI: 10.3390/cancers14122992] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2022] [Revised: 06/13/2022] [Accepted: 06/15/2022] [Indexed: 11/17/2022] Open
Abstract
Simple Summary: The use of immunotherapeutic agents significantly improved stage-IV melanoma patients' overall progression-free survival. To identify patients who do not benefit from immunotherapy, both clinical parameters and experimental biomarkers such as radiomics are currently being evaluated. However, no radiomic biomarker is widely accepted for routine clinical use. In a large cohort of 262 stage-IV melanoma patients given first-line immunotherapy, we investigated whether radiomics, based on the segmentation of all baseline metastases in the whole body, offered added value in combination with clinical parameters compared to clinical parameters alone in a machine-learning prediction model. The primary endpoints were response at three months and survival at six and twelve months. The study indicated a potential, but non-significant, added value of radiomics for six-month and twelve-month survival prediction, underlining the relevance of clinical parameters.
Abstract: Background: This study investigated whether a machine-learning-based combination of radiomics and clinical parameters was superior to the use of clinical parameters alone in predicting therapy response after three months, and overall survival after six and twelve months, in stage-IV malignant melanoma patients undergoing immunotherapy with PD-1 and CTLA-4 checkpoint inhibitors. Methods: A random forest model using clinical parameters (demographic variables and tumor markers = baseline model) was compared to a random forest model using clinical parameters and radiomics (extended model) via repeated 5-fold cross-validation. For this purpose, the baseline computed tomographies of 262 stage-IV malignant melanoma patients treated at a tertiary referral center were identified in the Central Malignant Melanoma Registry, and all visible metastases were three-dimensionally segmented (n = 6404). Results: The extended model was not significantly superior to the baseline model for survival prediction after six and twelve months (AUC (95% CI): 0.664 (0.598, 0.729) vs. 0.620 (0.545, 0.692) and AUC (95% CI): 0.600 (0.526, 0.667) vs. 0.588 (0.481, 0.629), respectively), nor for response prediction after three months (AUC (95% CI): 0.641 (0.581, 0.700) vs. 0.656 (0.587, 0.719)). Conclusions: The study indicated a potential, but non-significant, added value of radiomics for six-month and twelve-month survival prediction of stage-IV melanoma patients undergoing immunotherapy.
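The baseline-versus-extended comparison described above (random forest models under repeated 5-fold cross-validation, scored by AUC) can be sketched with scikit-learn. The synthetic feature matrix, the choice of the first five columns as "clinical" stand-ins, and the repeat count are assumptions, not the study's data or protocol.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in: the first 5 columns play the role of clinical parameters,
# the full matrix adds radiomics-like columns. Counts and roles are assumptions.
X, y = make_classification(n_samples=262, n_features=25, n_informative=8,
                           random_state=0)
clinical_only, clinical_plus_radiomics = X[:, :5], X

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
for label, features in [("baseline (clinical only)", clinical_only),
                        ("extended (clinical + radiomics)", clinical_plus_radiomics)]:
    aucs = cross_val_score(RandomForestClassifier(random_state=0),
                           features, y, cv=cv, scoring="roc_auc")
    print(f"{label}: mean AUC = {aucs.mean():.3f} (SD {aucs.std():.3f})")
```

Repeating the k-fold split several times, as the study does, stabilizes the AUC estimate; a formal superiority claim would additionally need a paired comparison of the two models' fold-wise scores.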
|