1. Kong F, Wang X, Xiang J, Yang S, Wang X, Yue M, Zhang J, Zhao J, Han X, Dong Y, Zhu B, Wang F, Liu Y. Federated attention consistent learning models for prostate cancer diagnosis and Gleason grading. Comput Struct Biotechnol J 2024;23:1439-1449. [PMID: 38623561; PMCID: PMC11016961; DOI: 10.1016/j.csbj.2024.03.028]
Abstract
Artificial intelligence (AI) holds significant promise in transforming medical imaging, enhancing diagnostics, and refining treatment strategies. However, the reliance on extensive multicenter datasets for training AI models poses challenges due to privacy concerns. Federated learning provides a solution by facilitating collaborative model training across multiple centers without sharing raw data. This study introduces a federated attention-consistent learning (FACL) framework to address challenges associated with large-scale pathological images and data heterogeneity. FACL enhances model generalization by maximizing attention consistency between local clients and the server model. To ensure privacy and validate robustness, we incorporated differential privacy by introducing noise during parameter transfer. We assessed the effectiveness of FACL in cancer diagnosis and Gleason grading tasks using 19,461 whole-slide images of prostate cancer from multiple centers. In the diagnosis task, FACL achieved an area under the curve (AUC) of 0.9718, outperforming seven centers with an average AUC of 0.9499 when categories are relatively balanced. For the Gleason grading task, FACL attained a Kappa score of 0.8463, surpassing the average Kappa score of 0.7379 from six centers. In conclusion, FACL offers a robust, accurate, and cost-effective AI training model for prostate cancer pathology while maintaining effective data safeguards.
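The privacy mechanism the abstract describes, injecting noise into parameters before they leave each client, can be sketched as a clipped, Gaussian-noised update aggregated FedAvg-style at the server. This is an illustrative sketch with invented function names and hyperparameters, not the authors' FACL implementation:

```python
import numpy as np

def dp_noisy_update(params, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a client's parameter update and add Gaussian noise before
    transfer (a basic Gaussian-mechanism differential-privacy step)."""
    rng = np.random.default_rng(rng)
    norm = np.linalg.norm(params)
    clipped = params * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=params.shape)

def server_aggregate(client_updates, client_sizes):
    """FedAvg-style aggregation: weight each noisy client update by its
    local dataset size."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

# Three simulated clients (e.g. pathology centers) with different dataset sizes
rng = np.random.default_rng(0)
updates = [dp_noisy_update(rng.normal(size=4), rng=i) for i in range(3)]
global_update = server_aggregate(updates, client_sizes=[100, 250, 650])
```

The clipping bounds each client's contribution, so the calibrated noise masks any single record while the weighted average still converges on the shared model.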
Affiliation(s)
- Fei Kong
- Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Xiyue Wang
- College of Biomedical Engineering, Sichuan University, Chengdu, 610065, China
- Sen Yang
- AI Lab, Tencent, Shenzhen, 518057, China
- Xinran Wang
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, 050035, China
- Meng Yue
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, 050035, China
- Jun Zhang
- AI Lab, Tencent, Shenzhen, 518057, China
- Junhan Zhao
- Massachusetts General Hospital, Boston, MA, 02114, United States
- Harvard T.H. Chan School of Public Health, Boston, MA, 02115, United States
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, United States
- Xiao Han
- AI Lab, Tencent, Shenzhen, 518057, China
- Yuhan Dong
- Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Biyue Zhu
- Department of Pharmacy, Children's Hospital of Chongqing Medical University, Chongqing, 400014, China
- Fang Wang
- Department of Pathology, The Affiliated Yantai Yuhuangding Hospital of Qingdao University, Yantai, 264000, China
- Yueping Liu
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, 050035, China
2. Boutry C, Moreau NN, Jaudet C, Lechippey L, Corroyer-Dulmont A. Machine learning and deep learning prediction of patient specific quality assurance in breast IMRT radiotherapy plans using Halcyon specific complexity indices. Radiother Oncol 2024;200:110483. [PMID: 39159677; DOI: 10.1016/j.radonc.2024.110483]
Abstract
INTRODUCTION New radiotherapy machines such as Halcyon can deliver a dose rate of 600 monitor units per minute, allowing large numbers of patients to be treated per day. However, patient-specific quality assurance (QA) is still required, which dramatically decreases machine availability. Innovative artificial intelligence (AI) algorithms could predict QA results from complexity metrics, but no AI solution exists for Halcyon machines and the complexity metrics to be used have not been definitively determined. The aim of this study was to develop an AI solution that first determines the complexity indices to use and then predicts patient-specific QA in a routine clinical setting. METHODS Three hundred and eighteen beams from 56 patients with breast cancer were used. Seven complexity indices, the Modulation Complexity Score (MCS), Small Aperture Score (SAS10), Beam Area (BA), Beam Irregularity (BI), Beam Modulation (BM), and the gantry and collimator angles, were used as input to the AI model. Machine learning (ML) and deep learning (DL) models built with TensorFlow were set up to predict DreamDose QA conformance. RESULTS MCS, BI, gantry angle and collimator angle were not correlated with QA compliance, so the ML and DL models were trained on the SAS10, BA and BM complexity indices. ROC analyses identified the predicted-probability threshold that best balanced specificity and sensitivity. The ML models did not show satisfactory performance, with an area under the curve (AUC) of 0.75 and specificity and sensitivity of 0.88 and 0.86, whereas the optimised DL model performed better, with an AUC of 0.95 and specificity and sensitivity of 0.98 and 0.97. CONCLUSION The DL model predicted QA results with a high degree of accuracy. Our online predictive QA platform offers significant savings in accelerator occupancy and working time.
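The ROC-based threshold selection step can be illustrated in a few lines. Maximising Youden's J (sensitivity + specificity - 1) is one common criterion for picking the operating point; the abstract does not state which criterion the authors used, and the data and function names below are invented for illustration:

```python
import numpy as np

def roc_points(y_true, y_score):
    """Compute (FPR, TPR) at every candidate score threshold."""
    y_true = np.asarray(y_true, dtype=bool)
    y_score = np.asarray(y_score, dtype=float)
    thresholds = np.unique(y_score)[::-1]  # descending
    fpr, tpr = [], []
    for t in thresholds:
        pred = y_score >= t
        tpr.append((pred & y_true).sum() / y_true.sum())
        fpr.append((pred & ~y_true).sum() / (~y_true).sum())
    return np.array(fpr), np.array(tpr), thresholds

def best_threshold(y_true, y_score):
    """Pick the threshold maximising Youden's J = TPR - FPR,
    i.e. the best trade-off between sensitivity and specificity."""
    fpr, tpr, thr = roc_points(y_true, y_score)
    return thr[np.argmax(tpr - fpr)]

# Toy QA data: 1 = non-conformant plan, score = predicted probability
y_true  = [0, 0, 0, 1, 1, 1, 0, 1]
y_score = [0.1, 0.3, 0.35, 0.6, 0.7, 0.9, 0.2, 0.8]
t = best_threshold(y_true, y_score)
```

On this toy data the classes separate cleanly, so the chosen threshold sits exactly at the lowest positive score.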
Affiliation(s)
- Christine Boutry
- Medical Physics Department, Centre François Baclesse, 14000 Caen, France
- Noémie N Moreau
- Medical Physics Department, Centre François Baclesse, 14000 Caen, France; Université de Caen Normandie, CNRS, Normandie Université, ISTCT UMR6030, GIP CYCERON, F-14000 Caen, France
- Cyril Jaudet
- Medical Physics Department, Centre François Baclesse, 14000 Caen, France
- Laetitia Lechippey
- Medical Physics Department, Centre François Baclesse, 14000 Caen, France
- Aurélien Corroyer-Dulmont
- Medical Physics Department, Centre François Baclesse, 14000 Caen, France; Université de Caen Normandie, CNRS, Normandie Université, ISTCT UMR6030, GIP CYCERON, F-14000 Caen, France
3. Singh G, Singh A, Bae J, Manjila S, Spektor V, Prasanna P, Lignelli A. New frontiers in domain-inspired radiomics and radiogenomics: increasing role of molecular diagnostics in CNS tumor classification and grading following WHO CNS-5 updates. Cancer Imaging 2024;24:133. [PMID: 39375809; PMCID: PMC11460168; DOI: 10.1186/s40644-024-00769-6]
Abstract
Gliomas and glioblastomas represent a significant portion of central nervous system (CNS) tumors and are associated with high mortality rates and variable prognosis. In 2021, the World Health Organization (WHO) updated its glioma classification criteria, most notably incorporating molecular markers, including CDKN2A/B homozygous deletion, TERT promoter mutation, EGFR amplification, and +7/-10 chromosome copy-number changes, into the grading and classification of adult and pediatric gliomas. The inclusion of these markers, and the corresponding introduction of new glioma subtypes, has allowed clinical interventions to be tailored more specifically and has inspired a new wave of radiogenomic studies seeking to leverage medical imaging to explore the diagnostic and prognostic implications of these new biomarkers. Radiomics, deep learning, and combined approaches have enabled the development of powerful computational tools for MRI analysis that correlate imaging characteristics with the molecular biomarkers integrated into the updated WHO CNS-5 guidelines. Recent studies have leveraged these methods to classify gliomas accurately according to these updated molecular criteria from non-invasive MRI alone, demonstrating the great promise of radiogenomic tools. In this review, we explore the relative benefits and drawbacks of these computational frameworks and highlight the technical and clinical innovations of recent studies in the fast-evolving landscape of molecular-based glioma subtyping. We also discuss the potential benefits and challenges of incorporating these tools into routine radiological workflows to enhance patient care and optimize clinical outcomes in the evolving field of CNS tumor management.
Affiliation(s)
- Gagandeep Singh
- Neuroradiology Division, Columbia University Irving Medical Center, New York, NY, USA
- Annie Singh
- Atal Bihari Vajpayee Institute of Medical Sciences, New Delhi, India
- Joseph Bae
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, USA
- Sunil Manjila
- Department of Neurological Surgery, Garden City Hospital, Garden City, MI, USA
- Vadim Spektor
- Neuroradiology Division, Columbia University Irving Medical Center, New York, NY, USA
- Prateek Prasanna
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, USA
- Angela Lignelli
- Neuroradiology Division, Columbia University Irving Medical Center, New York, NY, USA
4. Gomez F, Danos AM, Del Fiol G, Madabhushi A, Tiwari P, McMichael JF, Bakas S, Bian J, Davatzikos C, Fertig EJ, Kalpathy-Cramer J, Kenney J, Savova GK, Yetisgen M, Van Allen EM, Warner JL, Prior F, Griffith M, Griffith OL. A New Era of Data-Driven Cancer Research and Care: Opportunities and Challenges. Cancer Discov 2024;14:1774-1778. [PMID: 39363742; PMCID: PMC11463721; DOI: 10.1158/2159-8290.cd-24-1130]
Abstract
People diagnosed with cancer, and their formal and informal caregivers, are increasingly faced with a deluge of complex information, thanks to rapid advancements in the type and volume of diagnostic, prognostic, and treatment data. This commentary discusses the opportunities and challenges that society faces as we integrate large volumes of data into regular cancer care.
Affiliation(s)
- Felicia Gomez
- Department of Medicine, Washington University School of Medicine, St Louis, Missouri
- Arpad M. Danos
- Department of Medicine, Washington University School of Medicine, St Louis, Missouri
- Guilherme Del Fiol
- Department of Biomedical Informatics, University of Utah, Salt Lake City, Utah
- Anant Madabhushi
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia
- Atlanta Veterans Affairs (VA) Medical Center, Decatur, Georgia
- Pallavi Tiwari
- Department of Radiology and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin
- William S. Middleton Memorial Veterans Affairs (VA) Healthcare, Madison, Wisconsin
- Joshua F. McMichael
- Department of Medicine, Washington University School of Medicine, St Louis, Missouri
- Spyridon Bakas
- Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, Indiana
- Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, Indiana
- Department of Biostatistics and Health Data Science, Indiana University School of Medicine, Indianapolis, Indiana
- Department of Neurological Surgery, Indiana University School of Medicine, Indianapolis, Indiana
- Department of Computer Science, Luddy School of Informatics, Computing, and Engineering, Indiana University, Indianapolis, Indiana
- Jiang Bian
- Department of Health Outcomes & Biomedical Informatics, University of Florida, Gainesville, Florida
- Christos Davatzikos
- Department of Radiology, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania
- Elana J. Fertig
- Department of Oncology and Applied Mathematics & Statistics, Johns Hopkins Medicine, Baltimore, Maryland
- Johanna Kenney
- Technology Research Advocacy Partnership, National Cancer Institute, Bethesda, Maryland
- Guergana K. Savova
- Department of Pediatrics, Harvard Medical School, Boston, Massachusetts
- Boston Children's Hospital, Boston, Massachusetts
- Meliha Yetisgen
- Department of Biomedical and Health Informatics, University of Washington, Seattle, Washington
- Eliezer M. Van Allen
- Department of Medicine, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts
- Broad Institute, Cambridge, Massachusetts
- Parker Institute for Cancer Immunotherapy, San Francisco, California
- Jeremy L. Warner
- Departments of Medicine and Biostatistics, Brown University, Providence, Rhode Island
- Lifespan Cancer Institute, Rhode Island Hospital, Providence, Rhode Island
- Fred Prior
- Department of Biomedical Informatics, University of Arkansas for Medical Sciences, Little Rock, Arkansas
- Malachi Griffith
- Department of Medicine, Washington University School of Medicine, St Louis, Missouri
- Obi L. Griffith
- Department of Medicine, Washington University School of Medicine, St Louis, Missouri
5. Manthe M, Duffner S, Lartizien C. Federated brain tumor segmentation: An extensive benchmark. Med Image Anal 2024;97:103270. [PMID: 39059241; DOI: 10.1016/j.media.2024.103270]
Abstract
Recently, federated learning has raised increasing interest in the medical image analysis field due to its ability to aggregate multi-center data with privacy-preserving properties. A large number of federated training schemes have been published, which we categorize into global (one final model), personalized (one model per institution) or hybrid (one model per cluster of institutions) methods. However, their applicability to the recently published Federated Brain Tumor Segmentation 2022 dataset has not yet been explored. We present an extensive benchmark of federated learning algorithms from all three classes on this task. While standard FedAvg already performs very well, we show that some methods from each category can bring a slight performance improvement and potentially limit the final model(s)' bias toward the predominant data distribution of the federation. Moreover, we provide a deeper understanding of the behavior of federated learning on this task through alternative ways of distributing the pooled dataset among institutions, namely an independent and identically distributed (IID) setup and a limited-data setup. Our code is available at https://github.com/MatthisManthe/Benchmark_FeTS2022.
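The FedAvg baseline in the "global" category (one shared model, with each layer's parameters averaged in proportion to each institution's data) can be sketched as follows; this is illustrative code, not the benchmark's implementation:

```python
import numpy as np

def fedavg(institution_params, institution_sizes):
    """One FedAvg round: average each layer's parameters across
    institutions, weighted by local dataset size."""
    sizes = np.asarray(institution_sizes, dtype=float)
    w = sizes / sizes.sum()
    n_layers = len(institution_params[0])
    return [sum(w[i] * institution_params[i][k] for i in range(len(w)))
            for k in range(n_layers)]

# Two institutions, two layers each; institution 0 holds 3x the data
p0 = [np.zeros((2, 2)), np.zeros(2)]
p1 = [np.ones((2, 2)),  np.ones(2)]
global_model = fedavg([p0, p1], institution_sizes=[300, 100])
# weighted mean of 0s and 1s with weights 0.75/0.25 -> all entries 0.25
```

The size weighting is also what ties the final model to the predominant data distribution: the larger institution dominates the average, which is the bias the personalized and hybrid methods try to limit.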
Affiliation(s)
- Matthis Manthe
- INSA Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621 Lyon, France; INSA Lyon, CNRS, Université Claude Bernard Lyon 1, Centrale Lyon, Université Lumière Lyon 2, LIRIS, UMR5205, F-69621 Villeurbanne, France
- Stefan Duffner
- INSA Lyon, CNRS, Université Claude Bernard Lyon 1, Centrale Lyon, Université Lumière Lyon 2, LIRIS, UMR5205, F-69621 Villeurbanne, France
- Carole Lartizien
- INSA Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621 Lyon, France
6. Gardner LL, Thompson SJ, O'Connor JD, McMahon SJ. Modelling radiobiology. Phys Med Biol 2024;69:18TR01. [PMID: 39159658; DOI: 10.1088/1361-6560/ad70f0]
Abstract
Radiotherapy has played an essential role in cancer treatment for over a century, and remains one of the best-studied methods of cancer treatment. Because of its close links with the physical sciences, it has been the subject of extensive quantitative mathematical modelling, but a complete understanding of the mechanisms of radiotherapy has remained elusive. In part this is because of the complexity and range of scales involved in radiotherapy-from physical radiation interactions occurring over nanometres to evolution of patient responses over months and years. This review presents the current status and ongoing research in modelling radiotherapy responses across these scales, including basic physical mechanisms of DNA damage, the immediate biological responses this triggers, and genetic- and patient-level determinants of response. Finally, some of the major challenges in this field and potential avenues for future improvements are also discussed.
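As one concrete example of the quantitative modelling tradition this review surveys, the linear-quadratic (LQ) model relates absorbed dose to cell survival, and the biologically effective dose (BED) compares fractionation schedules. The sketch below uses illustrative α and β values and is not drawn from the review itself:

```python
import math

def lq_surviving_fraction(dose, alpha=0.15, beta=0.05):
    """Linear-quadratic (LQ) model: S(D) = exp(-(alpha*D + beta*D^2)).
    alpha (Gy^-1) and beta (Gy^-2) here are illustrative values."""
    return math.exp(-(alpha * dose + beta * dose ** 2))

def bed(total_dose, dose_per_fraction, alpha_beta=3.0):
    """Biologically effective dose of a fractionated schedule:
    BED = n*d * (1 + d / (alpha/beta))."""
    return total_dose * (1.0 + dose_per_fraction / alpha_beta)

sf2 = lq_surviving_fraction(2.0)                # surviving fraction at 2 Gy
bed_standard = bed(60.0, 2.0, alpha_beta=10.0)  # 60 Gy in 2 Gy fractions
```

With these parameters, a 2 Gy fraction gives S = exp(-0.5), and the 60 Gy/2 Gy schedule has a BED of 72 Gy for an α/β of 10 Gy, the kind of early-responding-tissue value often assumed for tumours.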
Affiliation(s)
- Lydia L Gardner
- Patrick G Johnston Centre for Cancer Research, Queen's University Belfast, 97 Lisburn Road, Belfast BT9 7AE, United Kingdom
- Shannon J Thompson
- Patrick G Johnston Centre for Cancer Research, Queen's University Belfast, 97 Lisburn Road, Belfast BT9 7AE, United Kingdom
- John D O'Connor
- Patrick G Johnston Centre for Cancer Research, Queen's University Belfast, 97 Lisburn Road, Belfast BT9 7AE, United Kingdom
- Ulster University School of Engineering, York Street, Belfast BT15 1AP, United Kingdom
- Stephen J McMahon
- Patrick G Johnston Centre for Cancer Research, Queen's University Belfast, 97 Lisburn Road, Belfast BT9 7AE, United Kingdom
7. Lee EH, Han M, Wright J, Kuwabara M, Mevorach J, Fu G, Choudhury O, Ratan U, Zhang M, Wagner MW, Goetti R, Toescu S, Perreault S, Dogan H, Altinmakas E, Mohammadzadeh M, Szymanski KA, Campen CJ, Lai H, Eghbal A, Radmanesh A, Mankad K, Aquilina K, Said M, Vossough A, Oztekin O, Ertl-Wagner B, Poussaint T, Thompson EM, Ho CY, Jaju A, Curran J, Ramaswamy V, Cheshier SH, Grant GA, Wong SS, Moseley ME, Lober RM, Wilms M, Forkert ND, Vitanza NA, Miller JH, Prolo LM, Yeom KW. An international study presenting a federated learning AI platform for pediatric brain tumors. Nat Commun 2024;15:7615. [PMID: 39223133; PMCID: PMC11368946; DOI: 10.1038/s41467-024-51172-5]
Abstract
While multiple factors impact disease, artificial intelligence (AI) studies in medicine often use small, non-diverse patient cohorts due to data sharing and privacy issues. Federated learning (FL) has emerged as a solution, enabling training across hospitals without direct data sharing. Here, we present FL-PedBrain, an FL platform for pediatric posterior fossa brain tumors, and evaluate its performance on a diverse, realistic, multi-center cohort. Pediatric brain tumors were targeted due to the scarcity of such datasets, even in tertiary care hospitals. Our platform orchestrates federated training for joint tumor classification and segmentation across 19 international sites. FL-PedBrain exhibits less than a 1.5% decrease in classification and a 3% reduction in segmentation performance compared to centralized data training. FL boosts segmentation performance by 20 to 30% on three external, out-of-network sites. Finally, we explore the sources of data heterogeneity and examine FL robustness in real-world scenarios with data imbalances.
Affiliation(s)
- Edward H Lee
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA, USA
- Department of Radiology, Lucas Center, Stanford University, Stanford, CA, USA
- Michelle Han
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA, USA
- Department of Neurology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Jason Wright
- Department of Radiology, Seattle Children's Hospital, Seattle, WA, USA
- Michael Kuwabara
- Department of Radiology, Phoenix Children's Hospital, Phoenix, AZ, USA
- Gang Fu
- Amazon Web Services, Seattle, WA, USA
- Michael Zhang
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA, USA
- Matthias W Wagner
- Department of Diagnostic and Interventional Neuroradiology, University Hospital Augsburg, Augsburg, Germany
- Robert Goetti
- Department of Medical Imaging, The Children's Hospital at Westmead, Sydney, NSW, Australia
- Sebastien Perreault
- Division of Child Neurology, Department of Pediatrics, Centre Hospitalier Universitaire Sainte-Justine, Université de Montréal, Montreal, QC, Canada
- Hakan Dogan
- Department of Radiology, Koç University School of Medicine, Istanbul, Turkey
- Emre Altinmakas
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Kathryn A Szymanski
- Department of Radiology, Phoenix Children's Hospital, Phoenix, AZ, USA
- Creighton University School of Medicine-Phoenix Regional Campus, Phoenix, AZ, USA
- Cynthia J Campen
- Department of Neurology, Lucile Packard Children's Hospital, Stanford University Medical School, Palo Alto, CA, USA
- Hollie Lai
- Department of Radiology, Children's Hospital of Orange County, Orange, CA, USA
- Azam Eghbal
- Department of Radiology, Children's Hospital of Orange County, Orange, CA, USA
- Alireza Radmanesh
- Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
- Kaiser Los Angeles, Los Angeles, CA, USA
- Mourad Said
- Radiology Department, Centre International Carthage Médicale, Monastir, Tunisia
- Arastoo Vossough
- Department of Neurology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Ozgur Oztekin
- Department of Neuroradiology, Tepecik Education and Research Hospital, Izmir, Turkey
- Hamad Medical Corporation, Doha, Qatar
- Birgit Ertl-Wagner
- Department of Diagnostic and Interventional Radiology, The Hospital for Sick Children, Toronto, ON, Canada
- Tina Poussaint
- Department of Radiology, Boston Children's Hospital, Boston, MA, USA
- Eric M Thompson
- Department of Neurosurgery, Duke Children's Hospital & Health Center, Durham, NC, USA
- Chang Y Ho
- Department of Radiology & Imaging Sciences, Riley Children's Hospital, Indianapolis, IN, USA
- Alok Jaju
- Department of Radiology, Phoenix Children's Hospital, Phoenix, AZ, USA
- John Curran
- Department of Radiology, Phoenix Children's Hospital, Phoenix, AZ, USA
- Vijay Ramaswamy
- Division of Haematology/Oncology, Department of Pediatrics, The Hospital for Sick Children, Toronto, ON, Canada
- Samuel H Cheshier
- Department of Neurosurgery, University of Utah School of Medicine, Salt Lake City, UT, USA
- Gerald A Grant
- Department of Neurosurgery, Duke Children's Hospital & Health Center, Durham, NC, USA
- S Simon Wong
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Michael E Moseley
- Department of Radiology, Lucas Center, Stanford University, Stanford, CA, USA
- Robert M Lober
- Division of Neurosurgery, Dayton Children's Hospital, Dayton, OH, USA
- Mattias Wilms
- Departments of Pediatrics, Community Health Sciences, and Radiology, University of Calgary, Calgary, AB, Canada
- Alberta Children's Hospital Research Institute, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Nils D Forkert
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Departments of Radiology and Clinical Neurosciences, University of Calgary, Calgary, AB, Canada
- Nicholas A Vitanza
- Ben Towne Center for Childhood Cancer Research, Seattle Children's Research Institute, Seattle, WA, USA
- Jeffrey H Miller
- Department of Radiology, Phoenix Children's Hospital, Phoenix, AZ, USA
- Laura M Prolo
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA, USA
- Kristen W Yeom
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA, USA
- Department of Radiology, Phoenix Children's Hospital, Phoenix, AZ, USA
8. Yamada A, Hanaoka S, Takenaga T, Miki S, Yoshikawa T, Nomura Y. Investigation of distributed learning for automated lesion detection in head MR images. Radiol Phys Technol 2024;17:725-738. [PMID: 39048847; PMCID: PMC11341643; DOI: 10.1007/s12194-024-00827-5]
Abstract
In this study, we investigated the application of distributed learning, including federated learning and cyclical weight transfer, to the development of computer-aided detection (CADe) software for (1) cerebral aneurysm detection in magnetic resonance (MR) angiography images and (2) brain metastasis detection in contrast-enhanced brain MR images. We used datasets collected from various institutions, scanner vendors, and magnetic field strengths for each target CADe software. We compared the performance of multiple strategies, including a centralized strategy in which software development is conducted at a development institution after collecting de-identified data from multiple institutions. Our results showed that the performance of CADe software trained through distributed learning was equal to or better than that trained through the centralized strategy. However, the distributed learning strategy that achieved the highest performance depended on the target CADe software. Hence, distributed learning can become one of the strategies for CADe software development using data collected from multiple institutions.
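The two distributed strategies named here differ mainly in how weights move: federated learning averages parallel local updates, while cyclical weight transfer passes one model around the institutions in sequence. A toy sketch of the latter, with invented names and a scalar "model" standing in for a CADe network:

```python
import numpy as np

def local_step(params, grad_fn, lr=0.1, steps=5):
    """A short burst of local training at one institution
    (plain gradient descent on that site's loss)."""
    for _ in range(steps):
        params = params - lr * grad_fn(params)
    return params

def cyclical_weight_transfer(init_params, site_grad_fns, cycles=3):
    """Cyclical weight transfer: the model visits each institution in
    turn, trains locally, and passes its weights on; no raw images are
    ever exchanged between sites."""
    params = init_params
    for _ in range(cycles):
        for grad_fn in site_grad_fns:
            params = local_step(params, grad_fn)
    return params

# Each 'site' pulls the parameter toward its own optimum (1.0 vs 3.0),
# mimicking inter-institutional data heterogeneity
targets = [np.array([1.0]), np.array([3.0])]
grads = [lambda p, t=t: p - t for t in targets]  # gradient of 0.5*(p - t)^2
final = cyclical_weight_transfer(np.array([0.0]), grads)
```

The final weights settle between the two sites' optima, slightly biased toward the last site visited, which is one reason the best-performing strategy can differ between target tasks.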
Affiliation(s)
- Aiki Yamada
- Department of Medical Engineering, Graduate School of Science and Engineering, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba, 263-8522, Japan
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Tomomi Takenaga
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba, 263-8522, Japan
9. Baker CR, Pease M, Sexton DP, Abumoussa A, Chambless LB. Artificial intelligence innovations in neurosurgical oncology: a narrative review. J Neurooncol 2024;169:489-496. [PMID: 38958849; PMCID: PMC11341589; DOI: 10.1007/s11060-024-04757-5]
Abstract
PURPOSE Artificial intelligence (AI) has become increasingly integrated clinically within neurosurgical oncology. This report reviews the cutting-edge technologies impacting tumor treatment and outcomes. METHODS A rigorous literature search was performed with the aid of a research librarian to identify key articles referencing AI and related topics (machine learning (ML), computer vision (CV), augmented reality (AR), virtual reality (VR), etc.) for neurosurgical care of brain or spinal tumors. RESULTS Treatment of central nervous system (CNS) tumors is being improved through advances across AI, such as ML, CV, and AR/VR. AI-aided diagnostic and prognostication tools can influence the pre-operative patient experience, while automated tumor segmentation and total-resection predictions aid surgical planning. Novel intra-operative tools can rapidly provide histopathologic tumor classification to streamline treatment strategies. Post-operative video analysis, paired with rich surgical simulations, can enhance training feedback and regimens. CONCLUSION While limited generalizability, bias, and patient data security are current concerns, the advent of federated learning, along with growing data consortiums, provides an avenue for increasingly safe, powerful, and effective AI platforms in the future.
Affiliation(s)
- Clayton R Baker
- Vanderbilt University School of Medicine, Nashville, TN, USA
- Matthew Pease
- Department of Neurosurgery, Indiana University, Indianapolis, IN, USA
- Daniel P Sexton
- Department of Neurosurgery, Duke University, Durham, NC, USA
- Andrew Abumoussa
- Department of Neurosurgery, University of North Carolina at Chapel Hill Hospitals, Chapel Hill, NC, USA
- Lola B Chambless
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, TN, USA
10. Ghosh S, Zhao X, Alim M, Brudno M, Bhat M. Artificial intelligence applied to 'omics data in liver disease: towards a personalised approach for diagnosis, prognosis and treatment. Gut 2024:gutjnl-2023-331740. [PMID: 39174307; DOI: 10.1136/gutjnl-2023-331740]
Abstract
Advancements in omics technologies and artificial intelligence (AI) methodologies are fuelling our progress towards personalised diagnosis, prognosis and treatment strategies in hepatology. This review provides a comprehensive overview of the current landscape of AI methods used for analysis of omics data in liver diseases. We present an overview of the prevalence of different omics levels across various liver diseases, as well as categorise the AI methodology used across the studies. Specifically, we highlight the predominance of transcriptomic and genomic profiling and the relatively sparse exploration of other levels such as the proteome and methylome, which represent untapped potential for novel insights. Publicly available database initiatives such as The Cancer Genome Atlas and The International Cancer Genome Consortium have paved the way for advancements in the diagnosis and treatment of hepatocellular carcinoma. However, the same availability of large omics datasets remains limited for other liver diseases. Furthermore, the application of sophisticated AI methods to handle the complexities of multiomics datasets requires substantial data to train and validate the models and faces challenges in achieving bias-free results with clinical utility. Strategies to address the paucity of data and capitalise on opportunities are discussed. Given the substantial global burden of chronic liver diseases, it is imperative that multicentre collaborations be established to generate large-scale omics data for early disease recognition and intervention. Exploring advanced AI methods is also necessary to maximise the potential of these datasets and improve early detection and personalised treatment strategies.
Affiliation(s)
- Soumita Ghosh: Transplant AI Initiative, Ajmera Transplant Program, University Health Network, Toronto, Ontario, Canada; Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Xun Zhao: Transplant AI Initiative, Ajmera Transplant Program, University Health Network, Toronto, Ontario, Canada
- Mouaid Alim: Transplant AI Initiative, Ajmera Transplant Program, University Health Network, Toronto, Ontario, Canada; Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Michael Brudno: Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada; Vector Institute of Artificial Intelligence, Toronto, Ontario, Canada
- Mamatha Bhat: Transplant AI Initiative, Ajmera Transplant Program, University Health Network, Toronto, Ontario, Canada; Department of Medicine, University of Toronto, Toronto, Ontario, Canada; Division of Gastroenterology, University of Toronto Faculty of Medicine, Toronto, Ontario, Canada; Toronto General Hospital Research Institute, University Health Network, Toronto, Ontario, Canada

11
Weld A, Dixon L, Anichini G, Patel N, Nimer A, Dyck M, O'Neill K, Lim A, Giannarou S, Camp S. Challenges with segmenting intraoperative ultrasound for brain tumours. Acta Neurochir (Wien) 2024; 166:317. [PMID: 39090435] [PMCID: PMC11294268] [DOI: 10.1007/s00701-024-06179-8]
Abstract
Objective - Addressing the challenges of identifying and delineating brain tumours in intraoperative ultrasound. Our goal is to qualitatively and quantitatively assess the interobserver variation, amongst experienced neuro-oncological intraoperative ultrasound users (neurosurgeons and neuroradiologists), in detecting and segmenting brain tumours on ultrasound. We then propose that, given the inherent challenges of this task, annotation by localisation of the entire tumour mass with a bounding box could serve as an ancillary solution to segmentation for clinical training, encompassing margin uncertainty and the curation of large datasets. Methods - 30 ultrasound images of brain lesions in 30 patients were annotated by 4 annotators: 1 neuroradiologist and 3 neurosurgeons. The annotation variation of the 3 neurosurgeons was measured first; the annotations of each neurosurgeon were then individually compared to the neuroradiologist's, which served as the reference standard because those segmentations were further refined by cross-reference to the preoperative magnetic resonance imaging (MRI). The following statistical metrics were used: Intersection over Union (IoU), Sørensen-Dice similarity coefficient (DSC) and Hausdorff distance (HD). These annotations were then converted into bounding boxes for the same evaluation. Results - There was a moderate level of interobserver variance between the neurosurgeons (IoU: 0.789, DSC: 0.876, HD: 103.227) and a larger level of variance when compared against the MRI-informed reference standard annotations by the neuroradiologist, mean across annotators (IoU: 0.723, DSC: 0.813, HD: 115.675). After converting the segmentations to bounding boxes, all metrics improve; most significantly, the interquartile range drops by IoU: 37%, DSC: 41%, HD: 54%.
Conclusion - This study highlights the current challenges with detecting and defining tumour boundaries in neuro-oncological intraoperative brain ultrasound. We then show that bounding box annotation could serve as a useful complementary approach for both clinical and technical reasons.
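The overlap metrics and the segmentation-to-bounding-box conversion used in this study are straightforward to reproduce. The sketch below is a minimal illustration, not the authors' code: it computes IoU and DSC for hypothetical 2D binary masks and fills an axis-aligned bounding box around a mask (the study's Hausdorff distance computation and any 3D handling are omitted).

```python
import numpy as np

def iou_dsc(a: np.ndarray, b: np.ndarray) -> tuple:
    """Intersection over Union and Sorensen-Dice coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    total = a.sum() + b.sum()
    iou = inter / union if union else 1.0
    dsc = 2 * inter / total if total else 1.0
    return float(iou), float(dsc)

def to_bounding_box(mask: np.ndarray) -> np.ndarray:
    """Replace a (non-empty) segmentation mask by its filled axis-aligned bounding box."""
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    box = np.zeros_like(mask, dtype=bool)
    box[r0:r1 + 1, c0:c1 + 1] = True
    return box
```

Scoring `to_bounding_box(a)` against `to_bounding_box(b)` instead of the raw masks absorbs boundary disagreement between annotators, which is consistent with the improvement in overlap metrics the authors report after box conversion.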
Affiliation(s)
- Alistair Weld: Hamlyn Centre, Imperial College London, Exhibition Rd, London, SW7 2AZ, UK
- Luke Dixon: Department of Imaging, Charing Cross Hospital, Fulham Palace Rd, London, W6 8RF, UK
- Giulio Anichini: Department of Neurosurgery, Charing Cross Hospital, Fulham Palace Rd, London, W6 8RF, UK
- Neekhil Patel: Department of Neurosurgery, Charing Cross Hospital, Fulham Palace Rd, London, W6 8RF, UK
- Amr Nimer: Department of Neurosurgery, Charing Cross Hospital, Fulham Palace Rd, London, W6 8RF, UK
- Michael Dyck: School of Computation, Information and Technology, Technical University of Munich, Boltzmannstr. 3, Garching, 85748, Germany
- Kevin O'Neill: Department of Neurosurgery, Charing Cross Hospital, Fulham Palace Rd, London, W6 8RF, UK
- Adrian Lim: Department of Imaging, Charing Cross Hospital, Fulham Palace Rd, London, W6 8RF, UK
- Stamatia Giannarou: Hamlyn Centre, Imperial College London, Exhibition Rd, London, SW7 2AZ, UK
- Sophie Camp: Department of Neurosurgery, Charing Cross Hospital, Fulham Palace Rd, London, W6 8RF, UK

12
Cho H, Froelicher D, Dokmai N, Nandi A, Sadhuka S, Hong MM, Berger B. Privacy-Enhancing Technologies in Biomedical Data Science. Annu Rev Biomed Data Sci 2024; 7:317-343. [PMID: 39178425] [PMCID: PMC11346580] [DOI: 10.1146/annurev-biodatasci-120423-120107]
Abstract
The rapidly growing scale and variety of biomedical data repositories raise important privacy concerns. Conventional frameworks for collecting and sharing human subject data offer limited privacy protection, often necessitating the creation of data silos. Privacy-enhancing technologies (PETs) promise to safeguard these data and broaden their usage by providing means to share and analyze sensitive data while protecting privacy. Here, we review prominent PETs and illustrate their role in advancing biomedicine. We describe key use cases of PETs and their latest technical advances and highlight recent applications of PETs in a range of biomedical domains. We conclude by discussing outstanding challenges and social considerations that need to be addressed to facilitate a broader adoption of PETs in biomedical data science.
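One of the most widely deployed PETs discussed in such reviews is differential privacy. As a minimal illustration (not drawn from this article), the Laplace mechanism releases a numeric query answer with noise scaled to the query's sensitivity divided by the privacy budget epsilon; the `private_mean` helper and its clipping bounds below are hypothetical choices for the example.

```python
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Release `value` with epsilon-differential privacy: add Laplace noise
    with scale = sensitivity / epsilon (larger epsilon means less noise)."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

def private_mean(x: np.ndarray, epsilon: float, rng: np.random.Generator) -> float:
    """Privately release the mean of values clipped to [0, 1]. Changing one
    of the n records moves the mean by at most 1/n, which is the sensitivity."""
    x = np.clip(x, 0.0, 1.0)
    return laplace_mechanism(float(x.mean()), 1.0 / len(x), epsilon, rng)
```

The same additive-noise idea underlies differentially private federated learning, where noise is applied to model updates rather than query answers.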
Affiliation(s)
- Hyunghoon Cho: Department of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, Connecticut, USA
- David Froelicher: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Natnatee Dokmai: Department of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, Connecticut, USA
- Anupama Nandi: Department of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, Connecticut, USA
- Shuvom Sadhuka: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Matthew M Hong: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Bonnie Berger: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA; Department of Mathematics, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA

13
Waqas A, Tripathi A, Ramachandran RP, Stewart PA, Rasool G. Multimodal data integration for oncology in the era of deep neural networks: a review. Front Artif Intell 2024; 7:1408843. [PMID: 39118787] [PMCID: PMC11308435] [DOI: 10.3389/frai.2024.1408843]
Abstract
Cancer research encompasses data across various scales, modalities, and resolutions, from screening and diagnostic imaging to digitized histopathology slides to various types of molecular data and clinical records. The integration of these diverse data types for personalized cancer care and predictive modeling holds the promise of enhancing the accuracy and reliability of cancer screening, diagnosis, and treatment. Traditional analytical methods, which often focus on isolated or unimodal information, fall short of capturing the complex and heterogeneous nature of cancer data. The advent of deep neural networks has spurred the development of sophisticated multimodal data fusion techniques capable of extracting and synthesizing information from disparate sources. Among these, Graph Neural Networks (GNNs) and Transformers have emerged as powerful tools for multimodal learning, demonstrating significant success. This review presents the foundational principles of multimodal learning including oncology data modalities, taxonomy of multimodal learning, and fusion strategies. We delve into the recent advancements in GNNs and Transformers for the fusion of multimodal data in oncology, spotlighting key studies and their pivotal findings. We discuss the unique challenges of multimodal learning, such as data heterogeneity and integration complexities, alongside the opportunities it presents for a more nuanced and comprehensive understanding of cancer. Finally, we present some of the latest comprehensive multimodal pan-cancer data sources. By surveying the landscape of multimodal data integration in oncology, our goal is to underline the transformative potential of multimodal GNNs and Transformers. Through technological advancements and the methodological innovations presented in this review, we aim to chart a course for future research in this promising field. 
This review may be the first that highlights the current state of multimodal modeling applications in cancer using GNNs and transformers, presents comprehensive multimodal oncology data sources, and sets the stage for multimodal evolution, encouraging further exploration and development in personalized cancer care.
Affiliation(s)
- Asim Waqas: Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, United States; Department of Cancer Epidemiology, Moffitt Cancer Center, Tampa, FL, United States
- Aakash Tripathi: Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, United States
- Ravi P. Ramachandran: Department of Electrical and Computer Engineering, Rowan University, Glassboro, NJ, United States
- Paul A. Stewart: Department of Biostatistics and Bioinformatics, Moffitt Cancer Center, Tampa, FL, United States
- Ghulam Rasool: Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, United States

14
Pati S, Kumar S, Varma A, Edwards B, Lu C, Qu L, Wang JJ, Lakshminarayanan A, Wang SH, Sheller MJ, Chang K, Singh P, Rubin DL, Kalpathy-Cramer J, Bakas S. Privacy preservation for federated learning in health care. Patterns (N Y) 2024; 5:100974. [PMID: 39081567] [PMCID: PMC11284498] [DOI: 10.1016/j.patter.2024.100974]
Abstract
Artificial intelligence (AI) shows potential to improve health care by leveraging data to build models that can inform clinical workflows. However, access to large quantities of diverse data is needed to develop robust generalizable models. Data sharing across institutions is not always feasible due to legal, security, and privacy concerns. Federated learning (FL) allows for multi-institutional training of AI models, obviating data sharing, albeit with different security and privacy concerns. Specifically, insights exchanged during FL can leak information about institutional data. In addition, FL can introduce issues when there is limited trust among the entities performing the compute. With the growing adoption of FL in health care, it is imperative to elucidate the potential risks. We thus summarize privacy-preserving FL literature in this work with special regard to health care. We draw attention to threats and review mitigation approaches. We anticipate this review to become a health-care researcher's guide to security and privacy in FL.
Affiliation(s)
- Sarthak Pati: Center for Federated Learning in Medicine, Indiana University, Indianapolis, IN, USA; Division of Computational Pathology, Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, USA
- Sourav Kumar: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- Amokh Varma: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- Charles Lu: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, MA, USA
- Liangqiong Qu: Department of Statistics and Actuarial Science, University of Hong Kong, Hong Kong, China
- Justin J. Wang: Department of Biomedical Data Science, Radiology, and Medicine (Biomedical Informatics), Stanford University, Stanford, CA, USA
- Ken Chang: Department of Radiology, Stanford University, Stanford, CA, USA
- Praveer Singh: University of Colorado School of Medicine, Aurora, CO, USA
- Daniel L. Rubin: Department of Biomedical Data Science, Radiology, and Medicine (Biomedical Informatics), Stanford University, Stanford, CA, USA
- Spyridon Bakas: Center for Federated Learning in Medicine, Indiana University, Indianapolis, IN, USA; Division of Computational Pathology, Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, USA; Department of Biostatistics and Health Data Science, Indiana University School of Medicine, Indianapolis, IN, USA; Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA; Department of Neurological Surgery, Indiana University School of Medicine, Indianapolis, IN, USA; Department of Computer Science, Luddy School of Informatics, Computing, and Engineering, Indiana University, Indianapolis, IN, USA

15
Li L, Xiao F, Wang S, Kuang S, Li Z, Zhong Y, Xu D, Cai Y, Li S, Chen J, Liu Y, Li J, Li H, Xu H. Preoperative prediction of MGMT promoter methylation in glioblastoma based on multiregional and multi-sequence MRI radiomics analysis. Sci Rep 2024; 14:16031. [PMID: 38992201] [PMCID: PMC11239670] [DOI: 10.1038/s41598-024-66653-2]
Abstract
O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation has been demonstrated to be an important prognostic and predictive marker in glioblastoma (GBM). This study aimed to establish a reliable radiomics model based on MRI data to predict the MGMT promoter methylation status of GBM. A total of 183 patients with glioblastoma were included in this retrospective study. The Visually AcceSAble Rembrandt Images (VASARI) features were extracted for each patient, and a total of 14,676 multi-region features were extracted from the enhancing, necrotic, non-enhancing, and edematous areas on their multiparametric MRI. Twelve individual radiomics models were constructed from the radiomics features of the different subregions and sequences. Four single-sequence models, three single-region models and a combined radiomics model (ComRad) integrating all individual models were then constructed. Finally, the predictive value of adding clinical factors and VASARI characteristics was evaluated. The ComRad model combining all individual radiomics models exhibited the best performance in test set 1 and test set 2, with areas under the receiver operating characteristic curve (AUC) of 0.839 (0.709-0.963) and 0.739 (0.581-0.897), respectively. These results indicate that a radiomics model combining multi-region and multiparametric MRI features shows promising performance in predicting MGMT promoter methylation status in GBM.
Affiliation(s)
- Lanqing Li: Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan, China
- Feng Xiao: Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan, China
- Shouchao Wang: Department of Radiology, Sir Run Run Shaw Hospital (SRRSH) of School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Shengyu Kuang: Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan, China
- Zhiqiang Li: Department of Neurosurgery & Brain Glioma Center, Zhongnan Hospital of Wuhan University, Wuhan, China
- Yahua Zhong: Department of Oncology, Zhongnan Hospital of Wuhan University, Wuhan, China
- Dan Xu: Department of Nuclear Medicine, Zhongnan Hospital of Wuhan University, Wuhan, China
- Yuxiang Cai: Department of Pathology, Zhongnan Hospital of Wuhan University, Wuhan, China
- Sirui Li: Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan, China
- Jun Chen: Wuhan GE Healthcare, Wuhan, China
- Yaou Liu: Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Junjie Li: Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Huan Li: Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan, China
- Haibo Xu: Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan, China

16
Wahid KA, Cardenas CE, Marquez B, Netherton TJ, Kann BH, Court LE, He R, Naser MA, Moreno AC, Fuller CD, Fuentes D. Evolving Horizons in Radiation Therapy Auto-Contouring: Distilling Insights, Embracing Data-Centric Frameworks, and Moving Beyond Geometric Quantification. Adv Radiat Oncol 2024; 9:101521. [PMID: 38799110] [PMCID: PMC11111585] [DOI: 10.1016/j.adro.2024.101521]
Affiliation(s)
- Kareem A. Wahid: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas; Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Carlos E. Cardenas: Department of Radiation Oncology, University of Alabama at Birmingham, Birmingham, Alabama
- Barbara Marquez: The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, Texas; Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
- Tucker J. Netherton: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
- Benjamin H. Kann: Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts
- Laurence E. Court: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
- Renjie He: Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Mohamed A. Naser: Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Amy C. Moreno: Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Clifton D. Fuller: Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- David Fuentes: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas

17
Van Coillie S, Prévot J, Sánchez-Ramón S, Lowe DM, Borg M, Autran B, Segundo G, Pecoraro A, Garcelon N, Boersma C, Silva SL, Drabwell J, Quinti I, Meyts I, Ali A, Burns SO, van Hagen M, Pergent M, Mahlaoui N. Charting a course for global progress in PIDs by 2030 - proceedings from the IPOPI global multi-stakeholders' summit (September 2023). Front Immunol 2024; 15:1430678. [PMID: 39055704] [PMCID: PMC11270239] [DOI: 10.3389/fimmu.2024.1430678]
Abstract
The International Patient Organisation for Primary Immunodeficiencies (IPOPI) held its second Global Multi-Stakeholders' Summit, an annual stimulating and forward-thinking meeting uniting experts to anticipate pivotal upcoming challenges and opportunities in the field of primary immunodeficiency (PID). The 2023 summit focused on three key identified discussion points: (i) How can immunoglobulin (Ig) therapy meet future personalized patient needs? (ii) Pandemic preparedness: what's next for public health and potential challenges for the PID community? (iii) Diagnosing PIDs in 2030: what needs to happen to diagnose better and to diagnose more? Clinician-Scientists, patient representatives and other stakeholders explored avenues to improve Ig therapy through mechanistic insights and tailored Ig preparations/products according to patient-specific needs and local exposure to infectious agents, amongst others. Urgency for pandemic preparedness was discussed, as was the threat of shortage of antibiotics and increasing antimicrobial resistance, emphasizing the need for representation of PID patients and other vulnerable populations throughout crisis and care management. Discussion also covered the complexities of PID diagnosis, addressing issues such as global diagnostic disparities, the integration of patient-reported outcome measures, and the potential of artificial intelligence to increase PID diagnosis rates and to enhance diagnostic precision. These proceedings outline the outcomes and recommendations arising from the 2023 IPOPI Global Multi-Stakeholders' Summit, offering valuable insights to inform future strategies in PID management and care. Integral to this initiative is its role in fostering collaborative efforts among stakeholders to prepare for the multiple challenges facing the global PID community.
Affiliation(s)
- Samya Van Coillie: International Patient Organisation for Primary Immunodeficiencies (IPOPI), Brussels, Belgium
- Johan Prévot: International Patient Organisation for Primary Immunodeficiencies (IPOPI), Brussels, Belgium
- Silvia Sánchez-Ramón: Department of Clinical Immunology, Health Research Institute of the Hospital Clínico San Carlos/Fundación para la Investigación Biomédica del Hospital Clínico San Carlos (IML and IdISSC), Madrid, Spain
- David M. Lowe: Department of Immunology, Royal Free London National Health System (NHS) Foundation Trust, London, United Kingdom; Institute of Immunity and Transplantation, University College London, London, United Kingdom
- Michael Borg: Department of Infection Control & Sterile Services, Mater Dei Hospital, Msida, Malta
- Brigitte Autran: Sorbonne-Université, Cimi-Paris, Institut national de la santé et de la recherche médicale (INSERM) U1135, centre national de la recherche scientifique (CNRS) ERL8255, Université Pierre et Marie Curie Centre de Recherche n°7 (UPMC CR7), Paris, France
- Gesmar Segundo: Departamento de Pediatra, Universidade Federal de Uberlândia, Uberlandia, MG, Brazil
- Antonio Pecoraro: Transfusion Medicine Unit, Azienda Sanitaria Territoriale, Ascoli Piceno, Italy
- Nicolas Garcelon: Université de Paris, Imagine Institute, Data Science Platform, Institut national de la santé et de la recherche médicale Unité Mixte de Recherche (INSERM UMR) 1163, Paris, France
- Cornelis Boersma: Health-Ecore B.V., Zeist, Netherlands; Unit of Global Health, Department of Health Sciences, University Medical Center Groningen (UMCG), University of Groningen, Groningen, Netherlands; Department of Management Sciences, Open University, Heerlen, Netherlands
- Susana L. Silva: Serviço de Imunoalergologia, Unidade Local de Saúde de Santa Maria, Lisbon, Portugal; Instituto de Medicina Molecular João Lobo Antunes, Faculdade de Medicina, Universidade de Lisboa, Lisbon, Portugal
- Jose Drabwell: International Patient Organisation for Primary Immunodeficiencies (IPOPI), Brussels, Belgium
- Isabella Quinti: Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- Isabelle Meyts: Department of Pediatrics, University Hospitals Leuven, Department of Microbiology, Immunology and Transplantation, Katholieke Universiteit (KU) Leuven, Leuven, Belgium
- Adli Ali: Department of Paediatrics, Faculty of Medicine, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia; Hospital Tunku Ampuan Besar Tuanku Aishah Rohani, Universiti Kebangsaan Malaysia (UKM) Specialist Children's Hospital, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
- Siobhan O. Burns: Department of Immunology, Royal Free London National Health System (NHS) Foundation Trust, London, United Kingdom; Institute of Immunity and Transplantation, University College London, London, United Kingdom
- Martin van Hagen: Department of Internal Medicine, Division of Allergy & Clinical Immunology, Erasmus University Medical Center Rotterdam, Rotterdam, Netherlands; Department of Immunology, Erasmus University Medical Center Rotterdam, Rotterdam, Netherlands
- Martine Pergent: International Patient Organisation for Primary Immunodeficiencies (IPOPI), Brussels, Belgium
- Nizar Mahlaoui: Pediatric Hematology-Immunology and Rheumatology Unit, Necker-Enfants malades University Hospital, Assistance Publique-Hôpitaux de Paris (AP-HP), Paris, France; French National Reference Center for Primary Immune Deficiencies (CEREDIH), Necker-Enfants malades University Hospital, Assistance Publique-Hôpitaux de Paris (AP-HP), Paris, France

18
Liu Y, Huang J, Chen JC, Chen W, Pan Y, Qiu J. Predicting treatment response in multicenter non-small cell lung cancer patients based on federated learning. BMC Cancer 2024; 24:688. [PMID: 38840081] [PMCID: PMC11155008] [DOI: 10.1186/s12885-024-12456-7]
Abstract
BACKGROUND Multicenter non-small cell lung cancer (NSCLC) patient data are information-rich. However, direct integration of such data is exceptionally challenging due to constraints involving different healthcare organizations and regulations. Traditional centralized machine learning methods require pooling these sensitive medical data for training, posing risks of patient privacy leakage and data security issues. In this context, federated learning (FL) has attracted much attention as a distributed machine learning framework. It resolves this contradiction by keeping data local, conducting local model training, and aggregating only model parameters. This approach enables multicenter data to be used to maximum benefit while ensuring privacy safeguards. Based on pre-radiotherapy planning target volume images of NSCLC patients, we designed a multicenter treatment response prediction model with FL to predict the probability of remission in NSCLC patients, ensuring medical data privacy, high prediction accuracy and computing efficiency, and offering valuable insights for clinical decision-making. METHODS We retrospectively collected CT images from 245 NSCLC patients undergoing chemotherapy and radiotherapy (CRT) in four Chinese hospitals. In a simulation environment, we compared the performance of a centralized deep learning (DL) model with that of an FL model using data from two sites. Additionally, due to the unavailability of data from one hospital, we established a real-world FL model using data from three sites. Assessments were conducted using measures such as accuracy, the receiver operating characteristic curve, and confusion matrices. RESULTS The prediction performance obtained using FL methods was comparable to or better than that of traditional centralized learning. In the comparative experiment, the DL model achieved an AUC of 0.718/0.695, while the FL model demonstrated an AUC of 0.725/0.689, with the real-world FL model achieving an AUC of 0.698/0.672. CONCLUSIONS We demonstrate that the performance of an FL predictive model, developed by combining convolutional neural networks (CNNs) with data from multiple medical centers, is comparable to that of a traditional DL model obtained through centralized training. It can efficiently predict CRT treatment response in NSCLC patients while preserving privacy.
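The FL loop the abstract describes (local training at each site, then parameter aggregation at the server) is conventionally implemented with FedAvg-style aggregation. The sketch below is an illustrative server-side step, not the authors' implementation; it assumes each client ships its model parameters as NumPy arrays together with its local sample count.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """FedAvg aggregation: weight each client's parameters by its share of
    the total training samples, then sum, layer by layer. Raw images never
    leave the clients; only these parameter arrays are exchanged.

    client_params: list (one entry per client) of lists of np.ndarray layers
    client_sizes:  number of local training samples per client
    """
    total = float(sum(client_sizes))
    aggregated = []
    for layer_idx in range(len(client_params[0])):
        acc = np.zeros_like(client_params[0][layer_idx], dtype=float)
        for params, n in zip(client_params, client_sizes):
            acc += (n / total) * params[layer_idx]
        aggregated.append(acc)
    return aggregated
```

In a full round, the server broadcasts the aggregated parameters back to the clients, each client resumes local training from them, and the cycle repeats until convergence.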
Affiliation(s)
- Yuan Liu: School of Radiology, Shandong First Medical University and Shandong Academy of Medical Sciences, Taian, China
- Jinzao Huang: Department of Radiology, Cathay General Hospital, Taipei, China; Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming Chiao-Tung University, Taipei, China
- Jyh-Cheng Chen: Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming Chiao-Tung University, Taipei, China; Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, China
- Wei Chen: School of Radiology, Shandong First Medical University and Shandong Academy of Medical Sciences, Taian, China
- Yuteng Pan: School of Radiology, Shandong First Medical University and Shandong Academy of Medical Sciences, Taian, China
- Jianfeng Qiu: School of Radiology, Second Affiliated Hospital of Shandong First Medical University and Shandong Academy of Medical Sciences, Taian, China

19
Haskell-Mendoza AP, Reason EH, Gonzalez AT, Jackson JD, Sankey EW, Srinivasan ES, Herndon JE, Fecci PE, Calabrese E. Automated segmentation of ablated lesions using deep convolutional neural networks: A basis for response assessment following laser interstitial thermal therapy. Neuro Oncol 2024; 26:1152-1162. [PMID: 38170451] [PMCID: PMC11145442] [DOI: 10.1093/neuonc/noad261]
Abstract
BACKGROUND Laser interstitial thermal therapy (LITT) of intracranial tumors or radiation necrosis enables tissue diagnosis, cytoreduction, and rapid return to systemic therapies. Ablated tissue remains in situ, resulting in characteristic post-LITT edema associated with transient clinical worsening and complicating post-LITT response assessment. METHODS All patients receiving LITT at a single center for tumors or radiation necrosis from 2015 to 2023 with ≥9 months of MRI follow-up were included. An nnU-Net segmentation model was trained to automatically segment contrast-enhancing lesion volume (CeLV) of LITT-treated lesions on T1-weighted images. Response assessment was performed using volumetric measurements. RESULTS Three hundred and eighty four unique MRI exams of 61 LITT-treated lesions and 6 control cases of medically managed radiation necrosis were analyzed. Automated segmentation was accurate in 367/384 (95.6%) images. CeLV increased to a median of 68.3% (IQR 35.1-109.2%) from baseline at 1-3 months from LITT (P = 0.0012) and returned to baseline thereafter. Overall survival (OS) for LITT-treated patients was 39.1 (9.2-93.4) months. Lesion expansion above 40% from volumetric nadir or baseline was considered volumetric progression. Twenty-one of 56 (37.5%) patients experienced progression for a volumetric progression-free survival of 21.4 (6.0-93.4) months. Patients with volumetric progression had worse OS (17.3 vs 62.1 months, P = 0.0015). CONCLUSIONS Post-LITT CeLV expansion is quantifiable and resolves within 6 months of LITT. Development of response assessment criteria for LITT-treated lesions is feasible and should be considered for clinical trials. Automated lesion segmentation could speed the adoption of volumetric response criteria in clinical practice.
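The progression rule in this abstract (lesion expansion above 40% from the volumetric nadir or baseline) reduces to a simple scan over the chronologically ordered volume series. The sketch below is a hypothetical reduction of that rule, not the authors' code; it tracks a running nadir that starts at the baseline volume (so the nadir never exceeds baseline), and ignores the early post-LITT edema window that the study handles clinically.

```python
from typing import List, Optional

def first_volumetric_progression(volumes: List[float],
                                 threshold: float = 0.40) -> Optional[int]:
    """Return the index of the first follow-up scan whose lesion volume
    exceeds the running nadir (initially the baseline, volumes[0]) by more
    than `threshold`, or None if no scan meets the criterion."""
    nadir = volumes[0]
    for i, v in enumerate(volumes[1:], start=1):
        if v > nadir * (1.0 + threshold):
            return i
        nadir = min(nadir, v)
    return None
```

For example, a series that shrinks to a nadir and then rebounds past 140% of that nadir is flagged at the rebound scan, while steady sub-threshold growth is not.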
Affiliation(s)
- Ellery H Reason
- Duke University School of Medicine, Durham, North Carolina, USA
- Joshua D Jackson
- Department of Neurosurgery, Duke University Medical Center, Durham, North Carolina, USA
- Eric W Sankey
- Department of Neurosurgery, Piedmont Athens Regional Medical Center, Athens, Georgia, USA
- Ethan S Srinivasan
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, Maryland, USA
- James E Herndon
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, North Carolina, USA
- Peter E Fecci
- The Preston Robert Tisch Brain Tumor Center, Department of Neurosurgery, Duke University Medical Center, Durham, North Carolina, USA
- Evan Calabrese
- Department of Radiology, Division of Neuroradiology, Duke University Medical Center, Durham, North Carolina, USA
20
Xiao T, Kong S, Zhang Z, Hua D, Liu F. A review of big data technology and its application in cancer care. Comput Biol Med 2024; 176:108577. [PMID: 38739981 DOI: 10.1016/j.compbiomed.2024.108577] [Received: 11/19/2023] [Revised: 05/07/2024] [Accepted: 05/07/2024] [Indexed: 05/16/2024]
Abstract
The development of modern medical devices and information technology has led to a rapid growth in the amount of data available for health protection information, with the concept of medical big data emerging globally, along with significant advances in cancer care relying on data-driven approaches. However, outstanding issues such as fragmented data governance, low-quality data specification, and data lock-in still make sharing challenging. Big data technology provides solutions for managing massive heterogeneous data while combining artificial intelligence (AI) techniques such as machine learning (ML) and deep learning (DL) to better mine the intrinsic connections between data. This paper surveys and organizes recent articles on big data technology and its applications in cancer, dividing them into three different types to outline their primary content and summarize their critical role in assisting cancer care. It then examines the latest research directions in big data technology in cancer and evaluates the current state of development of each type of application. Finally, current challenges and opportunities are discussed, and recommendations are made for the further integration of big data technology into the medical industry in the future.
Affiliation(s)
- Tianyun Xiao
- Hebei Key Laboratory of Data Science and Application, North China University of Science and Technology, Tangshan, Hebei, 063210, China; The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, Hebei, 063210, China; College of Science, North China University of Science and Technology, Tangshan, Hebei, 063210, China
- Shanshan Kong
- College of Science, North China University of Science and Technology, Tangshan, Hebei, 063210, China
- Zichen Zhang
- Hebei Key Laboratory of Data Science and Application, North China University of Science and Technology, Tangshan, Hebei, 063210, China; The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, Hebei, 063210, China; College of Science, North China University of Science and Technology, Tangshan, Hebei, 063210, China
- Dianbo Hua
- Beijing Sitairui Cancer Data Analysis Joint Laboratory, Beijing, 101149, China
- Fengchun Liu
- Hebei Key Laboratory of Data Science and Application, North China University of Science and Technology, Tangshan, Hebei, 063210, China; The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, Hebei, 063210, China; College of Science, North China University of Science and Technology, Tangshan, Hebei, 063210, China; Hebei Engineering Research Center for the Intelligentization of Iron Ore Optimization and Ironmaking Raw Materials Preparation Processes, North China University of Science and Technology, Tangshan, Hebei, China; Tangshan Intelligent Industry and Image Processing Technology Innovation Center, North China University of Science and Technology, Tangshan, Hebei, China
21
Nordblom N, Büttner M, Schwendicke F. Artificial Intelligence in Orthodontics: Critical Review. J Dent Res 2024; 103:577-584. [PMID: 38682436 PMCID: PMC11118788 DOI: 10.1177/00220345241235606] [Indexed: 05/01/2024] Open
Abstract
With increasing digitalization in orthodontics, certain orthodontic manufacturing processes such as the fabrication of indirect bonding trays, aligner production, or wire bending can be automated. However, orthodontic treatment planning and evaluation remains a specialist's task and responsibility. As the prediction of growth in orthodontic patients and response to orthodontic treatment is inherently complex and individual, orthodontists make use of features gathered from longitudinal, multimodal, and standardized orthodontic data sets. Currently, these data sets are used by the orthodontist to make informed, rule-based treatment decisions. In research, artificial intelligence (AI) has been successfully applied to assist orthodontists with the extraction of relevant data from such data sets. Here, AI has been applied for the analysis of clinical imagery, such as automated landmark detection in lateral cephalograms but also for evaluation of intraoral scans or photographic data. Furthermore, AI is applied to help orthodontists with decision support for treatment decisions such as the need for orthognathic surgery or for orthodontic tooth extractions. One major challenge in current AI research in orthodontics is the limited generalizability, as most studies use unicentric data with high risks of bias. Moreover, comparing AI across different studies and tasks is virtually impossible as both outcomes and outcome metrics vary widely, and underlying data sets are not standardized. Notably, only few AI applications in orthodontics have reached full clinical maturity and regulatory approval, and researchers in the field are tasked with tackling real-world evaluation and implementation of AI into the orthodontic workflow.
Affiliation(s)
- N.F. Nordblom
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, Berlin, Germany
- M. Büttner
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, Berlin, Germany
- F. Schwendicke
- Department of Conservative Dentistry and Periodontology, University Hospital, Ludwig-Maximilians University of Munich, Munich, Germany
22
Zhou J, Wang X, Li Y, Yang Y, Shi J. Federated-learning-based prognosis assessment model for acute pulmonary thromboembolism. BMC Med Inform Decis Mak 2024; 24:141. [PMID: 38802861 PMCID: PMC11131248 DOI: 10.1186/s12911-024-02543-x] [Received: 01/23/2024] [Accepted: 05/17/2024] [Indexed: 05/29/2024] Open
Abstract
BACKGROUND Acute pulmonary thromboembolism (PTE) is a common cardiovascular disease, and accurately recognizing low-prognosis-risk patients with PTE is significant for clinical treatment. This study evaluated the value of federated learning (FL) technology in PTE prognosis risk assessment while ensuring the security of clinical data. METHODS A retrospective dataset consisting of PTE patients from 12 hospitals was collected, and 19 physical indicators of patients were included to train the FL-based prognosis assessment model to predict the 30-day death event. First, multiple machine learning methods based on FL were compared to choose the superior model. Then the performance of models trained on independent and identically distributed (IID) and non-IID datasets was calculated, and the models were tested further on real-world data. In addition, the optimal model was compared with the pulmonary embolism severity index (PESI), the simplified PESI (sPESI), and the Peking Union Medical College Hospital (PUMCH) model. RESULTS The area under the receiver operating characteristic curve (AUC) of logistic regression (0.842) outperformed the convolutional neural network (0.819) and the multilayer perceptron (0.784). Under IID, the AUC of the model trained using FL (Fed) on the training, validation, and test sets was 0.852 ± 0.002, 0.867 ± 0.012, and 0.829 ± 0.004. Under real-world conditions, the AUC of Fed was 0.855 ± 0.005, 0.882 ± 0.003, and 0.835 ± 0.005. Under both IID and real-world conditions, the AUC of Fed surpassed the centralized model (NonFed) (0.847 ± 0.001, 0.841 ± 0.001, and 0.811 ± 0.001). Under non-IID, although the AUC of Fed (0.846 ± 0.047) outperformed NonFed (0.841 ± 0.001) on the validation set, it (0.821 ± 0.016 and 0.799 ± 0.031) slightly lagged behind NonFed (0.847 ± 0.001 and 0.811 ± 0.001) on the training and test sets. In practice, the AUC of Fed (0.853, 0.884, and 0.842) outshone PESI (0.812, 0.789, and 0.791), sPESI (0.817, 0.770, and 0.786), and PUMCH (0.848, 0.814, and 0.832) on the training, validation, and test sets. Additionally, Fed (0.842) exhibited higher AUC values across test sets compared with models trained directly on the individual clients (0.758, 0.801, 0.783, 0.741, 0.788). CONCLUSIONS In this study, the FL-based machine learning model demonstrated commendable efficacy for PTE prognostic risk prediction, rendering it well suited for deployment in hospitals.
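As a rough illustration of the federated setup this study describes (each hospital trains locally, and only model parameters, never patient records, are aggregated), here is a minimal federated-averaging loop for logistic regression. The synthetic data, learning rate, and round counts are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=50):
    """One client's local training: gradient descent on logistic loss,
    starting from the current global weights."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))  # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)  # gradient of mean log-loss
    return w

def fedavg(w, clients, rounds=20):
    """Federated averaging: each round, every client refines the global
    weights on its own data; the server averages the returned weights,
    weighted by client sample counts. Raw data never leaves a client."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in clients]
        w = np.average(updates, axis=0, weights=sizes)
    return w

# Illustrative run: three "hospitals" with synthetic, linearly separable data.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    clients.append((X, (X @ w_true > 0).astype(float)))
w = fedavg(np.zeros(2), clients)
```

On separable toy data like this, the federated weights recover the direction of the generating weights; the study's non-IID results above show why real multi-hospital data is harder than this idealized case.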
Affiliation(s)
- Jun Zhou
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
- Xin Wang
- Department of Ultrasound, Peking Union Medical College Hospital, Beijing, China
- Yiyao Li
- Department of Pulmonary and Critical Care Medicine, Peking Union Medical College Hospital, Beijing, China
- Yuqing Yang
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
- Juhong Shi
- Department of Pulmonary and Critical Care Medicine, Peking Union Medical College Hospital, Beijing, China
23
Baheti B, Innani S, Nasrallah M, Bakas S. Prognostic stratification of glioblastoma patients by unsupervised clustering of morphology patterns on whole slide images furthering our disease understanding. Front Neurosci 2024; 18:1304191. [PMID: 38831756 PMCID: PMC11146603 DOI: 10.3389/fnins.2024.1304191] [Received: 09/29/2023] [Accepted: 04/25/2024] [Indexed: 06/05/2024] Open
Abstract
Introduction Glioblastoma (GBM) is a highly aggressive malignant tumor of the central nervous system that displays varying molecular and morphological profiles, leading to challenging prognostic assessments. Stratifying GBM patients according to overall survival (OS) from H&E-stained whole slide images (WSI) using advanced computational methods is challenging, but with direct clinical implications. Methods This work is focusing on GBM (IDH-wildtype, CNS WHO Gr.4) cases, identified from the TCGA-GBM and TCGA-LGG collections after considering the 2021 WHO classification criteria. The proposed approach starts with patch extraction in each WSI, followed by comprehensive patch-level curation to discard artifactual content, i.e., glass reflections, pen markings, dust on the slide, and tissue tearing. Each patch is then computationally described as a feature vector defined by a pre-trained VGG16 convolutional neural network. Principal component analysis provides a feature representation of reduced dimensionality, further facilitating identification of distinct groups of morphology patterns, via unsupervised k-means clustering. Results The optimal number of clusters, according to cluster reproducibility and separability, is automatically determined based on the rand index and silhouette coefficient, respectively. Our proposed approach achieved prognostic stratification accuracy of 83.33% on a multi-institutional independent unseen hold-out test set with sensitivity and specificity of 83.33%. Discussion We hypothesize that the quantification of these clusters of morphology patterns, reflect the tumor's spatial heterogeneity and yield prognostic relevant information to distinguish between short and long survivors using a decision tree classifier. The interpretability analysis of the obtained results can contribute to furthering and quantifying our understanding of GBM and potentially improving our diagnostic and prognostic predictions.
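The pipeline this abstract outlines (feature extraction, dimensionality reduction, k-means, and cluster-count selection by reproducibility and separability) can be illustrated in miniature. The sketch below substitutes 2-D synthetic points for the VGG16/PCA patch features and hand-rolls k-means and the silhouette coefficient; it shows only the separability-based selection of k, not the authors' full method (which also uses the rand index for reproducibility).

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign points to the nearest center,
    recompute centers as cluster means, repeat."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

def mean_silhouette(X, labels):
    """Average silhouette coefficient: for each point, (b - a) / max(a, b),
    where a is its mean distance to its own cluster and b its mean distance
    to the nearest other cluster."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    scores = []
    for i, li in enumerate(labels):
        same = labels == li
        same[i] = False
        if not same.any():
            continue  # singleton clusters contribute no score here
        a = D[i, same].mean()
        b = min(D[i, labels == lj].mean()
                for lj in set(labels.tolist()) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

def pick_k(X, candidates=(2, 3, 4)):
    """Choose the cluster count with the best separability (silhouette)."""
    return max(candidates, key=lambda k: mean_silhouette(X, kmeans(X, k)))
```

On two well-separated blobs, `pick_k` selects k = 2, mirroring how the study's optimal number of morphology clusters is chosen automatically rather than fixed in advance.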
Affiliation(s)
- Bhakti Baheti
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, United States
- Center for Artificial Intelligence and Data Science for Integrated Diagnostics (AI2D) and Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, United States
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Shubham Innani
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, United States
- Center for Artificial Intelligence and Data Science for Integrated Diagnostics (AI2D) and Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, United States
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- MacLean Nasrallah
- Center for Artificial Intelligence and Data Science for Integrated Diagnostics (AI2D) and Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, United States
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Spyridon Bakas
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, United States
- Center for Artificial Intelligence and Data Science for Integrated Diagnostics (AI2D) and Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, United States
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Department of Computer Science, Luddy School of Informatics, Computing, and Engineering, Indiana University, Indianapolis, IN, United States
24
Ellison J, Caliva F, Damasceno P, Luks TL, LaFontaine M, Cluceru J, Kemisetti A, Li Y, Molinaro AM, Pedoia V, Villanueva-Meyer JE, Lupo JM. Improving the Generalizability of Deep Learning for T2-Lesion Segmentation of Gliomas in the Post-Treatment Setting. Bioengineering (Basel) 2024; 11:497. [PMID: 38790363 PMCID: PMC11117752 DOI: 10.3390/bioengineering11050497] [Received: 03/15/2024] [Revised: 04/24/2024] [Accepted: 05/07/2024] [Indexed: 05/26/2024] Open
Abstract
Although fully automated volumetric approaches for monitoring brain tumor response have many advantages, most available deep learning models are optimized for highly curated, multi-contrast MRI from newly diagnosed gliomas, which are not representative of post-treatment cases in the clinic. Improving segmentation for treated patients is critical to accurately tracking changes in response to therapy. We investigated mixing data from newly diagnosed (n = 208) and treated (n = 221) gliomas in training, applying transfer learning (TL) from pre- to post-treatment imaging domains, and incorporating spatial regularization for T2-lesion segmentation using only T2 FLAIR images as input to improve generalization post-treatment. These approaches were evaluated on 24 patients suspected of progression who had received prior treatment. Including 26% of treated patients in training improved performance by 13.9%, and including more treated and untreated patients resulted in minimal changes. Fine-tuning with treated glioma improved sensitivity compared to data mixing by 2.5% (p < 0.05), and spatial regularization further improved performance when used with TL by 95th HD, Dice, and sensitivity (6.8%, 0.8%, 2.2%; p < 0.05). While training with ≥60 treated patients yielded the majority of performance gain, TL and spatial regularization further improved T2-lesion segmentation to treated gliomas using a single MR contrast and minimal processing, demonstrating clinical utility in response assessment.
Affiliation(s)
- Jacob Ellison
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- UCSF/UC Berkeley Graduate Program in Bioengineering, San Francisco, CA 94143, USA
- Francesco Caliva
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- Pablo Damasceno
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- Tracy L. Luks
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA
- Marisa LaFontaine
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA
- Julia Cluceru
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- Anil Kemisetti
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA
- Yan Li
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- UCSF/UC Berkeley Graduate Program in Bioengineering, San Francisco, CA 94143, USA
- Javier E. Villanueva-Meyer
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- Janine M. Lupo
- Department of Radiology and Biomedical Imaging, UCSF, San Francisco, CA 94143, USA
- Center for Intelligent Imaging, UCSF, San Francisco, CA 94143, USA
- UCSF/UC Berkeley Graduate Program in Bioengineering, San Francisco, CA 94143, USA
25
LaBella D, Khanna O, McBurney-Lin S, Mclean R, Nedelec P, Rashid AS, Tahon NH, Altes T, Baid U, Bhalerao R, Dhemesh Y, Floyd S, Godfrey D, Hilal F, Janas A, Kazerooni A, Kent C, Kirkpatrick J, Kofler F, Leu K, Maleki N, Menze B, Pajot M, Reitman ZJ, Rudie JD, Saluja R, Velichko Y, Wang C, Warman PI, Sollmann N, Diffley D, Nandolia KK, Warren DI, Hussain A, Fehringer JP, Bronstein Y, Deptula L, Stein EG, Taherzadeh M, Portela de Oliveira E, Haughey A, Kontzialis M, Saba L, Turner B, Brüßeler MMT, Ansari S, Gkampenis A, Weiss DM, Mansour A, Shawali IH, Yordanov N, Stein JM, Hourani R, Moshebah MY, Abouelatta AM, Rizvi T, Willms K, Martin DC, Okar A, D'Anna G, Taha A, Sharifi Y, Faghani S, Kite D, Pinho M, Haider MA, Alonso-Basanta M, Villanueva-Meyer J, Rauschecker AM, Nada A, Aboian M, Flanders A, Bakas S, Calabrese E. A multi-institutional meningioma MRI dataset for automated multi-sequence image segmentation. Sci Data 2024; 11:496. [PMID: 38750041 PMCID: PMC11096318 DOI: 10.1038/s41597-024-03350-9] [Received: 02/20/2024] [Accepted: 05/07/2024] [Indexed: 05/18/2024] Open
Abstract
Meningiomas are the most common primary intracranial tumors and can be associated with significant morbidity and mortality. Radiologists, neurosurgeons, neuro-oncologists, and radiation oncologists rely on brain MRI for diagnosis, treatment planning, and longitudinal treatment monitoring. However, automated, objective, and quantitative tools for non-invasive assessment of meningiomas on multi-sequence MR images are not available. Here we present the BraTS Pre-operative Meningioma Dataset, as the largest multi-institutional expert annotated multilabel meningioma multi-sequence MR image dataset to date. This dataset includes 1,141 multi-sequence MR images from six sites, each with four structural MRI sequences (T2-, T2/FLAIR-, pre-contrast T1-, and post-contrast T1-weighted) accompanied by expert manually refined segmentations of three distinct meningioma sub-compartments: enhancing tumor, non-enhancing tumor, and surrounding non-enhancing T2/FLAIR hyperintensity. Basic demographic data are provided including age at time of initial imaging, sex, and CNS WHO grade. The goal of releasing this dataset is to facilitate the development of automated computational methods for meningioma segmentation and expedite their incorporation into clinical practice, ultimately targeting improvement in the care of meningioma patients.
Affiliation(s)
- Dominic LaBella
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Omaditya Khanna
- Department of Neurosurgery, Thomas Jefferson University, Philadelphia, PA, USA
- Shan McBurney-Lin
- Center for Intelligent Imaging (ci2), Department of Radiology & Biomedical Imaging, University of California San Francisco (UCSF), San Francisco, CA, USA
- Pierre Nedelec
- Center for Intelligent Imaging (ci2), Department of Radiology & Biomedical Imaging, University of California San Francisco (UCSF), San Francisco, CA, USA
- Arif S Rashid
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ujjwal Baid
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, School of Medicine, Indiana University, Indianapolis, IN, USA
- Radhika Bhalerao
- Center for Intelligent Imaging (ci2), Department of Radiology & Biomedical Imaging, University of California San Francisco (UCSF), San Francisco, CA, USA
- Scott Floyd
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Devon Godfrey
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Anahita Kazerooni
- Center for Data-Driven Discovery in Biomedicine (D3b), The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Department of Neurosurgery, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Collin Kent
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- John Kirkpatrick
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Florian Kofler
- Helmholtz AI, Helmholtz Munich, Neuherberg, Germany
- Department of Computer Science, TUM School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Kevin Leu
- Center for Intelligent Imaging (ci2), Department of Radiology & Biomedical Imaging, University of California San Francisco (UCSF), San Francisco, CA, USA
- Maxence Pajot
- Center for Intelligent Imaging (ci2), Department of Radiology & Biomedical Imaging, University of California San Francisco (UCSF), San Francisco, CA, USA
- Zachary J Reitman
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Jeffrey D Rudie
- Department of Radiology, University of California San Diego, San Diego, CA, USA
- Rachit Saluja
- Department of Radiology, Cornell University, Ithaca, NY, USA
- Yury Velichko
- Department of Radiology, Northwestern University, Evanston, IL, USA
- Chunhao Wang
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Pranav I Warman
- Duke University Medical Center, School of Medicine, Durham, NC, USA
- Nico Sollmann
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital Ulm, Ulm, Germany
- TUM-Neuroimaging Center, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Khanak K Nandolia
- Department of Diagnostic and Interventional Radiology, All India Institute of Medical Sciences, Rishikesh, India
- Daniel I Warren
- Department of Neuroradiology, Washington University, St. Louis, MO, USA
- Ali Hussain
- University of Rochester Medical Center, Rochester, NY, USA
- John Pascal Fehringer
- Faculty of Medicine, Jena University Hospital, Friedrich Schiller University Jena, Jena, Germany
- Lisa Deptula
- Ross University School of Medicine, Bridgetown, Barbados
- Evan G Stein
- Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
- Aoife Haughey
- Department of Neuroradiology, JDMI, University of Toronto, Toronto, ON, Canada
- Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria of Cagliari-Polo di Monserrato, Cagliari, Italy
- David Maximilian Weiss
- Department of Neuroradiology, University Hospital Essen, Essen, North Rhine-Westphalia, Germany
- Islam H Shawali
- Department of Radiology, Kasr Alainy, Cairo University, Cairo, Egypt
- Nikolay Yordanov
- Faculty of Medicine, Medical University of Sofia, Sofia, Bulgaria
- Joel M Stein
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Roula Hourani
- Department of Radiology, American University of Beirut Medical Center, Beirut, Lebanon
- Tanvir Rizvi
- Department of Radiology and Medical Imaging, University of Virginia Health, Charlottesville, VA, USA
- Dann C Martin
- Department of Radiology and Radiologic Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Abdullah Okar
- Faculty of Medicine, Hamburg University, Hamburg, Germany
- Gennaro D'Anna
- Neuroimaging Unit, ASST Ovest Milanese, Legnano, Milan, Italy
- Ahmed Taha
- University of Manitoba, Winnipeg, Manitoba, Canada
- Yasaman Sharifi
- Department of Radiology, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Shahriar Faghani
- Radiology Informatics Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Dominic Kite
- Department of Radiology, University Hospitals Bristol and Weston NHS Foundation Trust, Bristol, United Kingdom
- Marco Pinho
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Michelle Alonso-Basanta
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Javier Villanueva-Meyer
- Center for Intelligent Imaging (ci2), Department of Radiology & Biomedical Imaging, University of California San Francisco (UCSF), San Francisco, CA, USA
- Andreas M Rauschecker
- Center for Intelligent Imaging (ci2), Department of Radiology & Biomedical Imaging, University of California San Francisco (UCSF), San Francisco, CA, USA
- Ayman Nada
- University of Missouri, Columbia, MO, USA
- Mariam Aboian
- Department of Radiology, Children's Hospital of Philadelphia (CHOP), Philadelphia, PA, USA
- Adam Flanders
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA, USA
- Spyridon Bakas
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, School of Medicine, Indiana University, Indianapolis, IN, USA
- Department of Neurological Surgery, School of Medicine, Indiana University, Indianapolis, IN, USA
- Department of Radiology and Imaging Sciences, School of Medicine, Indiana University, Indianapolis, IN, USA
- Evan Calabrese
- Department of Radiology, Duke University Medical Center, Durham, NC, USA
26
Ross J, Hammouche S, Chen Y, Rockall AG. Beyond regulatory compliance: evaluating radiology artificial intelligence applications in deployment. Clin Radiol 2024; 79:338-345. [PMID: 38360516 DOI: 10.1016/j.crad.2024.01.026] [Received: 08/25/2023] [Revised: 01/24/2024] [Accepted: 01/29/2024] [Indexed: 02/17/2024]
Abstract
The implementation of artificial intelligence (AI) applications in routine practice, following regulatory approval, is currently limited by practical concerns around reliability, accountability, trust, safety, and governance, in addition to factors such as cost-effectiveness and institutional information technology support. When a technology is new and relatively untested in a field, professional confidence is lacking and there is a sense of the need to go above the baseline level of validation and compliance. In this article, we propose an approach that goes beyond standard regulatory compliance for AI apps that are approved for marketing, including independent benchmarking in the lab as well as clinical audit in practice, with the aims of increasing trust and preventing harm.
Affiliation(s)
- J Ross
- Department of Cancer and Surgery, Imperial College London, UK.
| | - S Hammouche
- Department of Cancer and Surgery, Imperial College London, UK
- Y Chen
- School of Medicine, University of Nottingham, UK
- A G Rockall
- Department of Cancer and Surgery, Imperial College London, UK
27
Lotter W, Hassett MJ, Schultz N, Kehl KL, Van Allen EM, Cerami E. Artificial Intelligence in Oncology: Current Landscape, Challenges, and Future Directions. Cancer Discov 2024; 14:711-726. [PMID: 38597966] [PMCID: PMC11131133] [DOI: 10.1158/2159-8290.cd-23-1199]
Abstract
Artificial intelligence (AI) in oncology is advancing beyond algorithm development to integration into clinical practice. This review describes the current state of the field, with a specific focus on clinical integration. AI applications are structured according to cancer type and clinical domain, focusing on the four most common cancers and tasks of detection, diagnosis, and treatment. These applications encompass various data modalities, including imaging, genomics, and medical records. We conclude with a summary of existing challenges, evolving solutions, and potential future directions for the field. SIGNIFICANCE: AI is increasingly being applied to all aspects of oncology, where several applications are maturing beyond research and development to direct clinical integration. This review summarizes the current state of the field through the lens of clinical translation along the clinical care continuum. Emerging areas are also highlighted, along with common challenges, evolving solutions, and potential future directions for the field.
Affiliation(s)
- William Lotter
- Department of Data Science, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Pathology, Brigham and Women’s Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Michael J. Hassett
- Harvard Medical School, Boston, MA, USA
- Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Nikolaus Schultz
- Marie-Josée and Henry R. Kravis Center for Molecular Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Kenneth L. Kehl
- Harvard Medical School, Boston, MA, USA
- Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Eliezer M. Van Allen
- Harvard Medical School, Boston, MA, USA
- Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Cancer Program, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Ethan Cerami
- Department of Data Science, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
28
Wiltgen T, McGinnis J, Schlaeger S, Kofler F, Voon C, Berthele A, Bischl D, Grundl L, Will N, Metz M, Schinz D, Sepp D, Prucker P, Schmitz-Koep B, Zimmer C, Menze B, Rueckert D, Hemmer B, Kirschke J, Mühlau M, Wiestler B. LST-AI: A deep learning ensemble for accurate MS lesion segmentation. Neuroimage Clin 2024; 42:103611. [PMID: 38703470] [PMCID: PMC11088188] [DOI: 10.1016/j.nicl.2024.103611]
Abstract
Automated segmentation of brain white matter lesions is crucial for both clinical assessment and scientific research in multiple sclerosis (MS). Over a decade ago, we introduced an engineered lesion segmentation tool, LST. While recent lesion segmentation approaches have leveraged artificial intelligence (AI), they often remain proprietary and difficult to adopt. As an open-source tool, we present LST-AI, an advanced deep learning-based extension of LST that consists of an ensemble of three 3D U-Nets. LST-AI explicitly addresses the imbalance between white matter (WM) lesions and non-lesioned WM. It employs a composite loss function incorporating binary cross-entropy and Tversky loss to improve segmentation of the highly heterogeneous MS lesions. We train the network ensemble on 491 pairs of T1-weighted and FLAIR images from MS patients, collected in-house on a 3T MRI scanner, with training lesion maps manually segmented by expert neuroradiologists. LST-AI also includes a lesion location annotation tool, labeling lesions as periventricular, infratentorial, and juxtacortical according to the 2017 McDonald criteria, and, additionally, as subcortical. We conduct evaluations on 103 test cases consisting of publicly available data using the Anima segmentation validation tools and compare LST-AI with several publicly available lesion segmentation models. Our empirical analysis shows that LST-AI achieves superior performance compared to existing methods. Its Dice and F1 scores exceeded 0.62, outperforming LST, SAMSEG (Sequence Adaptive Multimodal SEGmentation), and the popular nnUNet framework, which all scored below 0.56. Notably, LST-AI demonstrated exceptional performance on the MSSEG-1 challenge dataset, an international WM lesion segmentation challenge, with a Dice score of 0.65 and an F1 score of 0.63, surpassing all other competing models at the time of the challenge.
With increasing lesion volume, the detection rate rose rapidly, exceeding 75% for lesions with volumes between 10 mm³ and 100 mm³. Given its higher segmentation performance, we recommend that research groups currently using LST transition to LST-AI. To facilitate broad adoption, we are releasing LST-AI as an open-source model, available as a command-line tool, dockerized container, or Python script, enabling diverse applications across multiple platforms.
Affiliation(s)
- Tun Wiltgen
- Department of Neurology, School of Medicine, Technical University of Munich, Munich, Germany; TUM-Neuroimaging Center, School of Medicine, Technical University of Munich, Munich, Germany
- Julian McGinnis
- Department of Neurology, School of Medicine, Technical University of Munich, Munich, Germany; TUM-Neuroimaging Center, School of Medicine, Technical University of Munich, Munich, Germany; Department of Computer Science, Institute for AI in Medicine, Technical University of Munich, Munich, Germany
- Sarah Schlaeger
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Florian Kofler
- Department of Computer Science, Institute for AI in Medicine, Technical University of Munich, Munich, Germany; Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany; TranslaTUM, Central Institute for Translational Cancer Research of the Technical University of Munich, Munich, Germany; Helmholtz AI, Helmholtz Munich, Neuherberg, Germany
- CuiCi Voon
- Department of Neurology, School of Medicine, Technical University of Munich, Munich, Germany; TUM-Neuroimaging Center, School of Medicine, Technical University of Munich, Munich, Germany
- Achim Berthele
- Department of Neurology, School of Medicine, Technical University of Munich, Munich, Germany
- Daria Bischl
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Lioba Grundl
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Nikolaus Will
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Marie Metz
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- David Schinz
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany; Institute of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Dominik Sepp
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Philipp Prucker
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Benita Schmitz-Koep
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Claus Zimmer
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Bjoern Menze
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Daniel Rueckert
- Department of Computer Science, Institute for AI in Medicine, Technical University of Munich, Munich, Germany; Department of Computing, Imperial College London, London, United Kingdom
- Bernhard Hemmer
- Department of Neurology, School of Medicine, Technical University of Munich, Munich, Germany; Munich Cluster for Systems Neurology (SyNergy), Munich, Germany
- Jan Kirschke
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Mark Mühlau
- Department of Neurology, School of Medicine, Technical University of Munich, Munich, Germany; TUM-Neuroimaging Center, School of Medicine, Technical University of Munich, Munich, Germany.
- Benedikt Wiestler
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany; TranslaTUM, Central Institute for Translational Cancer Research of the Technical University of Munich, Munich, Germany; AI for Image-Guided Diagnosis and Therapy, School of Medicine, Technical University of Munich, Munich, Germany
29
Wiestler B, Bison B, Behrens L, Tüchert S, Metz M, Griessmair M, Jakob M, Schlegel PG, Binder V, von Luettichau I, Metzler M, Johann P, Hau P, Frühwald M. Human-Level Differentiation of Medulloblastoma from Pilocytic Astrocytoma: A Real-World Multicenter Pilot Study. Cancers (Basel) 2024; 16:1474. [PMID: 38672556] [PMCID: PMC11048511] [DOI: 10.3390/cancers16081474]
Abstract
Medulloblastoma and pilocytic astrocytoma are the two most common pediatric brain tumors with overlapping imaging features. In this proof-of-concept study, we investigated using a deep learning classifier trained on a multicenter data set to differentiate these tumor types. We developed a patch-based 3D-DenseNet classifier, utilizing automated tumor segmentation. Given the heterogeneity of imaging data (and available sequences), we used all individually available preoperative imaging sequences to make the model robust to varying input. We compared the classifier to diagnostic assessments by five readers with varying experience in pediatric brain tumors. Overall, we included 195 preoperative MRIs from children with medulloblastoma (n = 69) or pilocytic astrocytoma (n = 126) across six university hospitals. In the 64-patient test set, the DenseNet classifier achieved a high AUC of 0.986, correctly predicting 62/64 (97%) diagnoses. It misclassified one case of each tumor type. Human reader accuracy ranged from 100% (expert neuroradiologist) to 80% (resident). The classifier performed significantly better than relatively inexperienced readers (p < 0.05) and was on par with pediatric neuro-oncology experts. Our proof-of-concept study demonstrates a deep learning model based on automated tumor segmentation that can reliably preoperatively differentiate between medulloblastoma and pilocytic astrocytoma, even in heterogeneous data.
Affiliation(s)
- Benedikt Wiestler
- Department of Neuroradiology, School of Medicine and Health, Technical University of Munich, 81675 Munich, Germany (M.G.)
- TranslaTUM, Center for Translational Cancer Research, Technical University of Munich, 81675 Munich, Germany
- Study Groups on CNS Tumors Within the Bavarian Cancer Research Center (BZKF)
- Brigitte Bison
- Study Groups on CNS Tumors Within the Bavarian Cancer Research Center (BZKF)
- KIONET, Kinderonkologisches Netzwerk Bayern
- Diagnostic and Interventional Neuroradiology, Faculty of Medicine, University Hospital Augsburg, 86156 Augsburg, Germany; (B.B.); (L.B.)
- Neuroradiological Reference Center for the Pediatric Brain Tumor (HIT) Studies of the German Society of Pediatric Oncology and Hematology, Faculty of Medicine, University Hospital Augsburg, 86156 Augsburg, Germany
- Lars Behrens
- Study Groups on CNS Tumors Within the Bavarian Cancer Research Center (BZKF)
- KIONET, Kinderonkologisches Netzwerk Bayern
- Diagnostic and Interventional Neuroradiology, Faculty of Medicine, University Hospital Augsburg, 86156 Augsburg, Germany; (B.B.); (L.B.)
- Neuroradiological Reference Center for the Pediatric Brain Tumor (HIT) Studies of the German Society of Pediatric Oncology and Hematology, Faculty of Medicine, University Hospital Augsburg, 86156 Augsburg, Germany
- Stefanie Tüchert
- Study Groups on CNS Tumors Within the Bavarian Cancer Research Center (BZKF)
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, 86156 Augsburg, Germany
- Marie Metz
- Department of Neuroradiology, School of Medicine and Health, Technical University of Munich, 81675 Munich, Germany (M.G.)
- Study Groups on CNS Tumors Within the Bavarian Cancer Research Center (BZKF)
- Michael Griessmair
- Department of Neuroradiology, School of Medicine and Health, Technical University of Munich, 81675 Munich, Germany (M.G.)
- Study Groups on CNS Tumors Within the Bavarian Cancer Research Center (BZKF)
- Marcus Jakob
- Study Groups on CNS Tumors Within the Bavarian Cancer Research Center (BZKF)
- KIONET, Kinderonkologisches Netzwerk Bayern
- Department of Pediatric Hematology, Oncology and Stem Cell Transplantation, University of Regensburg, 93053 Regensburg, Germany;
- Paul-Gerhardt Schlegel
- Study Groups on CNS Tumors Within the Bavarian Cancer Research Center (BZKF)
- KIONET, Kinderonkologisches Netzwerk Bayern
- Department of Pediatric Hematology, Oncology and Stem Cell Transplantation, University Children’s Hospital Würzburg, 97080 Würzburg, Germany;
- Vera Binder
- Study Groups on CNS Tumors Within the Bavarian Cancer Research Center (BZKF)
- KIONET, Kinderonkologisches Netzwerk Bayern
- Department of Pediatrics, Dr. Von Hauner Children’s Hospital, University Hospital, LMU Munich, 80539 Munich, Germany;
- Irene von Luettichau
- Study Groups on CNS Tumors Within the Bavarian Cancer Research Center (BZKF)
- KIONET, Kinderonkologisches Netzwerk Bayern
- Division of Pediatric Hematology and Oncology, Department of Pediatrics, Kinderklinik München Schwabing, Children’s Cancer Research Center, TUM School of Medicine and Health, Technical University of Munich, 80333 Munich, Germany;
- Markus Metzler
- Study Groups on CNS Tumors Within the Bavarian Cancer Research Center (BZKF)
- KIONET, Kinderonkologisches Netzwerk Bayern
- Pediatric Oncology and Hematology, Department of Pediatrics and Adolescent Medicine, University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany;
- Pascal Johann
- Study Groups on CNS Tumors Within the Bavarian Cancer Research Center (BZKF)
- KIONET, Kinderonkologisches Netzwerk Bayern
- Swabian Children’s Cancer Center, Pediatrics and Adolescent Medicine, University Hospital Augsburg, 86156 Augsburg, Germany; (P.J.); (M.F.)
- Peter Hau
- Study Groups on CNS Tumors Within the Bavarian Cancer Research Center (BZKF)
- Department of Neurology and Wilhelm Sander-NeuroOncology Unit, University Hospital Regensburg, 93053 Regensburg, Germany;
- Michael Frühwald
- Study Groups on CNS Tumors Within the Bavarian Cancer Research Center (BZKF)
- KIONET, Kinderonkologisches Netzwerk Bayern
- Swabian Children’s Cancer Center, Pediatrics and Adolescent Medicine, University Hospital Augsburg, 86156 Augsburg, Germany; (P.J.); (M.F.)
30
Svanera M, Savardi M, Signoroni A, Benini S, Muckli L. Fighting the scanner effect in brain MRI segmentation with a progressive level-of-detail network trained on multi-site data. Med Image Anal 2024; 93:103090. [PMID: 38241763] [DOI: 10.1016/j.media.2024.103090]
Abstract
Many clinical and research studies of the human brain require accurate structural MRI segmentation. While traditional atlas-based methods can be applied to volumes from any acquisition site, recent deep learning algorithms ensure high accuracy only when tested on data from the same sites exploited in training (i.e., internal data). Performance degradation experienced on external data (i.e., unseen volumes from unseen sites) is due to the inter-site variability in intensity distributions, and to unique artefacts caused by different MR scanner models and acquisition parameters. To mitigate this site-dependency, often referred to as the scanner effect, we propose LOD-Brain, a 3D convolutional neural network with progressive levels-of-detail (LOD), able to segment brain data from any site. Coarser network levels are responsible for learning a robust anatomical prior helpful in identifying brain structures and their locations, while finer levels refine the model to handle site-specific intensity distributions and anatomical variations. We ensure robustness across sites by training the model on an unprecedentedly rich dataset aggregating data from open repositories: almost 27,000 T1w volumes from around 160 acquisition sites, at 1.5 - 3T, from a population spanning from 8 to 90 years old. Extensive tests demonstrate that LOD-Brain produces state-of-the-art results, with no significant difference in performance between internal and external sites, and robust to challenging anatomical variations. Its portability paves the way for large-scale applications across different healthcare institutions, patient populations, and imaging technology manufacturers. Code, model, and demo are available on the project website.
Affiliation(s)
- Michele Svanera
- Center for Cognitive Neuroimaging at the School of Psychology & Neuroscience, University of Glasgow, UK.
- Mattia Savardi
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Italy
- Alberto Signoroni
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Italy
- Sergio Benini
- Department of Information Engineering, University of Brescia, Italy
- Lars Muckli
- Center for Cognitive Neuroimaging at the School of Psychology & Neuroscience, University of Glasgow, UK
31
Hoebel KV, Bridge CP, Kim A, Gerstner ER, Ly IK, Deng F, DeSalvo MN, Dietrich J, Huang R, Huang SY, Pomerantz SR, Vagvala S, Rosen BR, Kalpathy-Cramer J. Not without Context-A Multiple Methods Study on Evaluation and Correction of Automated Brain Tumor Segmentations by Experts. Acad Radiol 2024; 31:1572-1582. [PMID: 37951777] [DOI: 10.1016/j.acra.2023.10.019]
Abstract
RATIONALE AND OBJECTIVES: Brain tumor segmentations are integral to the clinical management of patients with glioblastoma, the deadliest primary brain tumor in adults. The manual delineation of tumors is time-consuming and highly provider-dependent. Both problems can be addressed by introducing automated, deep-learning-based segmentation tools. This study aimed to identify the criteria experts use to evaluate the quality of automatically generated segmentations and their thought processes as they correct them. MATERIALS AND METHODS: Multiple methods were used to develop a detailed understanding of the complex factors that shape experts' perception of segmentation quality and their thought processes in correcting proposed segmentations. Data from a questionnaire and semistructured interviews with neuro-oncologists and neuroradiologists were collected between August and December 2021 and analyzed using a combined deductive and inductive approach. RESULTS: Brain tumors are highly complex and ambiguous segmentation targets. Physicians therefore rely heavily on patient-related and clinical context when evaluating the quality of, and the need to correct, a brain tumor segmentation. Most importantly, the intended clinical application determines the segmentation quality criteria and editing decisions. Physicians' personal beliefs about the capabilities of AI algorithms, and their preferences about whether questionable areas should be included, are additional criteria influencing the perception of segmentation quality and the appearance of an edited segmentation. CONCLUSION: Our findings on experts' perceptions of segmentation quality will allow the design of improved frameworks for expert-centered evaluation of brain tumor segmentation models. In particular, the knowledge presented here can inspire the development of brain tumor-specific metrics for segmentation model training and evaluation.
Affiliation(s)
- Katharina V Hoebel
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts; Harvard-MIT Program in Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Christopher P Bridge
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts; Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- Albert Kim
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts; Stephen E. and Catherine Pappas Center for Neuro-Oncology, Massachusetts General Hospital, Boston, Massachusetts
- Elizabeth R Gerstner
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts; Stephen E. and Catherine Pappas Center for Neuro-Oncology, Massachusetts General Hospital, Boston, Massachusetts
- Ina K Ly
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts; Stephen E. and Catherine Pappas Center for Neuro-Oncology, Massachusetts General Hospital, Boston, Massachusetts
- Francis Deng
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Matthew N DeSalvo
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Jorg Dietrich
- Department of Neurology, Division of Neuro-Oncology, Massachusetts General Hospital Cancer Center, Harvard Medical School, Boston, Massachusetts
- Raymond Huang
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Susie Y Huang
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts
- Stuart R Pomerantz
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- Saivenkat Vagvala
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Bruce R Rosen
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts
- Jayashree Kalpathy-Cramer
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts; Department of Ophthalmology, University of Colorado Anschutz Medical Campus, 1675 Aurora Court, Mail Stop F731, Aurora, CO.
32
Khalighi S, Reddy K, Midya A, Pandav KB, Madabhushi A, Abedalthagafi M. Artificial intelligence in neuro-oncology: advances and challenges in brain tumor diagnosis, prognosis, and precision treatment. NPJ Precis Oncol 2024; 8:80. [PMID: 38553633] [PMCID: PMC10980741] [DOI: 10.1038/s41698-024-00575-0]
Abstract
This review delves into the most recent advancements in applying artificial intelligence (AI) within neuro-oncology, specifically emphasizing work on gliomas, a class of brain tumors that represent a significant global health issue. AI has brought transformative innovations to brain tumor management, utilizing imaging, histopathological, and genomic tools for efficient detection, categorization, outcome prediction, and treatment planning. Across all facets of malignant brain tumor management (diagnosis, prognosis, and therapy), AI models outperform human evaluations in terms of accuracy and specificity. Their ability to discern molecular aspects from imaging may reduce reliance on invasive diagnostics and may accelerate the time to molecular diagnoses. The review covers AI techniques, from classical machine learning to deep learning, highlighting current applications and challenges. Promising directions for future research include multimodal data integration, generative AI, large medical language models, precise tumor delineation and characterization, and addressing racial and gender disparities. Adaptive personalized treatment strategies are also emphasized for optimizing clinical outcomes. Ethical, legal, and social implications are discussed, advocating for transparency and fairness in AI integration for neuro-oncology and providing a holistic understanding of its transformative impact on patient care.
Affiliation(s)
- Sirvan Khalighi
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Kartik Reddy
- Department of Radiology, Emory University, Atlanta, GA, USA
- Abhishek Midya
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Krunal Balvantbhai Pandav
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Anant Madabhushi
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA.
- Atlanta Veterans Administration Medical Center, Atlanta, GA, USA.
- Malak Abedalthagafi
- Department of Pathology and Laboratory Medicine, Emory University, Atlanta, GA, USA.
- The Cell and Molecular Biology Program, Winship Cancer Institute, Atlanta, GA, USA.
33
Ansari G, Mirza-Aghazadeh-Attari M, Mosier KM, Fakhry C, Yousem DM. Radiomics Features in Predicting Human Papillomavirus Status in Oropharyngeal Squamous Cell Carcinoma: A Systematic Review, Quality Appraisal, and Meta-Analysis. Diagnostics (Basel) 2024; 14:737. [PMID: 38611650] [PMCID: PMC11011663] [DOI: 10.3390/diagnostics14070737]
Abstract
We sought to determine the diagnostic accuracy of radiomics features in predicting HPV status in oropharyngeal squamous cell carcinoma (OPSCC) compared to routine paraclinical measures used in clinical practice. Twenty-six articles were included in the systematic review, and thirteen were used for the meta-analysis. The overall sensitivity of the included studies was 0.78, the overall specificity was 0.76, and the overall area under the ROC curve was 0.84. The diagnostic odds ratio (DOR) was 12 (8, 17). Subgroup analysis showed no significant difference between radiomics features extracted from CT or MR images. Overall, the studies were of low quality with regard to the radiomics quality score, although most had a low risk of bias based on the QUADAS-2 tool. Radiomics features showed good overall sensitivity and specificity in determining HPV status in OPSCC, though the low quality of the included studies poses problems for generalizability.
Affiliation(s)
- Golnoosh Ansari
- Department of Radiology, Northwestern Hospital, Northwestern School of Medicine, Chicago, IL 60611, USA;
- Mohammad Mirza-Aghazadeh-Attari
- Division of Interventional Radiology, Department of Radiology and Radiological Sciences, Johns Hopkins School of Medicine, Baltimore, MD 21205, USA
- Kristine M. Mosier
- Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN 46202, USA;
- Carole Fakhry
- Department of Otolaryngology, Johns Hopkins School of Medicine, Baltimore, MD 21205, USA;
- David M. Yousem
- Division of Neuroradiology, Department of Radiology and Radiological Sciences, Johns Hopkins School of Medicine, Baltimore, MD 21205, USA;
34
Fatania K, Frood R, Mistry H, Short SC, O’Connor J, Scarsbrook AF, Currie S. Tumour Size and Overall Survival in a Cohort of Patients with Unifocal Glioblastoma: A Uni- and Multivariable Prognostic Modelling and Resampling Study. Cancers (Basel) 2024; 16:1301. [PMID: 38610979] [PMCID: PMC11011077] [DOI: 10.3390/cancers16071301]
Abstract
Published models inconsistently associate glioblastoma (GBM) size with overall survival (OS). This study aimed to investigate the prognostic effect of tumour size in a large cohort of patients diagnosed with GBM and to interrogate how sample size and non-linear transformations may impact the likelihood of finding a prognostic effect. In total, 279 patients from a retrospective cohort, diagnosed with an IDH-wildtype unifocal WHO grade 4 GBM between 2014 and 2020, were included. The uni-/multivariable association of core volume (CV), whole volume (WV), and diameter with OS was assessed with (1) Cox proportional hazard models, with and without log transformation, and (2) resampling with 1,000,000 repetitions and varying sample size to identify the percentage of models which showed a significant effect of tumour size. Models adjusted for operation type and a diameter model adjusted for all clinical variables remained significant (p = 0.03). Multivariable resampling increased the significant effects (p < 0.05) of all size variables as sample size increased. Log transformation also had a large effect on the chances of a prognostic effect of WV. For models adjusted for operation type, 19.5% of WV models were significant vs. 26.3% for log-WV at n = 50, and 69.9% vs. 89.9% at n = 279. In this large, well-curated cohort, multivariable modelling and resampling suggest tumour volume is prognostic at larger sample sizes and, for WV, with log transformation.
Affiliation(s)
- Kavi Fatania: Department of Radiology, Leeds Teaching Hospitals NHS Trust, Leeds General Infirmary, Leeds LS1 3EX, UK; Leeds Institute of Medical Research, University of Leeds, Leeds LS2 9TJ, UK
- Russell Frood: Department of Radiology, Leeds Teaching Hospitals NHS Trust, Leeds General Infirmary, Leeds LS1 3EX, UK; Leeds Institute of Medical Research, University of Leeds, Leeds LS2 9TJ, UK
- Hitesh Mistry: Division of Cancer Sciences, The University of Manchester, Manchester M13 9PL, UK
- Susan C. Short: Leeds Institute of Medical Research, University of Leeds, Leeds LS2 9TJ, UK; Department of Oncology, Leeds Teaching Hospitals NHS Trust, St James’s University Hospital, Leeds LS9 7TF, UK
- James O’Connor: Division of Cancer Sciences, The University of Manchester, Manchester M13 9PL, UK; Department of Radiology, The Christie Hospital, Manchester M20 4BX, UK; Division of Radiotherapy and Imaging, Institute of Cancer Research, London SM2 5NG, UK
- Andrew F. Scarsbrook: Department of Radiology, Leeds Teaching Hospitals NHS Trust, Leeds General Infirmary, Leeds LS1 3EX, UK; Leeds Institute of Medical Research, University of Leeds, Leeds LS2 9TJ, UK
- Stuart Currie: Department of Radiology, Leeds Teaching Hospitals NHS Trust, Leeds General Infirmary, Leeds LS1 3EX, UK; Leeds Institute of Medical Research, University of Leeds, Leeds LS2 9TJ, UK

35
Price G, Peek N, Eleftheriou I, Spencer K, Paley L, Hogenboom J, van Soest J, Dekker A, van Herk M, Faivre-Finn C. An Overview of Real-World Data Infrastructure for Cancer Research. Clin Oncol (R Coll Radiol) 2024:S0936-6555(24)00108-0. [PMID: 38631976 DOI: 10.1016/j.clon.2024.03.011] [Received: 11/03/2023] [Revised: 02/27/2024] [Accepted: 03/13/2024] [Indexed: 04/19/2024]
Abstract
AIMS There is increasing interest in the opportunities offered by Real World Data (RWD) to provide evidence where clinical trial data does not exist, but access to appropriate data sources is frequently cited as a barrier to RWD research. This paper discusses current RWD resources and how they can be accessed for cancer research. MATERIALS AND METHODS There has been significant progress on facilitating RWD access in the last few years across a range of scales, from local hospital research databases, through regional care records and national repositories, to the impact of federated learning approaches on internationally collaborative studies. We use a series of case studies, principally from the UK, to illustrate how RWD can be accessed for research and healthcare improvement at each of these scales. RESULTS For each example we discuss infrastructure and governance requirements with the aim of encouraging further work in this space that will help to fill evidence gaps in oncology. CONCLUSION There are challenges, but real-world data research across a range of scales is already a reality. Taking advantage of the current generation of data sources requires researchers to carefully define their research question and the scale at which it would be best addressed.
Affiliation(s)
- G Price: Division of Cancer Sciences, University of Manchester, Manchester, UK; The Christie NHS Foundation Trust, Manchester, UK
- N Peek: Division of Informatics, Imaging and Data Sciences, University of Manchester, Manchester, UK; The Healthcare Improvement Studies Institute (THIS Institute), University of Cambridge, Cambridge, UK
- I Eleftheriou: Division of Informatics, Imaging and Data Sciences, University of Manchester, Manchester, UK
- K Spencer: Leeds Institute of Health Sciences, University of Leeds, Leeds, UK; Leeds Teaching Hospitals NHS Trust, Leeds, UK; National Disease Registration Service, NHS England, UK
- L Paley: National Disease Registration Service, NHS England, UK
- J Hogenboom: Department of Radiation Oncology (Maastro), GROW-School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- J van Soest: Department of Radiation Oncology (Maastro), GROW-School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands; Brightlands Institute for Smart Society (BISS), Faculty of Science and Engineering, Maastricht University, Maastricht, The Netherlands
- A Dekker: Department of Radiation Oncology (Maastro), GROW-School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- M van Herk: Division of Cancer Sciences, University of Manchester, Manchester, UK; The Christie NHS Foundation Trust, Manchester, UK
- C Faivre-Finn: Division of Cancer Sciences, University of Manchester, Manchester, UK; The Christie NHS Foundation Trust, Manchester, UK

36
Danek BP, Makarious MB, Dadu A, Vitale D, Lee PS, Singleton AB, Nalls MA, Sun J, Faghri F. Federated learning for multi-omics: A performance evaluation in Parkinson's disease. Patterns (N Y) 2024; 5:100945. [PMID: 38487808 PMCID: PMC10935499 DOI: 10.1016/j.patter.2024.100945] [Received: 10/09/2023] [Revised: 01/29/2024] [Accepted: 02/02/2024] [Indexed: 03/17/2024]
Abstract
While machine learning (ML) research has recently grown in popularity, its application in the omics domain is constrained by access to the sufficiently large, high-quality datasets needed to train ML models. Federated learning (FL) represents an opportunity to enable collaborative curation of such datasets among participating institutions. We compare the simulated performance of several models trained using FL against classically trained ML models on the task of multi-omics Parkinson's disease prediction. We find that FL model performance tracks that of centrally trained ML models: the most performant FL model achieves an AUC-PR of 0.876 ± 0.009, which is 0.014 ± 0.003 less than its centrally trained variant. We also determine that the dispersion of samples within a federation plays a meaningful role in model performance. Our study implements several open-source FL frameworks and aims to highlight some of the challenges and opportunities of applying these collaborative methods in multi-omics studies.
Affiliation(s)
- Benjamin P. Danek: Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA; Center for Alzheimer’s and Related Dementias (CARD), National Institute on Aging and National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, USA; DataTecnica, Washington, DC 20037, USA
- Mary B. Makarious: Laboratory of Neurogenetics, National Institute on Aging, National Institutes of Health, Bethesda, MD 20892, USA; Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, London, UK; UCL Movement Disorders Centre, University College London, London, UK
- Anant Dadu: Center for Alzheimer’s and Related Dementias (CARD), National Institute on Aging and National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, USA; DataTecnica, Washington, DC 20037, USA
- Dan Vitale: Center for Alzheimer’s and Related Dementias (CARD), National Institute on Aging and National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, USA; DataTecnica, Washington, DC 20037, USA
- Paul Suhwan Lee: Center for Alzheimer’s and Related Dementias (CARD), National Institute on Aging and National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, USA
- Andrew B. Singleton: Center for Alzheimer’s and Related Dementias (CARD), National Institute on Aging and National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, USA; Laboratory of Neurogenetics, National Institute on Aging, National Institutes of Health, Bethesda, MD 20892, USA
- Mike A. Nalls: Center for Alzheimer’s and Related Dementias (CARD), National Institute on Aging and National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, USA; DataTecnica, Washington, DC 20037, USA; Laboratory of Neurogenetics, National Institute on Aging, National Institutes of Health, Bethesda, MD 20892, USA
- Jimeng Sun: Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA; Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA
- Faraz Faghri: Center for Alzheimer’s and Related Dementias (CARD), National Institute on Aging and National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, USA; DataTecnica, Washington, DC 20037, USA; Laboratory of Neurogenetics, National Institute on Aging, National Institutes of Health, Bethesda, MD 20892, USA

37
Hamghalam M, Simpson AL. Medical image synthesis via conditional GANs: Application to segmenting brain tumours. Comput Biol Med 2024; 170:107982. [PMID: 38266466 DOI: 10.1016/j.compbiomed.2024.107982] [Received: 06/03/2022] [Revised: 12/30/2023] [Accepted: 01/13/2024] [Indexed: 01/26/2024]
Abstract
Accurate brain tumour segmentation is critical for tasks such as surgical planning, diagnosis, and analysis, with magnetic resonance imaging (MRI) being the preferred modality due to its excellent visualisation of brain tissues. However, the wide intensity range of voxel values in MR scans often results in significant overlap between the density distributions of different tumour tissues, leading to reduced contrast and segmentation accuracy. This paper introduces a novel framework based on conditional generative adversarial networks (cGANs) aimed at enhancing the contrast of tumour subregions for both voxel-wise and region-wise segmentation approaches. We present two models: Enhancement and Segmentation GAN (ESGAN), which combines classifier loss with adversarial loss to predict central labels of input patches, and Enhancement GAN (EnhGAN), which generates high-contrast synthetic images with reduced inter-class overlap. These synthetic images are then fused with corresponding modalities to emphasise meaningful tissues while suppressing weaker ones. We also introduce a novel generator that adaptively calibrates voxel values within input patches, leveraging fully convolutional networks. Both models employ a multi-scale Markovian network as a GAN discriminator to capture local patch statistics and estimate the distribution of MR images in complex contexts. Experimental results on publicly available MR brain tumour datasets demonstrate the competitive accuracy of our models compared to current brain tumour segmentation techniques.
Affiliation(s)
- Mohammad Hamghalam: School of Computing, Queen's University, Kingston, ON, Canada; Department of Electrical Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
- Amber L Simpson: School of Computing, Queen's University, Kingston, ON, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada

38
Al-Sahab B, Leviton A, Loddenkemper T, Paneth N, Zhang B. Biases in Electronic Health Records Data for Generating Real-World Evidence: An Overview. J Healthc Inform Res 2024; 8:121-139. [PMID: 38273982 PMCID: PMC10805748 DOI: 10.1007/s41666-023-00153-2] [Received: 06/30/2023] [Revised: 09/05/2023] [Accepted: 11/07/2023] [Indexed: 01/27/2024]
Abstract
Electronic Health Records (EHRs) are increasingly being perceived as a unique source of data for clinical research, as they provide unprecedentedly large volumes of real-time data from real-world settings. In this review of the secondary uses of EHRs, we identify the anticipated breadth of opportunities, pointing out the data deficiencies and potential biases that are likely to limit the search for true causal relationships. This paper provides a comprehensive overview of the types of biases that arise along the pathways that generate real-world evidence and the sources of these biases. We distinguish between two levels in the production of EHR data where biases are likely to arise: (i) the healthcare system level, where the principal sources of bias reside in access to, and provision of, medical care, and in the acquisition and documentation of medical and administrative data; and (ii) the research level, where biases arise from the processes of extracting, analyzing, and interpreting these data. Due to the plethora of biases, mainly in the form of selection and information bias, we conclude by advising extreme caution in making causal inferences based on secondary uses of EHRs.
Affiliation(s)
- Ban Al-Sahab: Department of Family Medicine, College of Human Medicine, Michigan State University, B100 Clinical Center, 788 Service Road, East Lansing, MI, USA
- Alan Leviton: Department of Neurology, Harvard Medical School, Boston, MA, USA; Department of Neurology, Boston Children’s Hospital, Boston, MA, USA
- Tobias Loddenkemper: Department of Neurology, Harvard Medical School, Boston, MA, USA; Department of Neurology, Boston Children’s Hospital, Boston, MA, USA
- Nigel Paneth: Department of Epidemiology and Biostatistics, College of Human Medicine, Michigan State University, East Lansing, MI, USA; Department of Pediatrics and Human Development, College of Human Medicine, Michigan State University, East Lansing, MI, USA
- Bo Zhang: Department of Neurology, Boston Children’s Hospital, Boston, MA, USA; Biostatistics and Research Design, Institutional Centers of Clinical and Translational Research, Boston Children’s Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA

39
Danek B, Makarious MB, Dadu A, Vitale D, Lee PS, Nalls MA, Sun J, Faghri F. Federated Learning for multi-omics: a performance evaluation in Parkinson's disease. bioRxiv [Preprint] 2024:2023.10.04.560604. [PMID: 37986893 PMCID: PMC10659429 DOI: 10.1101/2023.10.04.560604] [Indexed: 11/22/2023]
Abstract
While machine learning (ML) research has recently grown in popularity, its application in the omics domain is constrained by access to the sufficiently large, high-quality datasets needed to train ML models. Federated learning (FL) represents an opportunity to enable collaborative curation of such datasets among participating institutions. We compare the simulated performance of several models trained using FL against classically trained ML models on the task of multi-omics Parkinson's disease prediction. We find that FL model performance tracks that of centrally trained ML models: the most performant FL model achieves an AUC-PR of 0.876 ± 0.009, which is 0.014 ± 0.003 less than its centrally trained variant. We also determine that the dispersion of samples within a federation plays a meaningful role in model performance. Our study implements several open-source FL frameworks and aims to highlight some of the challenges and opportunities of applying these collaborative methods in multi-omics studies.
Affiliation(s)
- Benjamin Danek: Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA; Center for Alzheimer's and Related Dementias (CARD), National Institute on Aging and National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, USA; DataTecnica, Washington, DC 20037, USA
- Mary B. Makarious: Laboratory of Neurogenetics, National Institute on Aging, National Institutes of Health, Bethesda, MD 20892, USA; Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, London, UK; UCL Movement Disorders Centre, University College London, London, UK
- Anant Dadu: Center for Alzheimer's and Related Dementias (CARD), National Institute on Aging and National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, USA; DataTecnica, Washington, DC 20037, USA
- Dan Vitale: Center for Alzheimer's and Related Dementias (CARD), National Institute on Aging and National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, USA; DataTecnica, Washington, DC 20037, USA
- Paul Suhwan Lee: Center for Alzheimer's and Related Dementias (CARD), National Institute on Aging and National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, USA
- Mike A Nalls: Center for Alzheimer's and Related Dementias (CARD), National Institute on Aging and National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, USA; DataTecnica, Washington, DC 20037, USA; Laboratory of Neurogenetics, National Institute on Aging, National Institutes of Health, Bethesda, MD 20892, USA
- Jimeng Sun: Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA; Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA
- Faraz Faghri: Center for Alzheimer's and Related Dementias (CARD), National Institute on Aging and National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, USA; DataTecnica, Washington, DC 20037, USA; Laboratory of Neurogenetics, National Institute on Aging, National Institutes of Health, Bethesda, MD 20892, USA

40
Soltan AAS, Thakur A, Yang J, Chauhan A, D'Cruz LG, Dickson P, Soltan MA, Thickett DR, Eyre DW, Zhu T, Clifton DA. A scalable federated learning solution for secondary care using low-cost microcomputing: privacy-preserving development and evaluation of a COVID-19 screening test in UK hospitals. Lancet Digit Health 2024; 6:e93-e104. [PMID: 38278619 DOI: 10.1016/s2589-7500(23)00226-1] [Received: 01/04/2023] [Revised: 10/17/2023] [Accepted: 10/30/2023] [Indexed: 01/28/2024]
Abstract
BACKGROUND Multicentre training could reduce biases in medical artificial intelligence (AI); however, ethical, legal, and technical considerations can constrain the ability of hospitals to share data. Federated learning enables institutions to participate in algorithm development while retaining custody of their data but uptake in hospitals has been limited, possibly as deployment requires specialist software and technical expertise at each site. We previously developed an artificial intelligence-driven screening test for COVID-19 in emergency departments, known as CURIAL-Lab, which uses vital signs and blood tests that are routinely available within 1 h of a patient's arrival. Here we aimed to federate our COVID-19 screening test by developing an easy-to-use embedded system-which we introduce as full-stack federated learning-to train and evaluate machine learning models across four UK hospital groups without centralising patient data. METHODS We supplied a Raspberry Pi 4 Model B preloaded with our federated learning software pipeline to four National Health Service (NHS) hospital groups in the UK: Oxford University Hospitals NHS Foundation Trust (OUH; through the locally linked research University, University of Oxford), University Hospitals Birmingham NHS Foundation Trust (UHB), Bedfordshire Hospitals NHS Foundation Trust (BH), and Portsmouth Hospitals University NHS Trust (PUH). OUH, PUH, and UHB participated in federated training, training a deep neural network and logistic regressor over 150 rounds to form and calibrate a global model to predict COVID-19 status, using clinical data from patients admitted before the pandemic (COVID-19-negative) and testing positive for COVID-19 during the first wave of the pandemic. We conducted a federated evaluation of the global model for admissions during the second wave of the pandemic at OUH, PUH, and externally at BH. For OUH and PUH, we additionally performed local fine-tuning of the global model using the sites' individual training data, forming a site-tuned model, and evaluated the resultant model for admissions during the second wave of the pandemic. This study included data collected between Dec 1, 2018, and March 1, 2021; the exact date ranges used varied by site. The primary outcome was overall model performance, measured as the area under the receiver operating characteristic curve (AUROC). Removable micro secure digital (microSD) storage was destroyed on study completion. FINDINGS Clinical data from 130 941 patients (1772 COVID-19-positive), routinely collected across three hospital groups (OUH, PUH, and UHB), were included in federated training. The evaluation step included data from 32 986 patients (3549 COVID-19-positive) attending OUH, PUH, or BH during the second wave of the pandemic. Federated training of a global deep neural network classifier improved upon performance of models trained locally in terms of AUROC by a mean of 27·6% (SD 2·2): AUROC increased from 0·574 (95% CI 0·560-0·589) at OUH and 0·622 (0·608-0·637) at PUH using the locally trained models to 0·872 (0·862-0·882) at OUH and 0·876 (0·865-0·886) at PUH using the federated global model. Performance improvement was smaller for a logistic regression model, with a mean increase in AUROC of 13·9% (0·5%). During federated external evaluation at BH, AUROC for the global deep neural network model was 0·917 (0·893-0·942), with 89·7% sensitivity (83·6-93·6) and 76·6% specificity (73·9-79·1). Site-specific tuning of the global model did not significantly improve performance (change in AUROC <0·01). INTERPRETATION We developed an embedded system for federated learning, using microcomputing to optimise for ease of deployment. We deployed full-stack federated learning across four UK hospital groups to develop a COVID-19 screening test without centralising patient data. Federation improved model performance, and the resultant global models were generalisable. Full-stack federated learning could enable hospitals to contribute to AI development at low cost and without specialist technical expertise at each site. FUNDING The Wellcome Trust, University of Oxford Medical and Life Sciences Translational Fund.
Affiliation(s)
- Andrew A S Soltan: Oxford University Hospitals NHS Foundation Trust, Oxford, UK; Department of Oncology, University of Oxford, Oxford, UK; Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK; Big Data Institute, Nuffield Department of Population Health, University of Oxford, Oxford, UK; Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Anshul Thakur: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Jenny Yang: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Anoop Chauhan: Portsmouth Hospitals University NHS Trust, Portsmouth, UK
- Leon G D'Cruz: Portsmouth Hospitals University NHS Trust, Portsmouth, UK
- Marina A Soltan: The Queen Elizabeth Hospital, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK
- David R Thickett: The Queen Elizabeth Hospital, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK
- David W Eyre: Oxford University Hospitals NHS Foundation Trust, Oxford, UK; Big Data Institute, Nuffield Department of Population Health, University of Oxford, Oxford, UK; NIHR Health Protection Research Unit in Healthcare Associated Infections and Antimicrobial Resistance, University of Oxford and Public Health England, Oxford, UK; NIHR Oxford Biomedical Research Centre, Oxford, UK
- Tingting Zhu: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- David A Clifton: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK; NIHR Oxford Biomedical Research Centre, Oxford, UK; Oxford-Suzhou Centre for Advanced Research, Suzhou, China

41
Kawahara D, Murakami Y, Awane S, Emoto Y, Iwashita K, Kubota H, Sasaki R, Nagata Y. Radiomics and dosiomics for predicting complete response to definitive chemoradiotherapy patients with oesophageal squamous cell cancer using the hybrid institution model. Eur Radiol 2024; 34:1200-1209. [PMID: 37589902 DOI: 10.1007/s00330-023-10020-8] [Received: 03/03/2023] [Revised: 05/08/2023] [Accepted: 06/12/2023] [Indexed: 08/18/2023]
Abstract
OBJECTIVES To develop a multi-institutional prediction model to estimate the local response of oesophageal squamous cell carcinoma (ESCC) treated with definitive radiotherapy based on radiomics and dosiomics features. METHODS The local responses were categorised into two groups (incomplete and complete). An external validation model and a hybrid model, in which patients from the two institutions were mixed randomly, were proposed. ESCC patients at stages I-IV who underwent chemoradiotherapy from 2012 to 2017 and had a follow-up duration of more than 5 years were included. Patients who received palliative or preoperative radiotherapy or had no FDG PET images were excluded. The segmentations included the GTV, CTV, and PTV used in treatment planning. In addition, shrinkage, expansion, and shell regions were created. Radiomic and dosiomic features were extracted from CT images, FDG PET images, and dose distributions. Machine learning-based prediction models were developed using decision tree, support vector machine, k-nearest neighbour (kNN), and neural network (NN) classifiers. RESULTS A total of 116 and 26 patients were enrolled at Centre 1 and Centre 2, respectively. The external validation model achieved its highest accuracy with the NN classifiers: 65.4% for CT-based radiomics, 77.9% for PET-based radiomics, and 72.1% for dosiomics. The hybrid model achieved a highest accuracy of 84.4% for CT-based radiomics with the kNN classifier, and 86.0% for PET-based radiomics and 79.0% for dosiomics with the NN classifiers. CONCLUSION The proposed hybrid model exhibited promising predictive performance for the local response to definitive radiotherapy in ESCC patients. CLINICAL RELEVANCE STATEMENT Predicting complete response in oesophageal cancer patients may contribute to improving overall survival. The hybrid model has the potential to improve prediction performance over the conventionally proposed external validation model.
KEY POINTS • Radiomics and dosiomics were used to predict response in patients with oesophageal cancer receiving definitive radiotherapy. • The hybrid model with a neural network classifier of PET-based radiomics improved prediction accuracy by 8.1%. • The hybrid model has the potential to improve prediction performance.
Affiliation(s)
- Daisuke Kawahara: Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
- Yuji Murakami: Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
- Shota Awane: School of Medicine, Hiroshima University, Hiroshima, 734-8551, Japan
- Yuki Emoto: Department of Radiation Oncology, Hyogo Cancer Center, 70, Kitaoji-Cho 13, Akashi-Shi, Hyogo, Japan
- Kazuma Iwashita: Division of Radiation Oncology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuouku, Kobe, Hyogo, 650-0017, Japan
- Hikaru Kubota: Division of Radiation Oncology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuouku, Kobe, Hyogo, 650-0017, Japan
- Ryohei Sasaki: Division of Radiation Oncology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuouku, Kobe, Hyogo, 650-0017, Japan
- Yasushi Nagata: Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan; Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, 732-0057, Japan

42
Pitarch C, Ungan G, Julià-Sapé M, Vellido A. Advances in the Use of Deep Learning for the Analysis of Magnetic Resonance Image in Neuro-Oncology. Cancers (Basel) 2024; 16:300. [PMID: 38254790 PMCID: PMC10814384 DOI: 10.3390/cancers16020300] [Received: 11/09/2023] [Revised: 12/28/2023] [Accepted: 01/08/2024] [Indexed: 01/24/2024] Open
Abstract
Machine Learning is entering a phase of maturity, but its medical applications still lag behind in terms of practical use. The field of oncological radiology (and neuro-oncology in particular) is at the forefront of these developments, now boosted by the success of Deep-Learning methods for the analysis of medical images. This paper reviews in detail some of the most recent advances in the use of Deep Learning in this field, from the broader topic of the development of Machine-Learning-based analytical pipelines to specific instantiations of the use of Deep Learning in neuro-oncology; the latter including its use in the groundbreaking field of ultra-low field magnetic resonance imaging.
Affiliation(s)
- Carla Pitarch: Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain; Eurecat, Digital Health Unit, Technology Centre of Catalonia, 08005 Barcelona, Spain
- Gulnur Ungan: Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Margarida Julià-Sapé: Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Alfredo Vellido: Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain; Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain

43
Choi G, Cha WC, Lee SU, Shin SY. Survey of Medical Applications of Federated Learning. Healthc Inform Res 2024; 30:3-15. [PMID: 38359845 PMCID: PMC10879826 DOI: 10.4258/hir.2024.30.1.3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2023] [Revised: 01/23/2024] [Accepted: 01/24/2024] [Indexed: 02/17/2024] Open
Abstract
OBJECTIVES Medical artificial intelligence (AI) has recently attracted considerable attention. However, training medical AI models is challenging due to privacy-protection regulations. Among the proposed solutions, federated learning (FL) stands out. FL involves transmitting only model parameters without sharing the original data, making it particularly suitable for the medical field, where data privacy is paramount. This study reviews the application of FL in the medical domain. METHODS We conducted a literature search using the keywords "federated learning" in combination with "medical," "healthcare," or "clinical" on Google Scholar and PubMed. After reviewing titles and abstracts, 58 papers were selected for analysis. These FL studies were categorized based on the types of data used, the target disease, the use of open datasets, the local model of FL, and the neural network model. We also examined issues related to heterogeneity and security. RESULTS In the investigated FL studies, the most commonly used data type was image data, and the most studied target diseases were cancer and COVID-19. The majority of studies utilized open datasets. Furthermore, 72% of the FL articles addressed heterogeneity issues, while 50% discussed security concerns. CONCLUSIONS FL in the medical domain appears to be in its early stages, with most research using open data and focusing on specific data types and diseases for performance verification purposes. Nonetheless, medical FL research is anticipated to be increasingly applied and to become a vital component of multi-institutional research.
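The parameter-only exchange this survey describes can be made concrete with a minimal FedAvg-style sketch. Everything below (the data, the logistic-regression model, and the hyperparameters) is invented for illustration and is not taken from any of the surveyed studies:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training (logistic regression, full-batch
    gradient descent). Only the updated weights leave the client; the
    private data X, y never do."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)   # cross-entropy gradient step
    return w

def fedavg(global_w, client_data, rounds=10):
    """Server loop: broadcast the global weights, collect each client's
    update, and average the updates weighted by local sample count."""
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in client_data]
        sizes = [len(y) for _, y in client_data]
        global_w = np.average(updates, axis=0, weights=sizes)
    return global_w

# Two hypothetical hospitals holding private, differently shifted data.
rng = np.random.default_rng(0)
clients = []
for shift in (0.0, 1.0):
    X = rng.normal(shift, 1.0, size=(200, 3))
    y = (X[:, 0] + X[:, 1] > shift).astype(float)
    clients.append((X, y))

w = fedavg(np.zeros(3), clients)
```

Each round, only the small weight vector crosses the network, which is what makes the approach attractive where raw patient records cannot be shared.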
Affiliation(s)
- Geunho Choi: Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Korea
- Won Chul Cha: Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Korea; Department of Emergency Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Se Uk Lee: Department of Emergency Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Soo-Yong Shin: Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Korea
44
|
Schonfeld E, Mordekai N, Berg A, Johnstone T, Shah A, Shah V, Haider G, Marianayagam NJ, Veeravagu A. Machine Learning in Neurosurgery: Toward Complex Inputs, Actionable Predictions, and Generalizable Translations. Cureus 2024; 16:e51963. [PMID: 38333513 PMCID: PMC10851045 DOI: 10.7759/cureus.51963] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2023] [Accepted: 01/08/2024] [Indexed: 02/10/2024] Open
Abstract
Machine learning can predict neurosurgical diagnosis and outcomes, power imaging analysis, and perform robotic navigation and tumor labeling. State-of-the-art models can reconstruct and generate images, predict surgical events from video, and assist in intraoperative decision-making. In this review, we will detail the neurosurgical applications of machine learning, ranging from simple to advanced models, and their potential to transform patient care. As machine learning techniques, outputs, and methods become increasingly complex, their performance is often more impactful yet increasingly difficult to evaluate. We aim to introduce these advancements to the neurosurgical audience while suggesting major potential roadblocks to their safe and effective translation. Unlike the previous generation of machine learning in neurosurgery, the safe translation of recent advancements will be contingent on neurosurgeons' involvement in model development and validation.
Affiliation(s)
- Ethan Schonfeld: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Alex Berg: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Thomas Johnstone: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Aaryan Shah: School of Humanities and Sciences, Stanford University, Stanford, USA
- Vaibhavi Shah: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Ghani Haider: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Anand Veeravagu: Neurosurgery, Stanford University School of Medicine, Stanford, USA
45
|
Wilson JR, Prevedello LM, Witiw CD, Flanders AE, Colak E. Data Liberation and Crowdsourcing in Medical Research: The Intersection of Collective and Artificial Intelligence. Radiol Artif Intell 2024; 6:e230006. [PMID: 38231037 PMCID: PMC10831522 DOI: 10.1148/ryai.230006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2023] [Revised: 11/08/2023] [Accepted: 11/20/2023] [Indexed: 01/18/2024]
Abstract
In spite of an exponential increase in the volume of medical data produced globally, many of these data are inaccessible to those who might best use them to develop improved health care solutions through the application of advanced analytics such as artificial intelligence. Data liberation and crowdsourcing represent two distinct but interrelated approaches to bridging existing data silos and accelerating the pace of innovation internationally. In this article, we examine these concepts in the context of medical artificial intelligence research, summarizing their potential benefits, identifying potential pitfalls, and ultimately making a case for their expanded use going forward. A practical example of a crowdsourced competition using an international medical imaging dataset is provided. Keywords: Artificial Intelligence, Data Liberation, Crowdsourcing © RSNA, 2023.
Affiliation(s)
- Jefferson R. Wilson, Luciano M. Prevedello, Christopher D. Witiw, Adam E. Flanders, Errol Colak
- From the Division of Neurosurgery (J.R.W., C.D.W.) and Department of Medical Imaging (E.C.), St Michael’s Hospital, 30 Bond St, Toronto, ON, Canada M5B 1W8; Department of Surgery (J.R.W., C.D.W.) and Department of Medical Imaging (E.C.), University of Toronto, Toronto, Canada (J.R.W., C.D.W.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
46
|
Huang B, Hu S, Liu Z, Lin CL, Su J, Zhao C, Wang L, Wang W. Challenges and prospects of visual contactless physiological monitoring in clinical study. NPJ Digit Med 2023; 6:231. [PMID: 38097771 PMCID: PMC10721846 DOI: 10.1038/s41746-023-00973-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2023] [Accepted: 11/21/2023] [Indexed: 12/17/2023] Open
Abstract
The monitoring of physiological parameters is a crucial topic in promoting human health and an indispensable approach for assessing physiological status and diagnosing diseases. In particular, it holds significant value for patients who require long-term monitoring or who have underlying cardiovascular disease. To this end, Visual Contactless Physiological Monitoring (VCPM) is capable of using videos recorded by a consumer camera to monitor blood volume pulse (BVP) signal, heart rate (HR), respiratory rate (RR), oxygen saturation (SpO2) and blood pressure (BP). Recently, deep learning-based pipelines have attracted numerous scholars and achieved unprecedented development. Although VCPM is still an emerging digital medical technology and presents many challenges and opportunities, it has the potential to revolutionize clinical medicine, digital health, telemedicine and other areas. The VCPM technology presents a viable solution that can be integrated into these systems for measuring vital parameters during video consultation, owing to its merits of contactless measurement, cost-effectiveness, user-friendly passive monitoring and the sole requirement of an off-the-shelf camera. Research on VCPM technologies, particularly AI-based approaches, has grown rapidly in recent years, yet few systems are employed in clinical settings. Here we provide a comprehensive overview of the applications, challenges, and prospects of VCPM from the perspective of clinical settings and AI technologies for the first time. The thorough exploration and analysis of clinical scenarios will provide profound guidance for the research and development of VCPM technologies in clinical settings.
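As a toy illustration of the measurement principle behind VCPM, the sketch below recovers a pulse rate from a simulated green-channel skin trace by detrending and locating the spectral peak in a plausible heart-rate band. The frame rate, signal model, and band limits are assumptions for illustration only; real VCPM pipelines are far more involved:

```python
import numpy as np

fs = 30.0                      # assumed camera frame rate (Hz)
t = np.arange(0, 20, 1 / fs)   # 20 s of simulated video

# Simulated mean green-channel intensity of a skin region: baseline plus
# slow illumination drift plus a small blood-volume-pulse oscillation
# at 1.2 Hz (i.e., 72 beats per minute).
trace = 120.0 + 0.5 * t + 0.3 * np.sin(2 * np.pi * 1.2 * t)

# Remove the linear drift, apply a window, take the magnitude spectrum.
detrended = trace - np.polyval(np.polyfit(t, trace, 1), t)
spectrum = np.abs(np.fft.rfft(detrended * np.hanning(len(detrended))))
freqs = np.fft.rfftfreq(len(detrended), d=1 / fs)

# Search only a physiologically plausible band (42-240 bpm).
band = (freqs >= 0.7) & (freqs <= 4.0)
hr_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
```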
Affiliation(s)
- Bin Huang: AI Research Center, Hangzhou Innovation Institute, Beihang University, 99 Juhang Rd., Binjiang Dist., Hangzhou, Zhejiang, China; School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Shen Hu: Department of Obstetrics, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China; Department of Epidemiology, The Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Zimeng Liu: School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Chun-Liang Lin: College of Electrical Engineering and Computer Science, National Chung Hsing University, 145 Xingda Rd., South Dist., Taichung, Taiwan
- Junfeng Su: Department of General Intensive Care Unit, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China; Key Laboratory of Early Warning and Intervention of Multiple Organ Failure, China National Ministry of Education, Hangzhou, Zhejiang, China
- Changchen Zhao: AI Research Center, Hangzhou Innovation Institute, Beihang University, 99 Juhang Rd., Binjiang Dist., Hangzhou, Zhejiang, China
- Li Wang: Department of Rehabilitation Medicine, The First Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Wenjin Wang: Department of Biomedical Engineering, Southern University of Science and Technology, 1088 Xueyuan Ave, Nanshan Dist., Shenzhen, Guangdong, China
47
|
Pagallo U, O’Sullivan S, Nevejans N, Holzinger A, Friebe M, Jeanquartier F, Jean-Quartier C, Miernik A. The underuse of AI in the health sector: Opportunity costs, success stories, risks and recommendations. Health Technol 2023; 14:1-14. [PMID: 38229886 PMCID: PMC10788319 DOI: 10.1007/s12553-023-00806-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2023] [Accepted: 11/16/2023] [Indexed: 01/18/2024]
Abstract
Purpose This contribution explores the underuse of artificial intelligence (AI) in the health sector, what this means for practice, and how much the underuse can cost. Attention is drawn to the relevance of an issue that the European Parliament outlined as a "major threat" in 2020. At its heart is the risk that research and development on trusted AI systems for medicine and digital health will pile up in lab centers without generating further practical relevance. Our analysis highlights why researchers, practitioners and especially policymakers should pay attention to this phenomenon. Methods The paper examines the ways in which governments and public agencies are addressing the underuse of AI. As governments and international organizations often acknowledge the limitations of their own initiatives, the contribution explores the causes of the current issues and suggests ways to improve initiatives for digital health. Results Recommendations address the development of standards, models of regulatory governance, assessment of the opportunity costs of underuse of technology, and the urgency of the problem. Conclusions The exponential pace of AI advances and innovations makes the risks of underuse of AI increasingly threatening.
Affiliation(s)
- Ugo Pagallo: Law School, University of Turin, Turin, Italy
- Shane O’Sullivan: Department of Urology, Faculty of Medicine, University of Freiburg - Medical Centre, Freiburg im Breisgau, Germany
- Nathalie Nevejans: Ethics and Procedures Center (CDEP), Faculty of Law of Douai, University of Artois, Arras, France
- Andreas Holzinger: Human-Centered AI Lab, Medical University of Graz, Graz, Austria; University of Natural Resources and Life Sciences Vienna, Human-Centered AI Lab, Vienna, Austria
- Michael Friebe: Department of Measurements and Electronics, AGH University of Science and Technology, Kraków, Poland; Faculty of Medicine, Otto-von-Guericke-University, Magdeburg, Germany; Center for Innovation and Business Development, FOM University of Applied Sciences, Essen, Germany
- Arkadiusz Miernik: Department of Urology, Faculty of Medicine, University of Freiburg - Medical Centre, Freiburg im Breisgau, Germany
48
|
Tao S, Liu H, Sun C, Ji H, Ji G, Han Z, Gao R, Ma J, Ma R, Chen Y, Fu S, Wang Y, Sun Y, Rong Y, Zhang X, Zhou G, Sun H. Collaborative and privacy-preserving retired battery sorting for profitable direct recycling via federated machine learning. Nat Commun 2023; 14:8032. [PMID: 38052823 DOI: 10.1038/s41467-023-43883-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Accepted: 11/22/2023] [Indexed: 12/07/2023] Open
Abstract
Unsorted retired batteries with varied cathode materials hinder the adoption of direct recycling due to their cathode-specific nature. The surge in retired batteries necessitates precise sorting for effective direct recycling, but challenges arise from varying operational histories, diverse manufacturers, and the data privacy concerns of recycling collaborators (data owners). Here we show, from a unique dataset of 130 lithium-ion batteries spanning 5 cathode materials and 7 manufacturers, that a federated machine learning approach can classify these retired batteries without relying on past operational data, safeguarding the data privacy of recycling collaborators. By utilizing the features extracted from the end-of-life charge-discharge cycle, our model exhibits 1% and 3% cathode sorting errors under homogeneous and heterogeneous battery recycling settings respectively, attributed to our innovative Wasserstein-distance voting strategy. Economically, the proposed method underscores the value of precise battery sorting for a prosperous and sustainable recycling industry. This study heralds a new paradigm of using privacy-sensitive data from diverse sources, facilitating collaborative and privacy-respecting decision-making for distributed systems.
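The abstract's Wasserstein-distance voting strategy is not spelled out here, but the general idea of distribution-distance voting can be sketched as follows. This is a schematic re-creation, not the authors' code: the cathode labels, the feature distributions, and the quantile-based W1 approximation are all invented for illustration:

```python
import numpy as np

def w1_distance(u, v, n_quantiles=101):
    """Approximate 1-D Wasserstein-1 distance between two empirical
    samples as the mean absolute difference of matched quantiles."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    return np.abs(np.quantile(u, qs) - np.quantile(v, qs)).mean()

def client_vote(features, references):
    """One recycler's vote: normalized inverse-distance weights over
    cathode classes. Only the vote leaves the client, never the data."""
    votes = {label: 1.0 / (w1_distance(features, ref) + 1e-9)
             for label, ref in references.items()}
    total = sum(votes.values())
    return {label: v / total for label, v in votes.items()}

def federated_sort(features, clients):
    """Server tallies the clients' votes and returns the winning class."""
    tally = {}
    for refs in clients:
        for label, v in client_vote(features, refs).items():
            tally[label] = tally.get(label, 0.0) + v
    return max(tally, key=tally.get)

rng = np.random.default_rng(1)
# Two hypothetical recyclers, each holding private reference feature
# samples (e.g., end-of-life voltages) for two cathode chemistries.
clients = [
    {"LFP": rng.normal(3.2, 0.05, 300), "NMC": rng.normal(3.7, 0.05, 300)},
    {"LFP": rng.normal(3.2, 0.06, 300), "NMC": rng.normal(3.7, 0.06, 300)},
]
unknown = rng.normal(3.21, 0.05, 120)     # features of an unsorted battery
label = federated_sort(unknown, clients)  # → "LFP"
```

The voting step is what allows heterogeneous clients to disagree gracefully: a client whose references poorly match the sample contributes an almost uniform vote.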
Affiliation(s)
- Shengyu Tao, Haizhou Liu, Chongbo Sun, Haocheng Ji, Guanjun Ji, Zhiyuan Han, Runhua Gao, Jun Ma, Ruifei Ma, Yuou Chen, Xuan Zhang, Guangmin Zhou: Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Shiyi Fu, Yu Wang, Yaojie Sun: School of Information Science and Technology, Fudan University, Shanghai, China
- Yu Rong: Tencent AI Lab, Tencent, Shenzhen, China
- Hongbin Sun: Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China; Department of Electrical Engineering, Tsinghua University, Beijing, China; College of Electrical and Power Engineering, Taiyuan University of Technology, Taiyuan, China
49
|
Huang CT, Wang TJ, Kuo LK, Tsai MJ, Cia CT, Chiang DH, Chang PJ, Chong IW, Tsai YS, Chu YC, Liu CJ, Chen CH, Pai KC, Wu CL. Federated machine learning for predicting acute kidney injury in critically ill patients: a multicenter study in Taiwan. Health Inf Sci Syst 2023; 11:48. [PMID: 37822805 PMCID: PMC10562351 DOI: 10.1007/s13755-023-00248-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2023] [Accepted: 09/20/2023] [Indexed: 10/13/2023] Open
Abstract
Purpose To address the contentious issue of data sharing across hospitals, this study adopted a novel approach, federated learning (FL), to establish an aggregate model for acute kidney injury (AKI) prediction in critically ill patients in Taiwan. Methods This study used data from the Critical Care Database of Taichung Veterans General Hospital (TCVGH) from 2015 to 2020 and electronic medical records of the intensive care units (ICUs) between 2018 and 2020 of four referral centers in different areas across Taiwan. AKI prediction models were trained and validated on these data. An FL-based prediction model across hospitals was then established. Results The study included 16,732 ICU admissions from the TCVGH and 38,424 ICU admissions from the other four hospitals. The complete model with 60 features and the parsimonious model with 21 features demonstrated comparable accuracies using extreme gradient boosting, neural network (NN), and random forest, with an area under the receiver-operating characteristic (AUROC) curve of approximately 0.90. The Shapley Additive Explanations plot demonstrated that the selected features were the key clinical components of AKI for critically ill patients. The AUROC curve of the established parsimonious model for external validation at the four hospitals ranged from 0.760 to 0.865. NN-based FL slightly improved the model performance at the four centers. Conclusion A reliable prediction model for AKI in ICU patients was developed with a lead time of 24 h, and it performed better when the novel FL platform across hospitals was implemented. Supplementary Information The online version contains supplementary material available at 10.1007/s13755-023-00248-5.
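The AUROC figures quoted in abstracts like this one can be computed without any curve plotting via the rank (Mann-Whitney) formulation. The sketch below assumes untied scores and is unrelated to the study's actual code; the example labels and scores are hypothetical:

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC as the probability that a randomly chosen positive case is
    scored above a randomly chosen negative one (rank formulation;
    assumes no tied scores)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # 1-based ranks
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical risk scores: the model mostly ranks positive cases higher.
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```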
Affiliation(s)
- Chun-Te Huang: Institute of Emergency and Critical Care Medicine, National Yang-Ming Chiao Tung University, Taipei, Taiwan; Nephrology and Critical Care Medicine, Department of Internal Medicine and Critical Care Medicine, Taichung Veterans General Hospital, Taichung, Taiwan
- Tsai-Jung Wang: Nephrology and Critical Care Medicine, Department of Internal Medicine and Critical Care Medicine, Taichung Veterans General Hospital, Taichung, Taiwan
- Li-Kuo Kuo: Department of Critical Care Medicine, MacKay Memorial Hospital, Taipei, Taiwan
- Ming-Ju Tsai: Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, School of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Cong-Tat Cia: Division of Critical Care Medicine, Department of Internal Medicine, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Dung-Hung Chiang: Department of Critical Care Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Po-Jen Chang: Department of Information Technology, MacKay Memorial Hospital, Taipei, Taiwan
- Inn-Wen Chong: Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, School of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Yi-Shan Tsai: Department of Diagnostic Radiology, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Yuan-Chia Chu: Department of Information Technology, Taipei Veterans General Hospital, Taipei, Taiwan
- Chia-Jen Liu: Institute of Emergency and Critical Care Medicine, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Cheng-Hsu Chen: Division of Nephrology, Department of Internal Medicine, Taichung Veterans General Hospital, Taichung, Taiwan
- Kai-Chih Pai: College of Engineering, Tunghai University, Taichung, Taiwan
- Chieh-Liang Wu: College of Medicine, National Chung Hsing University, Taichung, Taiwan
50
|
Sandhu SS, Gorji HT, Tavakolian P, Tavakolian K, Akhbardeh A. Medical Imaging Applications of Federated Learning. Diagnostics (Basel) 2023; 13:3140. [PMID: 37835883 PMCID: PMC10572559 DOI: 10.3390/diagnostics13193140] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2023] [Revised: 10/03/2023] [Accepted: 10/03/2023] [Indexed: 10/15/2023] Open
Abstract
Since its introduction in 2016, researchers have applied the idea of Federated Learning (FL) to several domains ranging from edge computing to banking. The technique's inherent security benefits, privacy-preserving capabilities, ease of scalability, and ability to transcend data biases have motivated researchers to use this tool on healthcare datasets. While several reviews exist detailing FL and its applications, this review focuses solely on the different applications of FL to medical imaging datasets, grouping applications by disease, modality, and/or part of the body. This systematic literature review was conducted by querying and consolidating results from ArXiv, IEEE Xplore, and PubMed. Furthermore, we provide a detailed description of FL architecture and models, descriptions of the performance achieved by FL models, and how results compare with traditional Machine Learning (ML) models. Additionally, we discuss the security benefits, highlighting two primary forms of privacy-preserving techniques, homomorphic encryption and differential privacy. Finally, we provide some background information and context regarding where the contributions lie, organized into the following categories: architecture/setup type, data-related topics, security, and learning types. While progress has been made within the field of FL and medical imaging, much room for improvement and understanding remains; security and data issues are the primary concerns for researchers and continue to push the field forward. We also highlight the challenges in deploying FL in medical imaging applications and provide recommendations for future directions.
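The differential privacy this review discusses is commonly realized in FL by clipping each client's update and adding calibrated Gaussian noise before transmission. The sketch below is a generic illustration of that mechanism, not any reviewed system's implementation; the clip norm and noise multiplier are illustrative values:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Gaussian-mechanism sketch for one client's model update: clip the
    L2 norm to bound sensitivity, then add Gaussian noise scaled by
    noise_mult * clip_norm before the update leaves the client."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
    return clipped + noise

rng = np.random.default_rng(0)
updates = [rng.normal(0.0, 0.1, 10) for _ in range(50)]   # 50 clients
private = [privatize_update(u, rng=rng) for u in updates]
aggregate = np.mean(private, axis=0)  # per-client noise averages down
```

In practice a privacy accountant tracks the cumulative (epsilon, delta) budget across training rounds; this snippet only shows the shape of the per-round mechanism.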
Affiliation(s)
- Sukhveer Singh Sandhu: Biomedical Engineering Program, University of North Dakota, Grand Forks, ND 58202, USA
- Hamed Taheri Gorji: Biomedical Engineering Program, University of North Dakota, Grand Forks, ND 58202, USA; SafetySpect Inc., 4200 James Ray Dr., Grand Forks, ND 58202, USA
- Pantea Tavakolian: Biomedical Engineering Program, University of North Dakota, Grand Forks, ND 58202, USA
- Kouhyar Tavakolian: Biomedical Engineering Program, University of North Dakota, Grand Forks, ND 58202, USA