1
Matos J, Gallifant J, Chowdhury A, Economou-Zavlanos N, Charpignon ML, Gichoya J, Celi LA, Nazer L, King H, Wong AKI. A Clinician's Guide to Understanding Bias in Critical Clinical Prediction Models. Crit Care Clin 2024; 40:827-857. [PMID: 39218488] [DOI: 10.1016/j.ccc.2024.05.011]
Abstract
This narrative review focuses on the role of clinical prediction models in supporting informed decision-making in critical care, emphasizing their two forms: traditional scores and artificial intelligence (AI)-based models. Acknowledging the potential for both types to embed biases, the authors underscore the importance of critical appraisal to increase trust in models. The authors outline recommendations and critical care examples for managing risk of bias in AI models. They advocate for enhanced interdisciplinary training for clinicians, who are encouraged to explore various resources (books, journals, news websites, and social media) and events (datathons) to deepen their understanding of risk of bias.
Affiliation(s)
- João Matos
- University of Porto (FEUP), Porto, Portugal; Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal; Laboratory for Computational Physiology, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Jack Gallifant
- Laboratory for Computational Physiology, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Critical Care, Guy's and St Thomas' NHS Trust, London, UK
- Anand Chowdhury
- Division of Pulmonary, Allergy, and Critical Care Medicine, Department of Medicine, Duke University, Durham, NC, USA
- Marie-Laure Charpignon
- Institute for Data Systems and Society, Massachusetts Institute of Technology, Cambridge, MA, USA
- Judy Gichoya
- Department of Radiology, Emory University, Atlanta, GA, USA
- Leo Anthony Celi
- Laboratory for Computational Physiology, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA; Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Lama Nazer
- Department of Pharmacy, King Hussein Cancer Center, Amman, Jordan
- Heather King
- Durham VA Health Care System, Health Services Research and Development, Center of Innovation to Accelerate Discovery and Practice Transformation (ADAPT), Durham, NC, USA; Department of Population Health Sciences, Duke University, Durham, NC, USA; Division of General Internal Medicine, Duke University, Duke University School of Medicine, Durham, NC, USA
- An-Kwok Ian Wong
- Division of Pulmonary, Allergy, and Critical Care Medicine, Department of Medicine, Duke University, Durham, NC, USA; Department of Biostatistics and Bioinformatics, Duke University, Division of Translational Biomedical Informatics, Durham, NC, USA
2
Singh JP, Krahn A, Deering TF. Preserving humanism in a digitally transforming world. Heart Rhythm 2024; 21:e265-e267. [PMID: 39207354] [DOI: 10.1016/j.hrthm.2024.08.005]
Affiliation(s)
- Jagmeet P Singh
- Division of Cardiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Andrew Krahn
- Division of Cardiology, Heart Rhythm Services, University of British Columbia, Vancouver, British Columbia, Canada
- Thomas F Deering
- Arrhythmia Center, Piedmont Heart Institute, Piedmont Healthcare, Atlanta, Georgia
3
Gaudin R, Otto W, Ghanad I, Kewenig S, Rendenbach C, Alevizakos V, Grün P, Kofler F, Heiland M, von See C. Enhanced Osteoporosis Detection Using Artificial Intelligence: A Deep Learning Approach to Panoramic Radiographs with an Emphasis on the Mental Foramen. Med Sci (Basel) 2024; 12:49. [PMID: 39311162] [PMCID: PMC11417815] [DOI: 10.3390/medsci12030049]
Abstract
Osteoporosis, a skeletal disorder, is expected to affect 60% of women aged over 50 years. Dual-energy X-ray absorptiometry (DXA) scans, the current gold standard, are typically used post-fracture, highlighting the need for early detection tools. Panoramic radiographs (PRs), common in annual dental evaluations, have been explored for osteoporosis detection using deep learning, but methodological flaws have cast doubt on otherwise optimistic results. This study aims to develop a robust artificial intelligence (AI) application for accurate osteoporosis identification in PRs, contributing to early and reliable diagnostics. A total of 250 PRs from three groups (A: osteoporosis group, B: non-osteoporosis group matching A in age and gender, C: non-osteoporosis group differing from A in age and gender) were cropped to the mental foramen region. A pretrained convolutional neural network (CNN) classifier was used for training, testing, and validation with a random split of the dataset into subsets (A vs. B, A vs. C). Detection accuracy and area under the curve (AUC) were calculated. The method achieved an F1 score of 0.74 and an AUC of 0.8401 (A vs. B). For young patients (A vs. C), it performed with 98% accuracy and an AUC of 0.9812. This study presents a proof-of-concept algorithm, demonstrating the potential of deep learning to identify osteoporosis in dental radiographs. It also highlights the importance of methodological rigor, as not all optimistic results are credible.
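The abstract's headline numbers (F1 score and AUC) are plain functions of a classifier's scores and labels. The following minimal sketch shows how both metrics are computed; the toy data and function names are illustrative only, not the study's predictions.

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive case outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1(preds, labels):
    """F1 score for binary predictions: harmonic mean of precision and recall."""
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    return 2 * tp / (2 * tp + fp + fn)

# Toy example: two osteoporosis (1) vs. two control (0) radiograph crops
print(auc([0.9, 0.4, 0.6, 0.2], [1, 1, 0, 0]))  # 0.75
print(f1([1, 0, 1, 0], [1, 1, 0, 0]))           # 0.5
```

Both metrics are threshold-related but distinct: AUC ranks raw scores, while F1 requires a hard decision threshold, which is why the paper reports them separately.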
Affiliation(s)
- Robert Gaudin
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Augustenburger Platz 1, 13353 Berlin, Germany
- Berlin Institute of Health, Charité—Universitätsmedizin Berlin, 10117 Berlin, Germany
- Wolfram Otto
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Augustenburger Platz 1, 13353 Berlin, Germany
- Iman Ghanad
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Augustenburger Platz 1, 13353 Berlin, Germany
- Stephan Kewenig
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Augustenburger Platz 1, 13353 Berlin, Germany
- Carsten Rendenbach
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Augustenburger Platz 1, 13353 Berlin, Germany
- Vasilios Alevizakos
- Center for Digital Technologies in Dentistry and CAD/CAM, Danube Private University, 3500 Krems an der Donau, Austria
- Pascal Grün
- Center for Oral and Maxillofacial Surgery, Faculty of Medicine/Dental Medicine, Danube Private University, 3500 Krems an der Donau, Austria
- Florian Kofler
- Helmholtz AI, Helmholtz Zentrum München, Ingolstaedter Landstrasse 1, 85764 Oberschleissheim, Germany
- TUM—Neuroimaging Center, Klinikum Rechts der Isar, Technical University of Munich, 81675 Munich, Germany
- Max Heiland
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Oral and Maxillofacial Surgery, Augustenburger Platz 1, 13353 Berlin, Germany
- Constantin von See
- Center for Digital Technologies in Dentistry and CAD/CAM, Danube Private University, 3500 Krems an der Donau, Austria
4
Lavik E, Minasian L. Bioconjugates for Cancer Prevention: Opportunities for Impact. Bioconjug Chem 2024; 35:1148-1153. [PMID: 39116257] [DOI: 10.1021/acs.bioconjchem.4c00283]
Abstract
Cancer prevention encompasses both screening strategies to find cancers early when they are likely to be most treatable and prevention and interception strategies to reduce the risk of developing cancers. Bioconjugates, here defined broadly as materials and molecules that have synthetic and biological components, have roles to play across the cancer-prevention spectrum. In particular, bioconjugates may be developed as affordable, accessible, and effective screening strategies or as novel vaccines and drugs to reduce one's risk of developing cancers. Developmental programs are available for taking novel technologies and evaluating them for clinical use in cancer screening and prevention. While a variety of different challenges exist in implementing cancer-prevention interventions, a thoughtful approach to bioconjugates could improve the delivery and acceptability of the interventions.
Affiliation(s)
- Erin Lavik
- Division of Cancer Prevention, National Cancer Institute, 9609 Medical Center Dr, Rockville, Maryland 20850, United States
- Lori Minasian
- Division of Cancer Prevention, National Cancer Institute, 9609 Medical Center Dr, Rockville, Maryland 20850, United States
5
Hao S, Matos J, Dempsey K, Alwakeel M, Houghtaling J, Hong C, Gichoya J, Kibbe W, Pencina M, Cox CE, Ian Wong A. ENCoDE - a skin tone and clinical dataset from a prospective trial on acute care patients. medRxiv 2024:2024.08.07.24311623 [Preprint]. [PMID: 39211868] [PMCID: PMC11361235] [DOI: 10.1101/2024.08.07.24311623]
Abstract
Background Although skin tone is hypothesized to be the root cause of pulse oximetry disparities, its measurement and its use for improving medical therapies have yet to be extensively studied. Previous studies used self-reported race as a proxy for skin tone. However, this approach cannot account for skin tone variability within race groups and risks confounding by other non-biological factors when modeling data. Therefore, to better evaluate health disparities associated with pulse oximetry, this study aimed to create a unique baseline dataset that includes skin tone and electronic health record (EHR) data. Methods Patients admitted to Duke University Hospital were eligible if they had at least one pulse oximetry value recorded within 5 minutes before an arterial blood gas (ABG) value. We collected skin tone data at 16 body locations using multiple devices, including administered visual scales, colorimetry, spectrophotometry, and photography via mobile phone cameras. All patients' data were linked in Duke's Protected Analytics Computational Environment (PACE), converted into a common data model, and then de-identified before publication on PhysioNet. Results Skin tone data were collected from 128 patients. We assessed 167 features per skin location for each patient. We also collected over 2,000 mobile phone images captured in the same controlled environment. Skin tone data are linked with patients' EHR data, such as laboratory results, vital sign recordings, and demographic information. Conclusions Measuring different aspects of skin tone at each of the sixteen body locations and linking them with patients' EHR data could support the development of more equitable AI models to combat skin tone-associated disparities in healthcare. A common data model format enables easy data federation with similar data from other sources, facilitating multicenter research on skin tone in healthcare.
Description: A prospectively collected, EHR-linked skin tone measurement database in a common data model, with emphasis on pulse oximetry disparities.
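The linkage step described in the Methods, joining per-location skin tone measurements to EHR values through a shared patient identifier, amounts to a simple keyed join. A minimal sketch follows; all field names and values are illustrative, not the actual ENCoDE schema.

```python
# Hypothetical per-location skin tone measurements (illustrative fields only)
skin_tone = [
    {"patient_id": 1, "location": "forehead", "ita_deg": 41.2},
    {"patient_id": 1, "location": "fingertip", "ita_deg": 48.9},
    {"patient_id": 2, "location": "forehead", "ita_deg": 22.5},
]

# Hypothetical EHR values keyed by patient id: paired pulse-ox / ABG readings
ehr = {
    1: {"spo2": 96, "sao2": 93.1},
    2: {"spo2": 98, "sao2": 97.4},
}

# Keyed join: each measurement row gains its patient's EHR values,
# yielding one analysis-ready record per (patient, body location).
linked = [{**row, **ehr[row["patient_id"]]} for row in skin_tone]
print(linked[0]["spo2"])  # 96
```

In a real common-data-model pipeline the same join is expressed in SQL or a dataframe merge, but the keying logic is identical.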
6
Holzschuh JC, Mix M, Freitag MT, Hölscher T, Braune A, Kotzerke J, Vrachimis A, Doolan P, Ilhan H, Marinescu IM, Spohn SKB, Fechter T, Kuhn D, Gratzke C, Grosu R, Grosu AL, Zamboglou C. The impact of multicentric datasets for the automated tumor delineation in primary prostate cancer using convolutional neural networks on 18F-PSMA-1007 PET. Radiat Oncol 2024; 19:106. [PMID: 39113123] [PMCID: PMC11304577] [DOI: 10.1186/s13014-024-02491-w]
Abstract
PURPOSE Convolutional Neural Networks (CNNs) have emerged as transformative tools in the field of radiation oncology, significantly advancing the precision of contouring practices. However, the adaptability of these algorithms across diverse scanners, institutions, and imaging protocols remains a considerable obstacle. This study aims to investigate the effects of incorporating institution-specific datasets into the training regimen of CNNs to assess their generalization ability in real-world clinical environments. In a data-centric analysis, the influence of multi-center versus single-center training approaches on algorithm performance is examined. METHODS nnU-Net is trained using a dataset comprising 161 18F-PSMA-1007 PET images collected from four distinct institutions (Freiburg: n = 96, Munich: n = 19, Cyprus: n = 32, Dresden: n = 14). The dataset is partitioned such that data from each center are systematically excluded from training and used solely for testing to assess the model's generalizability and adaptability to data from unfamiliar sources. Performance is compared through five-fold cross-validation, providing a detailed comparison between models trained on datasets from single centers and those trained on aggregated multi-center datasets. Dice similarity coefficient (DSC), Hausdorff distance, and volumetric analysis are used as primary evaluation metrics. RESULTS The mixed training approach yielded a median DSC of 0.76 (IQR: 0.64-0.84) in a five-fold cross-validation, showing no significant differences (p = 0.18) compared to models trained with data exclusion from each center, which performed with a median DSC of 0.74 (IQR: 0.56-0.86). Significant performance improvements regarding multi-center training were observed for the Dresden cohort (multi-center median DSC 0.71, IQR: 0.58-0.80 vs. single-center 0.68, IQR: 0.50-0.80, p < 0.001) and Cyprus cohort (multi-center 0.74, IQR: 0.62-0.83 vs. single-center 0.72, IQR: 0.54-0.82, p < 0.01).
While Munich and Freiburg also showed performance improvements with multi-center training, results showed no statistical significance (Munich: multi-center DSC 0.74, IQR: 0.60-0.80 vs. single-center 0.72, IQR: 0.59-0.82, p > 0.05; Freiburg: multi-center 0.78, IQR: 0.53-0.87 vs. single-center 0.71, IQR: 0.53-0.83, p = 0.23). CONCLUSION CNNs trained for auto contouring intraprostatic GTV in 18F-PSMA-1007 PET on a diverse dataset from multiple centers mostly generalize well to unseen data from other centers. Training on a multicentric dataset can improve performance compared to training exclusively with a single-center dataset regarding intraprostatic 18F-PSMA-1007 PET GTV segmentation. The segmentation performance of the same CNN can vary depending on the dataset employed for training and testing.
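The two evaluation devices in this abstract, the Dice similarity coefficient and the leave-one-center-out split, are both small and easy to state precisely. A hedged sketch follows; the cohort sizes mirror the paper, but the function names and case ids are illustrative, not the authors' code.

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks,
    each given as a set of voxel indices: 2|A∩B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

def leave_one_center_out(cases, held_out):
    """Partition cases (center -> list of scan ids) so that one center's
    data is reserved entirely for testing, never seen in training."""
    train = [c for center, ids in cases.items() if center != held_out for c in ids]
    test = list(cases[held_out])
    return train, test

# Toy cohort with the paper's center sizes (Freiburg 96, Munich 19,
# Cyprus 32, Dresden 14); ids are placeholders for scans.
cases = {"Freiburg": list(range(96)), "Munich": list(range(19)),
         "Cyprus": list(range(32)), "Dresden": list(range(14))}
train, test = leave_one_center_out(cases, "Dresden")
print(len(train), len(test))          # 147 14
print(dice({1, 2, 3}, {2, 3, 4}))     # 0.666...
```

Holding out a whole center, rather than a random subset, is what makes the test an estimate of cross-institution generalization rather than in-distribution accuracy.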
Affiliation(s)
- Julius C Holzschuh
- Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Michael Mix
- Department of Nuclear Medicine, Faculty of Medicine, Medical Center - University of Freiburg, Freiburg, Germany
- Martin T Freitag
- Department of Nuclear Medicine, Faculty of Medicine, Medical Center - University of Freiburg, Freiburg, Germany
- Tobias Hölscher
- Department of Radiotherapy and Radiation Oncology, Faculty of Medicine, University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Anja Braune
- Department of Nuclear Medicine, Faculty of Medicine, University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Jörg Kotzerke
- Department of Nuclear Medicine, Faculty of Medicine, University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Alexis Vrachimis
- Department of Nuclear Medicine, German Oncology Center, European University Cyprus, Limassol, Cyprus
- Paul Doolan
- Department of Medical Physics, German Oncology Center, European University Cyprus, Limassol, Cyprus
- Harun Ilhan
- Department of Nuclear Medicine, University Hospital - Ludwig-Maximilians-Universität, Munich, Germany
- Ioana M Marinescu
- Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Simon K B Spohn
- Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Tobias Fechter
- Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Division of Medical Physics, Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Dejan Kuhn
- Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Division of Medical Physics, Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Christian Gratzke
- Department of Urology, Medical Center - University of Freiburg, Freiburg, Germany
- Radu Grosu
- Cyber-Physical Systems Division, Institute of Computer Engineering and Faculty of Informatics, Technical University of Vienna, Vienna, Austria
- Department of Computer Science, State University of New York at Stony Brook, Stony Brook, NY, USA
- Anca-Ligia Grosu
- Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- C Zamboglou
- Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Department of Radiation Oncology, German Oncology Center, European University Cyprus, Limassol, Cyprus
7
Muralidharan V, Schamroth J, Youssef A, Celi LA, Daneshjou R. Applied artificial intelligence for global child health: Addressing biases and barriers. PLOS Digit Health 2024; 3:e0000583. [PMID: 39172772] [PMCID: PMC11340888] [DOI: 10.1371/journal.pdig.0000583]
Abstract
Given the potential benefits of artificial intelligence and machine learning (AI/ML) within healthcare, it is critical to consider how these technologies can be deployed in pediatric research and practice. Currently, healthcare AI/ML has not yet adapted to the specific technical considerations related to pediatric data nor adequately addressed the specific vulnerabilities of children and young people (CYP) in relation to AI. While the greatest burden of disease in CYP is firmly concentrated in lower and middle-income countries (LMICs), existing applied pediatric AI/ML efforts are concentrated in a small number of high-income countries (HICs). In LMICs, use-cases remain primarily in the proof-of-concept stage. This narrative review identifies a number of intersecting challenges that pose barriers to effective AI/ML for CYP globally and explores the shifts needed to make progress across multiple domains. Child-specific technical considerations throughout the AI/ML lifecycle have been largely overlooked thus far, yet these can be critical to model effectiveness. Governance concerns are paramount, with suitable national and international frameworks and guidance required to enable the safe and responsible deployment of advanced technologies impacting the care of CYP and using their data. An ambitious vision for child health demands that the potential benefits of AI/ML are realized universally through greater international collaboration, capacity building, strong oversight, and ultimately diffusing the AI/ML locus of power to empower researchers and clinicians globally. To ensure that AI/ML systems do not exacerbate inequalities in pediatric care, teams researching and developing these technologies in LMICs must ensure that AI/ML research is inclusive of the needs and concerns of CYP and their caregivers.
A broad, interdisciplinary, and human-centered approach to AI/ML is essential for developing tools for healthcare workers delivering care, such that the creation and deployment of ML is grounded in local systems, cultures, and clinical practice. Decisions to invest in developing and testing pediatric AI/ML in resource-constrained settings must always be part of a broader evaluation of the overall needs of a healthcare system, considering the critical building blocks underpinning effective, sustainable, and cost-efficient healthcare delivery for CYP.
Affiliation(s)
- Vijaytha Muralidharan
- Department of Dermatology, Stanford University, Stanford, California, United States of America
- Joel Schamroth
- Faculty of Population Health Sciences, University College London, London, United Kingdom
- Alaa Youssef
- Stanford Center for Artificial Intelligence in Medicine and Imaging, Department of Radiology, Stanford University, Stanford, California, United States of America
- Leo A. Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, United States of America
- Roxana Daneshjou
- Department of Dermatology, Stanford University, Stanford, California, United States of America
- Department of Biomedical Data Science, Stanford University, Stanford, California, United States of America
8
Warren BE, Bilbily A, Gichoya JW, Conway A, Li B, Fawzy A, Barragán C, Jaberi A, Mafeld S. An Introductory Guide to Artificial Intelligence in Interventional Radiology: Part 1 Foundational Knowledge. Can Assoc Radiol J 2024; 75:558-567. [PMID: 38445497] [DOI: 10.1177/08465371241236376]
Abstract
Artificial intelligence (AI) is rapidly evolving and has transformative potential for interventional radiology (IR) clinical practice. However, formal training in AI may be limited for many clinicians and therefore presents a challenge for initial implementation and trust in AI. An understanding of the foundational concepts in AI may help familiarize the interventional radiologist with the field of AI, thus facilitating understanding and participation in the development and deployment of AI. A pragmatic classification system of AI based on the complexity of the model may guide clinicians in the assessment of AI. Finally, the current state of AI in IR and the patterns of implementation are explored (pre-procedural, intra-procedural, and post-procedural).
Affiliation(s)
- Blair Edward Warren
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Joint Department of Medical Imaging, University Health Network, Toronto, ON, Canada
- Alexander Bilbily
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- 16 Bit Inc., Toronto, ON, Canada
- Sunnybrook Health Sciences Centre, University of Toronto, Toronto, ON, Canada
- Aaron Conway
- Prince Charles Hospital, Queensland University of Technology, Brisbane, QLD, Australia
- Ben Li
- Division of Vascular Surgery, Department of Surgery, University of Toronto, Toronto, ON, Canada
- Aly Fawzy
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Camilo Barragán
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Joint Department of Medical Imaging, University Health Network, Toronto, ON, Canada
- Arash Jaberi
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Joint Department of Medical Imaging, University Health Network, Toronto, ON, Canada
- Sebastian Mafeld
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Joint Department of Medical Imaging, University Health Network, Toronto, ON, Canada
9
López-Úbeda P, Martín-Noguerol T, Díaz-Angulo C, Luna A. Evaluation of large language models performance against humans for summarizing MRI knee radiology reports: A feasibility study. Int J Med Inform 2024; 187:105443. [PMID: 38615509] [DOI: 10.1016/j.ijmedinf.2024.105443]
Abstract
OBJECTIVES This study addresses the critical need for accurate summarization in radiology by comparing various Large Language Model (LLM)-based approaches for automatic summary generation. With the increasing volume of patient information, accurately and concisely conveying radiological findings becomes crucial for effective clinical decision-making. Minor inaccuracies in summaries can lead to significant consequences, highlighting the need for reliable automated summarization tools. METHODS We employed two language models - Text-to-Text Transfer Transformer (T5) and Bidirectional and Auto-Regressive Transformers (BART) - in both fine-tuned and zero-shot learning scenarios and compared them with a Recurrent Neural Network (RNN). Additionally, we conducted a comparative analysis of 100 MRI report summaries, using expert human judgment and criteria such as coherence, relevance, fluency, and consistency, to evaluate the models against the original radiologist summaries. To facilitate this, we compiled a dataset of 15,508 retrospective knee Magnetic Resonance Imaging (MRI) reports from our Radiology Information System (RIS), focusing on the findings section to predict the radiologist's summary. RESULTS The fine-tuned models outperform the neural network and show superior performance in the zero-shot variant. Specifically, the T5 model achieved a Rouge-L score of 0.638. Based on the radiologist readers' study, the summaries produced by this model were found to be very similar to those produced by a radiologist, with about 70% similarity in fluency and consistency between the T5-generated summaries and the original ones. CONCLUSIONS Technological advances, especially in NLP and LLM, hold great promise for improving and streamlining the summarization of radiological findings, thus providing valuable assistance to radiologists in their work.
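The Rouge-L score reported for the T5 model is an F-measure over the longest common subsequence (LCS) of candidate and reference tokens. A minimal sketch of the computation follows; the example report summaries are invented for illustration, not drawn from the study's dataset.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence, by dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    """ROUGE-L F-measure (beta = 1) over whitespace tokens: LCS-based
    precision against the candidate, recall against the reference."""
    c, r = candidate.split(), reference.split()
    l = lcs_len(c, r)
    if l == 0:
        return 0.0
    precision, recall = l / len(c), l / len(r)
    return 2 * precision * recall / (precision + recall)

# Invented knee-MRI summary pair (illustrative only)
print(rouge_l_f1("small joint effusion without fracture",
                 "joint effusion no fracture"))  # 0.666...
```

Because LCS rewards in-order word overlap rather than exact n-grams, ROUGE-L tolerates paraphrase better than ROUGE-1/2, which is one reason it is a common headline metric for report summarization.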
Affiliation(s)
- Antonio Luna
- MRI Unit, Radiology Department, Health Time, Jaén, Spain
10
Lyman GH, Kuderer NM. Artificial Intelligence in Cancer Clinical Research: II. Development and Validation of Clinical Prediction Models. Cancer Invest 2024; 42:447-451. [PMID: 38775011] [DOI: 10.1080/07357907.2024.2354991]
Affiliation(s)
- Gary H Lyman
- Editor-in-Chief, Cancer Investigation; Division of Public Health Sciences, Fred Hutchinson Cancer Center, Seattle, WA, USA
- Nicole M Kuderer
- Deputy Editor, Cancer Investigation; Advanced Cancer Research Group, Kirkland, WA, USA
11
Nakayama LF, Restrepo D, Matos J, Ribeiro LZ, Malerbi FK, Celi LA, Regatieri CS. BRSET: A Brazilian Multilabel Ophthalmological Dataset of Retina Fundus Photos. PLOS Digit Health 2024; 3:e0000454. [PMID: 38991014] [PMCID: PMC11239107] [DOI: 10.1371/journal.pdig.0000454]
Abstract
INTRODUCTION The Brazilian Multilabel Ophthalmological Dataset (BRSET) addresses the scarcity of publicly available ophthalmological datasets in Latin America. BRSET comprises 16,266 color fundus retinal photos from 8,524 Brazilian patients, aiming to enhance data representativeness, serving as a research and teaching tool. It contains sociodemographic information, enabling investigations into differential model performance across demographic groups. METHODS Data from three São Paulo outpatient centers yielded demographic and medical information from electronic records, including nationality, age, sex, clinical history, insulin use, and duration of diabetes diagnosis. A retinal specialist labeled images for anatomical features (optic disc, blood vessels, macula), quality control (focus, illumination, image field, artifacts), and pathologies (e.g., diabetic retinopathy). Diabetic retinopathy was graded using International Clinic Diabetic Retinopathy and Scottish Diabetic Retinopathy Grading. Validation used a ConvNext model trained for 50 epochs with a weighted cross-entropy loss to avoid overfitting, using 70% training (20% validation) and 30% testing subsets. Performance metrics included area under the receiver operating curve (AUC) and Macro F1-score. Saliency maps were calculated for interpretability. RESULTS BRSET comprises 65.1% Canon CR2 and 34.9% Nikon NF5050 images. 61.8% of the patients are female, and the average age is 57.6 (± 18.26) years. Diabetic retinopathy affected 15.8% of patients, across a spectrum of disease severity. Anatomically, 20.2% showed abnormal optic discs, 4.9% abnormal blood vessels, and 28.8% abnormal macula.
A ConvNeXt V2 model was trained and evaluated on BRSET in four prediction tasks: binary diabetic retinopathy diagnosis (Normal vs Diabetic Retinopathy) (AUC: 97, F1: 89); 3-class diabetic retinopathy diagnosis (Normal, Proliferative, Non-Proliferative) (AUC: 97, F1: 82); diabetes diagnosis (AUC: 91, F1: 83); and sex classification (AUC: 87, F1: 70). DISCUSSION BRSET is the first multilabel ophthalmological dataset in Brazil and Latin America. It provides an opportunity for investigating model biases by evaluating performance across demographic groups. The model performance on these prediction tasks demonstrates the value of the dataset for external validation and for teaching medical computer vision to learners in Latin America using locally relevant data sources.
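As an illustrative sketch of the validation metric described above — macro F1, the unweighted mean of per-class F1 scores — together with the kind of inverse-frequency class weighting a weighted cross-entropy loss commonly uses (the exact weighting scheme is not specified in the abstract, so the weighting function is an assumption, and all names here are hypothetical):

```python
from collections import Counter

def macro_f1(y_true, y_pred, labels):
    """Macro F1: compute per-class precision/recall/F1, then take the
    unweighted mean across classes, so rare classes count equally."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def inverse_frequency_weights(y_true, labels):
    """One common choice for cross-entropy class weights: each class is
    weighted by n_samples / (n_classes * class_count), so minority
    classes (e.g., proliferative retinopathy) get larger weights."""
    counts = Counter(y_true)
    n = len(y_true)
    return {c: n / (len(labels) * counts[c]) for c in labels}
```

In a three-class imbalanced setting like the Normal / Non-Proliferative / Proliferative task, such weights keep the loss from being dominated by the majority (Normal) class.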
Collapse
Affiliation(s)
- Luis Filipe Nakayama
- Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
| | - David Restrepo
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Telematics Department, University of Cauca, Popayán, Cauca, Colombia
| | - João Matos
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Faculty of Engineering of University of Porto, Porto, Portugal
| | - Lucas Zago Ribeiro
- Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
| | - Fernando Korn Malerbi
- Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
| | - Leo Anthony Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, United States of America
| | - Caio Saito Regatieri
- Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
| |
Collapse
|
12
|
Affiliation(s)
- Gary H Lyman
- Editor-in-Chief, Division of Public Health Sciences, Fred Hutchinson Cancer Center, Seattle, WA, USA
| | | |
Collapse
|
13
|
Moawad AW, Janas A, Baid U, Ramakrishnan D, Saluja R, Ashraf N, Jekel L, Amiruddin R, Adewole M, Albrecht J, Anazodo U, Aneja S, Anwar SM, Bergquist T, Calabrese E, Chiang V, Chung V, Conte GMM, Dako F, Eddy J, Ezhov I, Familiar A, Farahani K, Iglesias JE, Jiang Z, Johanson E, Kazerooni AF, Kofler F, Krantchev K, LaBella D, Van Leemput K, Li HB, Linguraru MG, Link KE, Liu X, Maleki N, Meier Z, Menze BH, Moy H, Osenberg K, Piraud M, Reitman Z, Shinohara RT, Tahon NH, Nada A, Velichko YS, Wang C, Wiestler B, Wiggins W, Shafique U, Willms K, Avesta A, Bousabarah K, Chakrabarty S, Gennaro N, Holler W, Kaur M, LaMontagne P, Lin M, Lost J, Marcus DS, Maresca R, Merkaj S, Nada A, Pedersen GC, von Reppert M, Sotiras A, Teytelboym O, Tillmans N, Westerhoff M, Youssef A, Godfrey D, Floyd S, Rauschecker A, Villanueva-Meyer J, Pflüger I, Cho J, Bendszus M, Brugnara G, Cramer J, Perez-Carillo GJG, Johnson DR, Kam A, Kwan BYM, Lai L, Lall NU, Memon F, Patro SN, Petrovic B, So TY, Thompson G, Wu L, Schrickel EB, Bansal A, Barkhof F, Besada C, Chu S, Druzgal J, Dusoi A, Farage L, Feltrin F, Fong A, Fung SH, Gray RI, Ikuta I, Iv M, Postma AA, Mahajan A, Joyner D, Krumpelman C, Letourneau-Guillon L, Lincoln CM, Maros ME, Miller E, Morón F, Nimchinsky EA, Ozsarlak O, Patel U, Rohatgi S, Saha A, Sayah A, Schwartz ED, Shih R, Shiroishi MS, Small JE, Tanwar M, Valerie J, Weinberg BD, White ML, Young R, Zohrabian VM, Azizova A, Brüßeler MMT, Fehringer P, Ghonim M, Ghonim M, Gkampenis A, Okar A, Pasquini L, Sharifi Y, Singh G, Sollmann N, Soumala T, Taherzadeh M, Yordanov N, Vollmuth P, Foltyn-Dumitru M, Malhotra A, Abayazeed AH, Dellepiane F, Lohmann P, Pérez-García VM, Elhalawani H, Al-Rubaiey S, Armindo RD, Ashraf K, Asla MM, Badawy M, Bisschop J, Lomer NB, Bukatz J, Chen J, Cimflova P, Corr F, Crawley A, Deptula L, Elakhdar T, Shawali IH, Faghani S, Frick A, Gulati V, Haider MA, Hierro F, Dahl RH, Jacobs SM, Hsieh KCJ, Kandemirli SG, Kersting K, Kida L, Kollia S, Koukoulithras I, Li 
X, Abouelatta A, Mansour A, Maria-Zamfirescu RC, Marsiglia M, Mateo-Camacho YS, McArthur M, McDonnell O, McHugh M, Moassefi M, Morsi SM, Muntenu A, Nandolia KK, Naqvi SR, Nikanpour Y, Alnoury M, Nouh AMA, Pappafava F, Patel MD, Petrucci S, Rawie E, Raymond S, Roohani B, Sabouhi S, Sanchez-Garcia LM, Shaked Z, Suthar PP, Altes T, Isufi E, Dhermesh Y, Gass J, Thacker J, Tarabishy AR, Turner B, Vacca S, Vilanilam GK, Warren D, Weiss D, Willms K, Worede F, Yousry S, Lerebo W, Aristizabal A, Karargyris A, Kassem H, Pati S, Sheller M, Bakas S, Rudie JD, Aboian M. The Brain Tumor Segmentation - Metastases (BraTS-METS) Challenge 2023: Brain Metastasis Segmentation on Pre-treatment MRI. ARXIV 2024:arXiv:2306.00838v2. [PMID: 37396600 PMCID: PMC10312806] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 07/04/2023]
Abstract
The translation of AI-generated brain metastases (BM) segmentation into clinical practice relies heavily on diverse, high-quality annotated medical imaging datasets. The BraTS-METS 2023 challenge has gained momentum for testing and benchmarking algorithms using rigorously annotated, internationally compiled real-world datasets. This study presents the results of the segmentation challenge and characterizes the challenging cases that impacted the performance of the winning algorithms. Untreated brain metastases on standard anatomic MRI sequences (T1, T2, FLAIR, post-gadolinium T1) from eight contributed international datasets were annotated in a stepwise method: published UNet algorithms, followed by a student annotator, a neuroradiologist, and a final approving neuroradiologist. Segmentations were ranked based on lesion-wise Dice and 95th percentile Hausdorff distance (HD95) scores. False positives (FP) and false negatives (FN) were rigorously penalized, receiving a score of 0 for Dice and a fixed penalty of 374 for HD95. The mean scores for the teams were calculated. Eight datasets comprising 1303 studies were annotated, with 402 studies (3076 lesions) released on Synapse as publicly available datasets to challenge competitors. Additionally, 31 studies (139 lesions) were held out for validation, and 59 studies (218 lesions) were used for testing. Segmentation accuracy was measured as rank across subjects, with the winning team achieving a LesionWise mean score of 7.9. The Dice score for the winning team was 0.65 ± 0.25. Common errors among the leading teams included false negatives for small lesions and misregistration of masks in space. The Dice scores and lesion detection rates of all algorithms diminished with decreasing tumor size, particularly for tumors smaller than 100 mm³. In conclusion, algorithms for BM segmentation require further refinement to balance high sensitivity in lesion detection with the minimization of false positives and negatives. 
The BraTS-METS 2023 challenge successfully curated well-annotated, diverse datasets and identified common errors, facilitating the translation of BM segmentation across varied clinical environments and providing personalized volumetric reports to patients undergoing BM treatment.
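The penalty scheme described above — each matched lesion contributes its own Dice and HD95, while every false positive or false negative contributes a Dice of 0 and a fixed HD95 penalty of 374 — can be sketched per study as follows. The lesion-matching step itself is omitted, and the function and variable names are illustrative assumptions, not the challenge's actual code:

```python
def lesionwise_scores(matched, n_false_pos, n_false_neg, hd95_penalty=374.0):
    """Aggregate lesion-wise Dice and HD95 for one study.

    matched: list of (dice, hd95) pairs for predicted lesions that were
    matched to a ground-truth lesion. Each false positive or false
    negative contributes dice=0 and the fixed HD95 penalty, so missing
    or hallucinating lesions drags both averages toward the worst case.
    """
    n = len(matched) + n_false_pos + n_false_neg
    if n == 0:
        # No lesions predicted and none present: treat as a perfect case.
        return 1.0, 0.0
    mean_dice = sum(d for d, _ in matched) / n
    mean_hd95 = (sum(h for _, h in matched)
                 + hd95_penalty * (n_false_pos + n_false_neg)) / n
    return mean_dice, mean_hd95
```

For example, one well-segmented lesion (Dice 0.8, HD95 10 mm) plus a single false positive halves the study's mean Dice to 0.4 — which is why the abstract emphasizes balancing sensitivity against false detections.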
Collapse
Affiliation(s)
| | - Anastasia Janas
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Ujjwal Baid
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, School of Medicine, Indiana University, Indianapolis, IN, USA
| | - Divya Ramakrishnan
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Rachit Saluja
- Department of Electrical and Computer Engineering, Cornell University and Cornell Tech, New York, NY, USA
- Department of Radiology, Weill Cornell Medicine, New York, NY, USA
| | - Nader Ashraf
- ImagineQuant, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
| | - Leon Jekel
- DKFZ Division of Translational Neurooncology at the WTZ, German Cancer Consortium, DKTK Partner Site, University Hospital Essen, Essen, Germany
| | - Raisa Amiruddin
- ImagineQuant, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | - Maruf Adewole
- Medical Artificial Intelligence Lab, Crestview Radiology, Lagos, Nigeria
| | | | - Udunna Anazodo
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Medical Artificial Intelligence (MAI) lab, Crestview Radiology, Lagos, Nigeria
| | - Sanjay Aneja
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT, USA
| | - Syed Muhammad Anwar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, D.C., USA
| | | | - Evan Calabrese
- Department of Radiology, Duke University Medical Center, Durham, NC, USA
| | - Veronica Chiang
- Department of Neurosurgery, Yale School of Medicine, New Haven, CT, USA
| | | | | | - Farouk Dako
- Center for Global Health, Perelman School of Medicine, University of Pennsylvania, PA, USA
| | | | - Ivan Ezhov
- Department of Informatics, Technical University Munich, Germany
| | - Ariana Familiar
- Children’s Hospital of Philadelphia, University of Pennsylvania, Philadelphia, PA, USA
| | - Keyvan Farahani
- Cancer Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
| | - Juan Eugenio Iglesias
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
| | - Zhifan Jiang
- Children’s National Hospital, Washington, D.C., USA
| | - Elaine Johanson
- PrecisionFDA, U.S. Food and Drug Administration, Silver Spring, MD, USA
| | - Anahita Fathi Kazerooni
- Department of Neurosurgery, University of Pennsylvania, Philadelphia, PA, USA
- Division of Neurosurgery, The Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Center for Data-Driven Discovery in Biomedicine, The Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | | | - Kiril Krantchev
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Dominic LaBella
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
| | - Koen Van Leemput
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark
| | - Hongwei Bran Li
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
| | - Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, D.C., USA
- Departments of Radiology and Pediatrics, George Washington University School of Medicine and Health Sciences, Washington, D.C., USA
| | | | - Xinyang Liu
- Children’s National Hospital, Washington, D.C., USA
| | - Nazanin Maleki
- ImagineQuant, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | | | - Bjoern H Menze
- Biomedical Image Analysis & Machine Learning, Department of Quantitative Biomedicine, University of Zurich, Switzerland
| | - Harrison Moy
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Klara Osenberg
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | | | | | - Russel Takeshi Shinohara
- Center for Clinical Epidemiology and Biostatistics, University of Pennsylvania, Philadelphia, PA, USA
| | | | | | - Yuri S. Velichko
- Northwestern University, Department of Radiology, Feinberg School of Medicine, Chicago, IL, USA
| | - Chunhao Wang
- Duke University School of Medicine, Durham, NC, USA
| | - Benedikt Wiestler
- Department of Neuroradiology, Technical University of Munich, Munich, Germany
| | | | - Umber Shafique
- Department of Radiology and Imaging Sciences, Indiana University, Indianapolis, IN, USA
| | - Klara Willms
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Arman Avesta
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | | | - Satrajit Chakrabarty
- Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, USA
- GE HealthCare, San Ramon, CA, USA
| | - Nicolo Gennaro
- Northwestern University, Department of Radiology, Feinberg School of Medicine, Chicago, IL, USA
| | | | - Manpreet Kaur
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Pamela LaMontagne
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
| | | | - Jan Lost
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Daniel S. Marcus
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
| | - Ryan Maresca
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT, USA
| | - Sarah Merkaj
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | | | | | - Marc von Reppert
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Aristeidis Sotiras
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Institute for Informatics, Data Science & Biostatistics, Washington University School of Medicine, St. Louis, MO, USA
| | | | - Niklas Tillmans
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | | | | | | | - Scott Floyd
- Duke University Medical Center, Durham, NC, USA
| | - Andreas Rauschecker
- Department of Radiology and Biomedical Imaging, University of California San Francisco, CA, USA
| | - Javier Villanueva-Meyer
- Department of Radiology and Biomedical Imaging, University of California San Francisco, CA, USA
| | - Irada Pflüger
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Jaeyoung Cho
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Martin Bendszus
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Gianluca Brugnara
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Justin Cramer
- Department of Radiology, Mayo Clinic, Phoenix, AZ, USA
| | | | | | - Anthony Kam
- Loyola University Medical Center, Hines, IL, USA
| | | | - Lillian Lai
- Department of Radiology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
| | | | - Fatima Memon
- Carolina Radiology Associates, Myrtle Beach, SC, USA
- McLeod Regional Medical Center, Florence, SC, USA
- Medical University of South Carolina, Charleston, SC, USA
| | | | | | - Tiffany Y. So
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong SAR
| | - Gerard Thompson
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Department of Clinical Neurosciences, NHS Lothian, Edinburgh, United Kingdom
| | - Lei Wu
- Department of Radiology, University of Washington, Seattle, WA, USA
| | - E. Brooke Schrickel
- Department of Radiology, Ohio State University College of Medicine, Columbus, OH, USA
| | - Anu Bansal
- Albert Einstein Medical Center, Hartford, CT, USA
| | - Frederik Barkhof
- Amsterdam UMC, location Vrije Universiteit, the Netherlands
- University College London, United Kingdom
| | | | - Sammy Chu
- Department of Radiology, University of Washington, Seattle, WA, USA
| | - Jason Druzgal
- Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, Virginia, USA
| | | | - Luciano Farage
- Centro Universitario Euro-Americana (UNIEURO), Brasília, DF, Brazil
| | - Fabricio Feltrin
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Amy Fong
- Southern District Health Board, Dunedin, New Zealand
| | - Steve H. Fung
- Department of Radiology, Houston Methodist, Houston, TX, USA
| | - R. Ian Gray
- University of Tennessee Medical Center, Knoxville, TN, USA
| | - Ichiro Ikuta
- Mayo Clinic, Department of Radiology, Section of Neuroradiology, Phoenix, AZ, USA
| | - Michael Iv
- Department of Radiology, Stanford University, Stanford, CA, USA
| | - Alida A. Postma
- Department of Radiology and Nuclear Medicine, Maastricht University Medical Center, Maastricht, the Netherlands
- Mental Health and Neuroscience Research Institute, Maastricht University, Maastricht, the Netherlands
| | - Amit Mahajan
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - David Joyner
- Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, VA, USA
| | - Chase Krumpelman
- Department of Radiology, Northwestern University, Chicago, IL, USA
| | | | | | - Mate E. Maros
- Departments of Neuroradiology & Biomedical Informatics, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Elka Miller
- Department of Diagnostic and Interventional Radiology, SickKids Hospital, University of Toronto, Canada
| | - Fanny Morón
- Department of Radiology, Baylor College of Medicine, Houston, TX, USA
| | | | - Ozkan Ozsarlak
- Department of Radiology, AZ Monica, Antwerp Area, Belgium
| | - Uresh Patel
- Medicolegal Imaging Experts LLC, Mercer Island, WA, USA
| | - Saurabh Rohatgi
- Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Atin Saha
- Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Weill Cornell Medical College, New York, NY, USA
| | - Anousheh Sayah
- MedStar Georgetown University Hospital, Washington, D.C., USA
| | - Eric D. Schwartz
- Department of Radiology, St.Elizabeth’s Medical Center, Boston, MA, USA
- Department of Radiology, Tufts University School of Medicine, Boston, MA, USA
| | - Robert Shih
- Walter Reed National Military Medical Center, Bethesda, MD, USA
| | | | | | | | - Jewels Valerie
- Department of Radiology, University of North Carolina School of Medicine, Chapel Hill, NC, USA
| | - Brent D. Weinberg
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
| | | | - Robert Young
- George Washington University, Washington, D.C., USA
| | - Vahe M. Zohrabian
- Northwell Health, Zucker Hofstra School of Medicine at Northwell, North Shore University Hospital, Hempstead, New York, NY, USA
| | - Aynur Azizova
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
| | | | - Pascal Fehringer
- Faculty of Medicine, Jena University Hospital, Friedrich Schiller University Jena, Jena, Germany
| | - Mohanad Ghonim
- Department of Radiology, Ain Shams University, Cairo, Egypt
| | - Mohamed Ghonim
- Department of Radiology, Ain Shams University, Cairo, Egypt
| | | | | | - Luca Pasquini
- Radiology Department, Memorial Sloan Kettering Cancer Center, New York City, NY, USA
| | | | - Gagandeep Singh
- Columbia University Irving Medical Center, New York, NY, USA
| | - Nico Sollmann
- Department of Diagnostic and Interventional Radiology, University Hospital Ulm, Ulm, Germany
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- TUM-Neuroimaging Center, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | | | | | - Nikolay Yordanov
- Faculty of Medicine, Medical University - Sofia, Sofia, Bulgaria
| | - Philipp Vollmuth
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Department of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | | | - Ajay Malhotra
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | | | - Francesco Dellepiane
- Functional and Interventional Neuroradiology Unit, Bambino Gesù Children’s Hospital, Rome, Italy
| | - Philipp Lohmann
- Institute of Neuroscience and Medicine (INM-4), Research Center Juelich, Juelich, Germany
- Department of Nuclear Medicine, University Hospital RWTH Aachen, Aachen, Germany
| | - Víctor M. Pérez-García
- Mathematical Oncology Laboratory & Department of Mathematics, University of Castilla-La Mancha, Spain
| | - Hesham Elhalawani
- Department of Radiation Oncology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
| | - Sanaria Al-Rubaiey
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
| | - Rui Duarte Armindo
- Department of Neuroradiology, Western Lisbon Hospital Centre (CHLO), Portugal
| | | | | | - Mohamed Badawy
- Diagnostic Radiology Department, Wayne State University, Detroit, MI
| | - Jeroen Bisschop
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
| | | | - Jan Bukatz
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
| | - Jim Chen
- Department of Radiology/Division of Neuroradiology, San Diego Veterans Administration Medical Center/UC San Diego Health System, San Diego, CA, USA
| | - Petra Cimflova
- Department of Radiology, University of Calgary, Calgary, Canada
| | - Felix Corr
- EDU Institute of Higher Education, Villa Bighi, Chaplain’s House, Kalkara, Malta
| | | | - Lisa Deptula
- Ross University School of Medicine, Bridgetown, Barbados
| | | | | | | | - Alexandra Frick
- Department of Neurosurgery, Vivantes Klinikum Neukölln, Berlin, Germany
| | | | | | - Fátima Hierro
- Neuroradiology Department, Pedro Hispano Hospital, Matosinhos, Portugal
| | - Rasmus Holmboe Dahl
- Department of Radiology, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
| | - Sarah Maria Jacobs
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | | | - Sedat G. Kandemirli
- Department of Radiology, University of Iowa Hospital and Clinics, Iowa City, IA, USA
| | - Katharina Kersting
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
| | - Laura Kida
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
| | - Sofia Kollia
- National and Kapodistrian University of Athens, School of Medicine, Athens, Greece
| | | | - Xiao Li
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
| | - Ahmed Abouelatta
- Department of Diagnostic and Interventional Radiology, Cairo University, Cairo, Egypt
| | | | - Ruxandra-Catrinel Maria-Zamfirescu
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
| | - Marcela Marsiglia
- Department of Radiology, Brigham and Women’s Hospital, Massachusetts General Hospital, Boston, MA, USA
| | | | - Mark McArthur
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, USA
| | | | - Maire McHugh
- Department of Radiology Manchester NHS Foundation Trust, North West School of Radiology, Manchester, United Kingdom
| | - Mana Moassefi
- Artificial Intelligence Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
| | | | | | - Khanak K. Nandolia
- Department of Radiodiagnosis, All India Institute of Medical Sciences Rishikesh, India
| | - Syed Raza Naqvi
- Windsor Regional Hospital, Western University, Ontario, Canada
| | - Yalda Nikanpour
- Artificial Intelligence & Informatics, Mayo Clinic, Rochester, MN, USA
| | - Mostafa Alnoury
- Department of Radiology, University of Pennsylvania, PA, USA
| | | | - Francesca Pappafava
- Department of Medicine and Surgery, Università degli Studi di Perugia, Italy
| | - Markand D. Patel
- Department of Neuroradiology, Imperial College Healthcare NHS Trust, London, United Kingdom
| | - Samantha Petrucci
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
| | - Eric Rawie
- Department of Radiology, Michigan Medicine, Ann Arbor, MI, USA
| | - Scott Raymond
- Department of Radiology, University of Vermont Medical Center, Burlington, VT, USA
| | - Borna Roohani
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Sadeq Sabouhi
- Isfahan University of Medical Sciences, Isfahan, Iran
| | | | - Zoe Shaked
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
| | | | - Talissa Altes
- Radiology Department, University of Missouri, Columbia, MO, USA
| | | | | | | | | | - Abdul Rahman Tarabishy
- Department of Neuroradiology, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV, USA
| | | | - Sebastiano Vacca
- University of Cagliari, School of Medicine and Surgery, Cagliari, Italy
| | - George K. Vilanilam
- Department of Radiology, University of Arkansas for Medical Sciences, Little Rock, AR, USA
| | - Daniel Warren
- Washington University School of Medicine in St. Louis, St. Louis, MO, USA
| | - David Weiss
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Klara Willms
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Fikadu Worede
- Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | | | - Wondwossen Lerebo
- Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | | | | | | | - Sarthak Pati
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, School of Medicine, Indiana University, Indianapolis, IN, USA
- Center For Federated Learning in Medicine, Indiana University, Indianapolis, IN, USA
- Medical Working Group, MLCommons, San Francisco, CA, USA
| | | | - Spyridon Bakas
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, School of Medicine, Indiana University, Indianapolis, IN, USA
- Department of Radiology and Imaging Sciences, School of Medicine, Indiana University, Indianapolis, IN, USA
- Department of Neurological Surgery, School of Medicine, Indiana University, Indianapolis, IN, USA
| | - Jeffrey D. Rudie
- Department of Radiology, University of California San Diego, CA, USA
- Department of Radiology, Scripps Clinic Medical Group, CA, USA
| | - Mariam Aboian
- Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| |
Collapse
|
14
|
Iqbal U, Hsu YHE, Celi LA, Li YCJ. Artificial intelligence in healthcare: Opportunities come with landmines. BMJ Health Care Inform 2024; 31:e101086. [PMID: 38839426 PMCID: PMC11163668 DOI: 10.1136/bmjhci-2024-101086] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2024] [Accepted: 05/02/2024] [Indexed: 06/07/2024] Open
Affiliation(s)
- Usman Iqbal
- School of Population Health, Faculty of Medicine and Health, University of New South Wales (UNSW), Sydney, NSW, Australia
- Global Health and Health Security Department, College of Public Health, Taipei Medical University, Taipei, Taiwan
- International Center for Health Information and Technology, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
| | - Yi-Hsin Elsa Hsu
- Biotechnology Executive Master's Degree in Business Administration (BioTech EMBA), Taipei Medical University, Taipei, Taiwan
- School of Healthcare Administration, College of Management, Taipei Medical University, Taipei, Taiwan
- International Ph.D. Program in BioTech and Healthcare Management, College of Management, Taipei Medical University, Taipei, Taiwan
- Department of Humanities in Medicine, College of Medicine, School of Medicine, Taipei Medical University, Taipei, Taiwan
| | - Leo Anthony Celi
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, USA
| | - Yu-Chuan Jack Li
- International Center for Health Information and Technology, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- Graduate Institute of Biomedical Informatics, College of Medical Science & Technology, Taipei Medical University, Taipei, Taiwan
- Department of Dermatology, Taipei Municipal Wanfang Hospital, Taipei, Taiwan
- The International Medical Informatics Association (IMIA), Zürich, Switzerland
| |
Collapse
|
15
|
Ose B, Sattar Z, Gupta A, Toquica C, Harvey C, Noheria A. Artificial Intelligence Interpretation of the Electrocardiogram: A State-of-the-Art Review. Curr Cardiol Rep 2024; 26:561-580. [PMID: 38753291 DOI: 10.1007/s11886-024-02062-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 04/17/2024] [Indexed: 06/26/2024]
Abstract
PURPOSE OF REVIEW Artificial intelligence (AI) is transforming electrocardiography (ECG) interpretation. AI diagnostics can reach beyond human capabilities, facilitate automated access to nuanced ECG interpretation, and expand the scope of cardiovascular screening in the population. AI can be applied to the standard 12-lead resting ECG and to single-lead ECGs in external monitors, implantable devices, and direct-to-consumer smart devices. We summarize the current state of the literature on AI-ECG. RECENT FINDINGS Rhythm classification was the first application of AI-ECG. Subsequently, AI-ECG models have been developed for screening structural heart disease, including hypertrophic cardiomyopathy, cardiac amyloidosis, aortic stenosis, pulmonary hypertension, and left ventricular systolic dysfunction. Further, AI models can predict future events such as development of systolic heart failure and atrial fibrillation. AI-ECG exhibits potential in acute cardiac events and non-cardiac applications, including acute pulmonary embolism, electrolyte abnormalities, monitoring drug therapy, sleep apnea, and predicting all-cause mortality. Many AI models in the domain of cardiac monitors and smart watches have received Food and Drug Administration (FDA) clearance for rhythm classification, while others, for identification of cardiac amyloidosis, pulmonary hypertension, and left ventricular dysfunction, have received breakthrough device designation. As AI-ECG models continue to be developed, in addition to regulatory oversight and monetization challenges, thoughtful clinical implementation is necessary to streamline workflows and to avoid information overload and the overwhelming of healthcare systems with false-positive results. Research demonstrating and validating improvements in healthcare efficiency and patient outcomes will be required before widespread adoption of any AI-ECG model.
Collapse
Affiliation(s)
- Benjamin Ose: The University of Kansas School of Medicine, Kansas City, KS, USA
- Zeeshan Sattar: Division of General and Hospital Medicine, The University of Kansas Medical Center, Kansas City, KS, USA
- Amulya Gupta: Department of Cardiovascular Medicine, The University of Kansas Medical Center, Kansas City, KS, USA; Program for AI & Research in Cardiovascular Medicine (PARC), The University of Kansas Medical Center, Kansas City, KS, USA
- Chris Harvey: Department of Cardiovascular Medicine, The University of Kansas Medical Center, Kansas City, KS, USA; Program for AI & Research in Cardiovascular Medicine (PARC), The University of Kansas Medical Center, Kansas City, KS, USA
- Amit Noheria: Department of Cardiovascular Medicine, The University of Kansas Medical Center, Kansas City, KS, USA; Program for AI & Research in Cardiovascular Medicine (PARC), The University of Kansas Medical Center, Kansas City, KS, USA
16
Nolin-Lapalme A, Corbin D, Tastet O, Avram R, Hussin JG. Advancing Fairness in Cardiac Care: Strategies for Mitigating Bias in Artificial Intelligence Models Within Cardiology. Can J Cardiol 2024:S0828-282X(24)00357-X. [PMID: 38735528 DOI: 10.1016/j.cjca.2024.04.026]
Abstract
In the dynamic field of medical artificial intelligence (AI), cardiology stands out for its technological advancements and clinical applications. In this review we explore the complex issue of data bias, specifically addressing biases encountered during the development and implementation of AI tools in cardiology. We dissect the origins and effects of these biases, which undermine the reliability and widespread applicability of AI tools in health care. Using a case study, we highlight the complexities involved in addressing these biases from a clinical viewpoint. The goal of this review is to equip researchers and clinicians with the practical knowledge needed to identify, understand, and mitigate these biases, advocating for the creation of AI solutions that are not just technologically sound but also fair and effective for all patients.
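One basic step in the bias-mitigation workflow this review describes is auditing a trained model for performance gaps across demographic subgroups. The sketch below is a hypothetical illustration on synthetic data (the cohort, features, and group variable are all made up, not drawn from any cardiology dataset): train a classifier, then compare discrimination (AUC) within each subgroup rather than reporting a single aggregate number.

```python
# Hypothetical sketch: subgroup performance audit on synthetic data.
# A large AUC gap between groups flags a model needing further mitigation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic cohort: 2,000 patients, 5 features, a binary outcome, and a
# binary demographic attribute that correlates with one feature.
X = rng.normal(size=(2000, 5))
group = rng.integers(0, 2, size=2000)
X[:, 0] += 0.8 * group                      # feature shifted by group
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=1.0, size=2000) > 0).astype(int)

# Simple train/test split: first 1,500 for training, last 500 held out.
model = LogisticRegression().fit(X[:1500], y[:1500])
scores = model.predict_proba(X[1500:])[:, 1]
y_test, g_test = y[1500:], group[1500:]

# Stratified AUC: report discrimination separately per subgroup.
for g in (0, 1):
    mask = g_test == g
    auc = roc_auc_score(y_test[mask], scores[mask])
    print(f"group {g}: AUC = {auc:.3f}")
```

In practice the same stratified reporting extends to calibration and error rates, and is only the detection half of the problem; the mitigation strategies the review surveys (reweighting, constrained training, post-hoc threshold adjustment) start from audits like this one.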
Affiliation(s)
- Alexis Nolin-Lapalme: Department of Medicine, Montreal Heart Institute, Montreal, Quebec, Canada; Faculté de Médecine, Université de Montréal, Montreal, Quebec, Canada; Mila - Québec AI Institute, Montreal, Quebec, Canada; Heartwise (heartwise.ai), Montreal Heart Institute, Montreal, Quebec, Canada
- Denis Corbin: Department of Medicine, Montreal Heart Institute, Montreal, Quebec, Canada
- Olivier Tastet: Department of Medicine, Montreal Heart Institute, Montreal, Quebec, Canada
- Robert Avram: Department of Medicine, Montreal Heart Institute, Montreal, Quebec, Canada; Faculté de Médecine, Université de Montréal, Montreal, Quebec, Canada; Heartwise (heartwise.ai), Montreal Heart Institute, Montreal, Quebec, Canada
- Julie G Hussin: Department of Medicine, Montreal Heart Institute, Montreal, Quebec, Canada; Faculté de Médecine, Université de Montréal, Montreal, Quebec, Canada; Mila - Québec AI Institute, Montreal, Quebec, Canada
17
Garbarino S, Bragazzi NL. Evaluating the effectiveness of artificial intelligence-based tools in detecting and understanding sleep health misinformation: Comparative analysis using Google Bard and OpenAI ChatGPT-4. J Sleep Res 2024:e14210. [PMID: 38577714 DOI: 10.1111/jsr.14210]
Abstract
This study evaluates the performance of two major artificial intelligence-based tools (ChatGPT-4 and Google Bard) in debunking sleep-related myths. Specifically, the research assessed 20 sleep misconceptions on 5-point Likert scales for falseness and public health significance, comparing the responses of the artificial intelligence tools with expert opinions. Google Bard correctly identified 19 out of 20 statements as false (95.0% accuracy), not differing significantly from ChatGPT-4 (85.0% accuracy; Fisher's exact test p = 0.615). Google Bard's falseness ratings averaged 4.25 ± 0.70, with moderately negative skewness (-0.42) and kurtosis (-0.83), suggesting a distribution with fewer extreme values than ChatGPT-4's. For public health significance, Google Bard's mean score was 2.4 ± 0.80, with skewness of 0.36 and kurtosis of -0.07, indicating a more normal distribution than ChatGPT-4's. Inter-rater agreement between Google Bard and sleep experts had intra-class correlation coefficients of 0.58 for falseness and 0.69 for public health significance, showing moderate alignment (p = 0.065 and p = 0.014, respectively). Text-mining analysis revealed that Google Bard focused on practical advice, while ChatGPT-4 concentrated on theoretical aspects of sleep. Readability analysis suggested Google Bard's responses were more accessible, at an 8th-grade reading level versus ChatGPT-4's 12th-grade complexity. The study demonstrates the potential of artificial intelligence in public health education, especially in sleep health, and underscores the importance of accurate, reliable artificial intelligence-generated information. It calls for further collaboration between artificial intelligence developers, sleep health professionals, and educators to enhance the effectiveness of sleep health promotion.
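The accuracy comparison above can be reproduced from the reported counts alone: 19/20 correct for Google Bard versus 17/20 (85%) for ChatGPT-4 form a 2×2 contingency table, and Fisher's exact test on it is non-significant. This sketch assumes the underlying counts are exactly those implied by the percentages; the p-value it yields (≈0.605) is close to, but not identical with, the paper's reported 0.615, a discrepancy that can arise from the exact-test variant used.

```python
# Fisher's exact test on the reported correct/incorrect counts.
from scipy.stats import fisher_exact

#                 correct  incorrect
table = [[19, 1],   # Google Bard  (19/20 = 95.0%)
         [17, 3]]   # ChatGPT-4    (17/20 = 85.0%)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# p is far above 0.05: no significant accuracy difference between the tools.
```

With only 20 items per tool, the test has little power to detect a two-item difference, which is why such comparisons are reported as "not differing" rather than as evidence of equivalence.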
Affiliation(s)
- Sergio Garbarino: Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics and Maternal, Child Sciences (DINOGMI), University of Genoa, Genoa, Italy; Post-Graduate School of Occupational Health, Università Cattolica del Sacro Cuore, Rome, Italy
- Nicola Luigi Bragazzi: Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics and Maternal, Child Sciences (DINOGMI), University of Genoa, Genoa, Italy; Laboratory for Industrial and Applied Mathematics (LIAM), Department of Mathematics and Statistics, York University, Toronto, Ontario, Canada; Human Nutrition Unit (HNU), Department of Food and Drugs, University of Parma, Parma, Italy
18
Afreen S, Krohannon A, Purkayastha S, Janga SC. Datawiz-IN: Summer Research Experience for Health Data Science Training. Research Square 2024:rs.3.rs-4132507. [PMID: 38585996 PMCID: PMC10996780 DOI: 10.21203/rs.3.rs-4132507/v1]
Abstract
Background Good science necessitates diverse perspectives to guide its progress. This study introduces Datawiz-IN, an educational initiative that fosters diversity and inclusion in AI skills training and research. Supported by a National Institutes of Health R25 grant from the National Library of Medicine, Datawiz-IN provided a comprehensive data science and machine learning research experience to students from underrepresented minority groups in medicine and computing. Methods The program evaluation triangulated quantitative and qualitative data to measure representation, innovation, and experience. Diversity gains were quantified using demographic data analysis. Computational projects were systematically reviewed for research productivity. A mixed-methods survey gauged participant perspectives on skills gained, support quality, challenges faced, and overall sentiments. Results The first cohort of 14 students in Summer 2023 demonstrated quantifiable increases in representation, with greater participation of women and minorities, evidencing the efficacy of proactive efforts to engage talent typically excluded from these fields. The student interns conducted innovative projects that elucidated disease mechanisms, enhanced clinical decision support systems, and analyzed health disparities. Conclusion By illustrating how purposeful inclusion catalyzes innovation, Datawiz-IN offers a model for developing AI systems and research that reflect true diversity. Realizing the full societal benefits of AI requires sustaining pathways for historically excluded voices to help shape the field.
Affiliation(s)
- Sadia Afreen: Department of BioHealth Informatics, Indiana University - Purdue University Indianapolis, Indianapolis, IN 46202, USA
- Alexander Krohannon: Department of BioHealth Informatics, Indiana University - Purdue University Indianapolis, Indianapolis, IN 46202, USA
- Saptarshi Purkayastha: Department of BioHealth Informatics, Indiana University - Purdue University Indianapolis, Indianapolis, IN 46202, USA
- Sarath Chandra Janga: Department of BioHealth Informatics, Indiana University - Purdue University Indianapolis, Indianapolis, IN 46202, USA
19
Al Mohammad B, Aldaradkeh A, Gharaibeh M, Reed W. Assessing radiologists' and radiographers' perceptions on artificial intelligence integration: opportunities and challenges. Br J Radiol 2024; 97:763-769. [PMID: 38273675 PMCID: PMC11027289 DOI: 10.1093/bjr/tqae022]
Abstract
OBJECTIVES The objective of this study was to evaluate radiologists' and radiographers' opinions and perspectives on artificial intelligence (AI) and its integration into the radiology department. Additionally, we investigated the most common challenges and barriers that radiologists and radiographers face when learning about AI. METHODS A nationwide, online descriptive cross-sectional survey was distributed to radiologists and radiographers working in hospitals and medical centres from May 29, 2023 to July 30, 2023. The questionnaire examined the participants' opinions, feelings, and predictions regarding AI and its applications in the radiology department. Descriptive statistics were used to report the participants' demographics and responses. Five-point Likert-scale data were reported using divergent stacked bar graphs to highlight any central tendencies. RESULTS Responses were collected from 258 participants, revealing a positive attitude towards implementing AI. Both radiologists and radiographers predicted breast imaging would be the subspecialty most impacted by the AI revolution. MRI, mammography, and CT were identified as the primary modalities of significant importance in the field of AI application. The major barrier encountered by radiologists and radiographers when learning about AI was the lack of mentorship, guidance, and support from experts. CONCLUSION Participants demonstrated a positive attitude towards learning about AI and implementing it in radiology practice. However, radiologists and radiographers encounter several barriers when learning about AI, such as the absence of support and direction from experienced professionals. ADVANCES IN KNOWLEDGE Radiologists and radiographers reported several barriers to AI learning, the most significant being the lack of mentorship and guidance from experts, followed by the lack of funding and investment in new technologies.
Affiliation(s)
- Badera Al Mohammad: Department of Allied Medical Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid 22110, Jordan
- Afnan Aldaradkeh: Department of Allied Medical Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid 22110, Jordan
- Monther Gharaibeh: Department of Special Surgery, Faculty of Medicine, The Hashemite University, Zarqa 13133, Jordan
- Warren Reed: Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, Sydney, NSW 2006, Australia
20
Vrudhula A, Kwan AC, Ouyang D, Cheng S. Machine Learning and Bias in Medical Imaging: Opportunities and Challenges. Circ Cardiovasc Imaging 2024; 17:e015495. [PMID: 38377237 PMCID: PMC10883605 DOI: 10.1161/circimaging.123.015495]
Abstract
Bias in health care has been well documented and results in disparate and worsened outcomes for at-risk groups. Medical imaging plays a critical role in facilitating patient diagnoses but involves multiple sources of bias, including factors related to access to imaging modalities, acquisition of images, and assessment (ie, interpretation) of imaging data. Machine learning (ML) applied to diagnostic imaging has demonstrated the potential to improve the quality of imaging-based diagnosis and the precision of measuring imaging-based traits. Algorithms can leverage subtle information not visible to the human eye to detect underdiagnosed conditions or derive new disease phenotypes by linking imaging features with clinical outcomes, all while mitigating cognitive bias in interpretation. Importantly, however, the application of ML to diagnostic imaging has the potential to either reduce or propagate bias. Understanding the potential gain as well as the potential risks requires an understanding of how and what ML models learn. Common risks of propagating bias can arise from unbalanced training, suboptimal architecture design or selection, and uneven application of models. Notwithstanding these risks, ML may yet be applied to improve the yield of imaging across all 3 A's (access, acquisition, and assessment) for all patients. In this review, we present a framework for understanding the balance of opportunities and challenges for minimizing bias in medical imaging, how ML may improve current approaches to imaging, and what specific design considerations should be made as part of efforts to maximize the quality of health care for all.
Affiliation(s)
- Amey Vrudhula: Icahn School of Medicine at Mount Sinai, New York; Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center
- Alan C Kwan: Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center
- David Ouyang: Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center; Division of Artificial Intelligence in Medicine, Department of Medicine, Cedars-Sinai Medical Center
- Susan Cheng: Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center
21
Nakayama LF, Restrepo D, Matos J, Ribeiro LZ, Malerbi FK, Celi LA, Regatieri CS. BRSET: A Brazilian Multilabel Ophthalmological Dataset of Retina Fundus Photos. medRxiv 2024:2024.01.23.24301660. [PMID: 38343827 PMCID: PMC10854338 DOI: 10.1101/2024.01.23.24301660]
Abstract
Introduction The Brazilian Multilabel Ophthalmological Dataset (BRSET) addresses the scarcity of publicly available ophthalmological datasets in Latin America. BRSET comprises 16,266 color fundus retinal photos from 8,524 Brazilian patients, aiming to enhance data representativeness and serving as a research and teaching tool. It contains sociodemographic information, enabling investigations into differential model performance across demographic groups. Methods Data from three São Paulo outpatient centers yielded demographic and medical information from electronic records, including nationality, age, sex, clinical history, insulin use, and duration of diabetes diagnosis. A retinal specialist labeled images for anatomical features (optic disc, blood vessels, macula), quality control (focus, illumination, image field, artifacts), and pathologies (e.g., diabetic retinopathy). Diabetic retinopathy was graded using the International Clinical Diabetic Retinopathy and Scottish Diabetic Retinopathy Grading systems. Validation used Dino V2 Base for feature extraction, with 70% training and 30% testing subsets. Support Vector Machines (SVM) and Logistic Regression (LR) were employed with weighted training. Performance metrics included area under the receiver operating curve (AUC) and macro F1-score. Results BRSET comprises 65.1% Canon CR2 and 34.9% Nikon NF5050 images. Of the patients, 61.8% are female, and the average age is 57.6 years. Diabetic retinopathy affected 15.8% of patients, across a spectrum of disease severity. Anatomically, 20.2% showed abnormal optic discs, 4.9% abnormal blood vessels, and 28.8% abnormal macula. Models were trained on BRSET for three prediction tasks: "diabetes diagnosis", "sex classification", and "diabetic retinopathy diagnosis". Discussion BRSET is the first multilabel ophthalmological dataset in Brazil and Latin America. It provides an opportunity for investigating model biases by evaluating performance across demographic groups. The model performance on the three prediction tasks demonstrates the value of the dataset for external validation and for teaching medical computer vision to learners in Latin America using locally relevant data sources.
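The validation recipe described in the methods (frozen foundation-model embeddings, a 70/30 split, class-weighted linear classifiers, AUC and macro F1) can be sketched in a few lines. Everything below is synthetic: random vectors stand in for the Dino V2 embeddings (with a reduced dimensionality for brevity), and the label is an artificial imbalanced outcome with roughly the diabetic-retinopathy prevalence reported above, not the actual BRSET data.

```python
# Sketch of an embeddings -> weighted linear classifier validation pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

n, d = 1000, 64                        # synthetic stand-ins for image embeddings
X = rng.normal(size=(n, d))
# Imbalanced binary label (~16-19% positive), weakly linked to the first
# feature so the task is learnable.
y = (X[:, 0] + rng.normal(scale=2.0, size=n) > 2.0).astype(int)

# 70% training / 30% testing, stratified to keep the class ratio.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.30, random_state=0, stratify=y)

# class_weight="balanced" plays the role of the weighted training step,
# compensating for the rare positive class.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)

prob = clf.predict_proba(X_te)[:, 1]
print(f"AUC      = {roc_auc_score(y_te, prob):.3f}")
print(f"macro F1 = {f1_score(y_te, clf.predict(X_te), average='macro'):.3f}")
```

The same skeleton extends naturally to the bias analyses the dataset is designed for: computing these metrics separately per sex, age band, or camera device instead of on the pooled test set.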
Affiliation(s)
- Luis Filipe Nakayama: Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil; Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- David Restrepo: Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America; Telematics Department, University of Cauca, Popayán, Cauca, Colombia
- João Matos: Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America; Faculty of Engineering of University of Porto, Porto, Portugal
- Lucas Zago Ribeiro: Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
- Fernando Korn Malerbi: Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
- Leo Anthony Celi: Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America; Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Caio Saito Regatieri: Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
22
Terranova C, Cestonaro C, Fava L, Cinquetti A. AI and professional liability assessment in healthcare. A revolution in legal medicine? Front Med (Lausanne) 2024; 10:1337335. [PMID: 38259835 PMCID: PMC10800912 DOI: 10.3389/fmed.2023.1337335]
Abstract
The adoption of advanced artificial intelligence (AI) systems in healthcare is transforming the healthcare-delivery landscape. Artificial intelligence may enhance patient safety and improve healthcare outcomes, but it presents notable ethical and legal dilemmas. Moreover, as AI streamlines the analysis of the many factors relevant to malpractice claims, including informed consent, adherence to standards of care, and causation, the evaluation of professional liability might also benefit from its use. Beginning with an analysis of the basic steps in assessing professional liability, this article examines the potential new medical-legal issues that an expert witness may encounter when analyzing malpractice cases, and the potential integration of AI in this context. These changes, related to the use of integrated AI, will necessitate efforts on the part of judges, experts, and clinicians, and may require new legislative regulation. A new kind of expert witness will likely be necessary in the evaluation of professional liability cases. On the one hand, artificial intelligence will support the expert witness; on the other, it will introduce specific elements into the activities of healthcare workers, elements that will necessitate an expert witness with a specialized cultural background. Examining the steps of professional liability assessment indicates that the likely path for AI in legal medicine is as a collaborative and integrated tool. Combining AI with human judgment in these assessments can enhance comprehensiveness and fairness. However, a cautious and balanced approach is imperative to prevent complete automation in this field.
Affiliation(s)
- Claudio Terranova: Legal Medicine and Toxicology, Department of Cardiac, Thoracic, Vascular Sciences and Public Health, University of Padua, Padua, Italy
23
Leo E, Stanzione A, Miele M, Cuocolo R, Sica G, Scaglione M, Camera L, Maurea S, Mainenti PP. Artificial Intelligence and Radiomics for Endometrial Cancer MRI: Exploring the Whats, Whys and Hows. J Clin Med 2023; 13:226. [PMID: 38202233 PMCID: PMC10779496 DOI: 10.3390/jcm13010226]
Abstract
Endometrial cancer (EC) is intricately linked to obesity and diabetes, which are widespread risk factors. Medical imaging, especially magnetic resonance imaging (MRI), plays a major role in EC assessment, particularly for disease staging. However, the diagnostic performance of MRI varies in the detection of clinically relevant prognostic factors (e.g., deep myometrial invasion and metastatic lymph node assessment). To address these challenges and enhance the value of MRI, radiomics and artificial intelligence (AI) algorithms emerge as promising tools with the potential to impact EC risk assessment, treatment planning, and prognosis prediction. These advanced post-processing techniques allow medical images to be analysed quantitatively, providing novel insights into cancer characteristics beyond conventional qualitative image evaluation. However, despite the growing interest and research efforts, the integration of radiomics and AI into EC management is still far from clinical practice and represents a possible perspective rather than an actual reality. This review focuses on the state of radiomics and AI in EC MRI, emphasizing risk stratification and prognostic factor prediction, and aiming to illuminate potential advancements and address existing challenges in the field.
Affiliation(s)
- Elisabetta Leo: Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Arnaldo Stanzione: Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Mariaelena Miele: Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Renato Cuocolo: Department of Medicine, Surgery and Dentistry, University of Salerno, 84081 Baronissi, Italy
- Giacomo Sica: Department of Radiology, Monaldi Hospital, Azienda Ospedaliera dei Colli, 80131 Naples, Italy
- Mariano Scaglione: Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy
- Luigi Camera: Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Simone Maurea: Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Pier Paolo Mainenti: Institute of Biostructures and Bioimaging of the National Council of Research (CNR), 80131 Naples, Italy
24
Guo W, Lv C, Guo M, Zhao Q, Yin X, Zhang L. Innovative applications of artificial intelligence in zoonotic disease management. Science in One Health 2023; 2:100045. [PMID: 39077042 PMCID: PMC11262289 DOI: 10.1016/j.soh.2023.100045]
Abstract
Zoonotic diseases, transmitted between humans and animals, pose a substantial threat to global public health. In recent years, artificial intelligence (AI) has emerged as a transformative tool in the fight against diseases. This comprehensive review discusses the innovative applications of AI in the management of zoonotic diseases, including disease prediction, early diagnosis, drug development, and future prospects. AI-driven predictive models leverage extensive datasets to predict disease outbreaks and transmission patterns, thereby facilitating proactive public health responses. Early diagnosis benefits from AI-powered diagnostic tools that expedite pathogen identification and containment. Furthermore, AI technologies have accelerated drug discovery by identifying potential drug targets and optimizing candidate drugs. This review addresses these advancements, while also examining the promising future of AI in zoonotic disease control. We emphasize the pivotal role of AI in revolutionizing our approach to managing zoonotic diseases and highlight its potential to safeguard the health of both humans and animals on a global scale.
Affiliation(s)
- Wenqiang Guo: Department of Animal Nutrition and Feed Science, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan 430070, China
- Chenrui Lv: Department of Animal Nutrition and Feed Science, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan 430070, China
- Meng Guo: College of Veterinary Medicine, Henan Agricultural University, Zhengzhou 450046, China
- Qiwei Zhao: Department of Animal Nutrition and Feed Science, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan 430070, China
- Xinyi Yin: Department of Animal Nutrition and Feed Science, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan 430070, China
- Li Zhang: Department of Animal Nutrition and Feed Science, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan 430070, China
25
Naqa IE, Drukker K. AI in imaging and therapy: innovations, ethics, and impact - introductory editorial. Br J Radiol 2023; 96:20239004. [PMID: 38011226 PMCID: PMC10546442 DOI: 10.1259/bjr.20239004]