1
Martin E, Cook AG, Frost SM, Turner AW, Chen FK, McAllister IL, Nolde JM, Schlaich MP. Ocular biomarkers: useful incidental findings by deep learning algorithms in fundus photographs. Eye (Lond) 2024. PMID: 38734746. DOI: 10.1038/s41433-024-03085-2.
Abstract
BACKGROUND/OBJECTIVES Artificial intelligence can assist with ocular image analysis for screening and diagnosis, but it is not yet capable of autonomous full-spectrum screening. Hypothetically, false-positive results may hold unrealized screening potential, arising from signals that persist despite training and/or from ambiguous signals such as those caused by biomarker overlap or high comorbidity. The study aimed to explore the potential to detect clinically useful incidental ocular biomarkers by screening fundus photographs of hypertensive adults using deep learning algorithms developed for diabetic retinopathy. SUBJECTS/METHODS Patients referred for treatment-resistant hypertension were imaged at a hospital unit in Perth, Australia, between 2016 and 2022. One 45° colour fundus photograph was selected for each of the 433 participants imaged and was processed by all three deep learning algorithms. Two expert retinal specialists graded all false-positive results for diabetic retinopathy in non-diabetic participants. RESULTS Of the 29 non-diabetic participants misclassified as positive for diabetic retinopathy, 28 (97%) had clinically useful retinal biomarkers. The models designed to screen for fewer diseases captured more incidental disease. All three algorithms showed a positive correlation between the severity of hypertensive retinopathy and misclassified diabetic retinopathy. CONCLUSIONS The results suggest that diabetic deep learning models may be responsive to hypertensive and other clinically useful retinal biomarkers within an at-risk, hypertensive cohort. The observation that models trained for fewer diseases captured more incidental pathology increases confidence in signalling hypotheses aligned with using self-supervised learning to develop autonomous comprehensive screening. Meanwhile, non-referable and false-positive outputs of other deep learning screening models could be explored for immediate clinical use in other populations.
Affiliation(s)
- Eve Martin
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia.
- School of Population and Global Health, The University of Western Australia, Crawley, Australia.
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia.
- Australian e-Health Research Centre, Floreat, WA, Australia.
- Angus G Cook
- School of Population and Global Health, The University of Western Australia, Crawley, Australia
- Shaun M Frost
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia
- Australian e-Health Research Centre, Floreat, WA, Australia
- Angus W Turner
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Fred K Chen
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, East Melbourne, VIC, Australia
- Ophthalmology Department, Royal Perth Hospital, Perth, Australia
- Ian L McAllister
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Janis M Nolde
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
- Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
- Markus P Schlaich
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
- Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
2
Zhang Y, Li S, Wu W, Zhao Y, Han J, Tong C, Luo N, Zhang K. Machine-learning-based models to predict cardiovascular risk using oculomics and clinic variables in KNHANES. BioData Min 2024; 17:12. PMID: 38644481. PMCID: PMC11034020. DOI: 10.1186/s13040-024-00363-3.
Abstract
BACKGROUND Recent research has found a strong correlation between the triglyceride-glucose (TyG) index or the atherogenic index of plasma (AIP) and cardiovascular disease (CVD) risk. However, research on non-invasive, rapid prediction of cardiovascular risk is lacking. We aimed to develop and validate a machine-learning model for predicting cardiovascular risk based on variables encompassing clinical questionnaires and oculomics. METHODS We collected data from the Korean National Health and Nutrition Examination Survey (KNHANES). The training dataset (80% of the 2008-2011 KNHANES data) was used for machine learning model development, with internal validation using the remaining 20%. An external validation dataset from 2012 assessed the model's capacity to predict the TyG-index or AIP in new cases. The final dataset included 32,122 participants. Machine learning models using 25 algorithms were trained on oculomics measurements and clinical questionnaires to predict the ranges of the TyG-index and AIP. The area under the receiver operating characteristic curve (AUC), accuracy, precision, recall, and F1 score were used to evaluate the performance of our machine learning models. RESULTS Based on large-scale cohort studies, we set TyG-index cut-off points at 8.0, 8.75 (upper one-third of values), and 8.93 (upper one-fourth of values), and AIP cut-offs at 0.318 and 0.34. Values surpassing these thresholds indicated elevated cardiovascular risk. For the best-performing algorithm, TyG-index cut-offs of 8.0, 8.75, and 8.93 yielded internal validation AUCs of 0.812, 0.873, and 0.911, respectively; external validation AUCs were 0.809, 0.863, and 0.901. For AIP at 0.34, internal and external validation achieved similar AUCs of 0.849 and 0.842; performance was slightly lower at the 0.318 cut-off, with AUCs of 0.844 and 0.836. Significant gender-based variation was noted for the TyG-index at 8.0 (male AUC = 0.832, female AUC = 0.790) and 8.75 (male AUC = 0.874, female AUC = 0.862), and for AIP at 0.318 (male AUC = 0.853, female AUC = 0.825) and 0.34 (male AUC = 0.858, female AUC = 0.831). Gender similarity in AUC (male AUC = 0.907 versus female AUC = 0.906) was observed only at the TyG-index cut-off of 8.93. CONCLUSION We have established a simple, effective, non-invasive machine learning model with good clinical value for predicting cardiovascular risk in the general population.
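The two risk indices that this study's models predict are not defined in the abstract; the sketch below assumes the standard definitions (TyG = ln(fasting triglycerides [mg/dL] × fasting glucose [mg/dL] / 2); AIP = log10(triglycerides / HDL-C), both in mmol/L) and applies the study's cut-off logic. Function names and default cut-offs are illustrative:

```python
import math

def tyg_index(tg_mg_dl, glucose_mg_dl):
    """Triglyceride-glucose index: ln(TG [mg/dL] x fasting glucose [mg/dL] / 2)."""
    return math.log(tg_mg_dl * glucose_mg_dl / 2)

def aip(tg_mmol_l, hdl_mmol_l):
    """Atherogenic index of plasma: log10(TG / HDL-C), both in mmol/L."""
    return math.log10(tg_mmol_l / hdl_mmol_l)

def elevated_cv_risk(tyg, aip_value, tyg_cutoff=8.0, aip_cutoff=0.34):
    """Per the study, values surpassing a cut-off indicate elevated CVD risk."""
    return tyg > tyg_cutoff or aip_value > aip_cutoff

# Example: TG 150 mg/dL with glucose 100 mg/dL gives TyG = ln(7500) ~ 8.92,
# above the 8.0 cut-off, so this profile would be flagged as elevated risk.
flagged = elevated_cv_risk(tyg_index(150, 100), aip(1.7, 1.0))
```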
Affiliation(s)
- Yuqi Zhang
- School of Computer Science & Engineering, Beihang University, Beijing, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- Sijin Li
- Department of Cardiology, the Eighth Affiliated Hospital, Sun Yat-sen University, Shenzhen, China
- Department of Cardiology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Weijie Wu
- Department of Cardiology, the Eighth Affiliated Hospital, Sun Yat-sen University, Shenzhen, China
- Yanqing Zhao
- Department of Interventional Radiology & Vascular Surgery, Peking University Third Hospital, Beijing, China
- Jintao Han
- Department of Interventional Radiology & Vascular Surgery, Peking University Third Hospital, Beijing, China
- Chao Tong
- School of Computer Science & Engineering, Beihang University, Beijing, China.
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China.
- Niansang Luo
- Department of Cardiology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China.
- Kun Zhang
- Department of Cardiology, The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen, China.
3
Lang O, Yaya-Stupp D, Traynis I, Cole-Lewis H, Bennett CR, Lyles CR, Lau C, Irani M, Semturs C, Webster DR, Corrado GS, Hassidim A, Matias Y, Liu Y, Hammel N, Babenko B. Using generative AI to investigate medical imagery models and datasets. EBioMedicine 2024; 102:105075. PMID: 38565004. PMCID: PMC10993140. DOI: 10.1016/j.ebiom.2024.105075.
Abstract
BACKGROUND AI models have shown promise in performing many medical imaging tasks. However, our ability to explain what signals these models have learned is severely lacking. Explanations are needed to increase the trust of doctors in AI-based models, especially in domains where AI prediction capabilities surpass those of humans. Moreover, such explanations could enable novel scientific discovery by uncovering signals in the data that are not yet known to experts. METHODS In this paper, we present a workflow for generating hypotheses about which visual signals in images are correlated with a classification model's predictions for a given task. This approach leverages an automatic visual explanation algorithm followed by interdisciplinary expert review. We propose the following four steps: (i) train a classifier to perform the given task, to assess whether the imagery indeed contains signals relevant to the task; (ii) train a StyleGAN-based image generator with an architecture that enables guidance by the classifier ("StylEx"); (iii) automatically detect, extract, and visualize the top visual attributes to which the classifier is sensitive. For visualization, we independently modify each of these attributes to generate counterfactual visualizations for a set of images (i.e., what the image would look like with the attribute increased or decreased); (iv) formulate hypotheses for the underlying mechanisms, to stimulate future research. Specifically, we present the discovered attributes and corresponding counterfactual visualizations to an interdisciplinary panel of experts so that hypotheses can account for social and structural determinants of health (e.g., whether the attributes correspond to known patho-physiological or socio-cultural phenomena, or could be novel discoveries).
FINDINGS To demonstrate the broad applicability of our approach, we present results on eight prediction tasks across three medical imaging modalities: retinal fundus photographs, external eye photographs, and chest radiographs. We showcase examples where many of the automatically learned attributes clearly capture clinically known features (e.g., types of cataract, enlarged heart), and demonstrate automatically learned confounders that arise from factors beyond physiological mechanisms (e.g., chest X-ray underexposure is correlated with the classifier predicting abnormality, and eye makeup is correlated with the classifier predicting low hemoglobin levels). We further show that our method reveals a number of physiologically plausible attributes not previously described in the literature (e.g., differences in the fundus associated with self-reported sex). INTERPRETATION Our approach enables hypothesis generation via attribute visualizations and has the potential to help researchers better understand and assess AI-based models, extract new knowledge from them, and debug and design better datasets. Though not designed to infer causality, the attributes generated by our framework can capture phenomena beyond physiology or pathophysiology, reflecting the real-world nature of healthcare delivery and socio-cultural factors; interdisciplinary perspectives are therefore critical in these investigations. Finally, we will release code to help researchers train their own StylEx models, analyze their predictive tasks of interest, and use the methodology presented in this paper for responsible interpretation of the revealed attributes. FUNDING Google.
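The counterfactual step of the workflow (perturb one latent attribute, regenerate the image, and measure how the classifier's prediction moves) can be illustrated with a minimal toy sketch. The linear generator and classifier below are stand-ins, not the actual StylEx components (which use a StyleGAN generator and a deep classifier); all shapes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a latent "attribute" space, a generator mapping latents
# to images, and a trained classifier scoring those images.
W_gen = rng.normal(size=(16, 8))   # toy generator: image = W_gen @ z
w_clf = rng.normal(size=16)        # toy linear classifier on images

def classify(image):
    """Sigmoid score of the toy classifier for one image vector."""
    return 1.0 / (1.0 + np.exp(-w_clf @ image))

def counterfactual_effect(z, attr_idx, delta=1.0):
    """Perturb one latent attribute and return the change in classifier score."""
    base = classify(W_gen @ z)
    z_cf = z.copy()
    z_cf[attr_idx] += delta
    return classify(W_gen @ z_cf) - base

z = rng.normal(size=8)
# Rank attributes by how strongly a unit increase moves the prediction,
# mirroring the detection of attributes the classifier is sensitive to.
effects = [abs(counterfactual_effect(z, i)) for i in range(8)]
top_attr = int(np.argmax(effects))
```

In the paper's setting the analogue of `top_attr` is the attribute whose counterfactual visualizations are shown to the expert panel.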
Affiliation(s)
- Ilana Traynis
- Work Done at Google Via Advanced Clinical, Deerfield, IL, USA
- Courtney R Lyles
- Google, Mountain View, CA, USA; University of California San Francisco, Department of Medicine, San Francisco, CA, USA
- Yun Liu
- Google, Mountain View, CA, USA
4
Tian T, Aaron RE, DuNova AY, Jendle JH, Kerr D, Cengiz E, Drincic A, Pickup JC, Chen KY, Schwartz N, Muchmore DB, Akturk HK, Levy CJ, Schmidt S, Bellazzi R, Wu AHB, Spanakis EK, Najafi B, Chase JG, Seley JJ, Klonoff DC. Diabetes Technology Meeting 2023. J Diabetes Sci Technol 2024. PMID: 38528741. DOI: 10.1177/19322968241235205.
Abstract
Diabetes Technology Society hosted its annual Diabetes Technology Meeting from November 1 to November 4, 2023. Meeting topics included digital health; metrics of glycemia; the integration of glucose and insulin data into the electronic health record; technologies for insulin pumps, blood glucose monitors, and continuous glucose monitors; diabetes drugs and analytes; skin physiology; regulation of diabetes devices and drugs; and data science, artificial intelligence, and machine learning. A live demonstration of a personalized carbohydrate dispenser for people with diabetes was presented.
Affiliation(s)
- Tiffany Tian
- Diabetes Technology Society, Burlingame, CA, USA
- Johan H Jendle
- School of Medicine and Health, Institute of Medical Sciences, Örebro University, Örebro, Sweden
- Eda Cengiz
- School of Medicine, University of California, San Francisco, San Francisco, CA, USA
- John C Pickup
- King's College London School of Medicine, London, UK
- Kong Y Chen
- National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD, USA
- Halis K Akturk
- Barbara Davis Center for Diabetes, University of Colorado, Aurora, CO, USA
- Carol J Levy
- Division of Endocrinology, Diabetes, and Metabolism, Icahn School of Medicine at Mount Sinai, New York City, NY, USA
- Alan H B Wu
- University of California, San Francisco, San Francisco, CA, USA
- Elias K Spanakis
- Baltimore VA Medical Center and School of Medicine, University of Maryland, Baltimore, MD, USA
- Jane Jeffrie Seley
- Division of Endocrinology, Diabetes & Metabolism, Weill Cornell Medicine, New York City, NY, USA
- David C Klonoff
- Diabetes Research Institute, Mills-Peninsula Medical Center, San Mateo, CA, USA
5
Huang Y, Cheung CY, Li D, Tham YC, Sheng B, Cheng CY, Wang YX, Wong TY. AI-integrated ocular imaging for predicting cardiovascular disease: advancements and future outlook. Eye (Lond) 2024; 38:464-472. PMID: 37709926. PMCID: PMC10858189. DOI: 10.1038/s41433-023-02724-4.
Abstract
Cardiovascular disease (CVD) remains the leading cause of death worldwide. Assessing CVD risk plays an essential role in identifying individuals at higher risk and enables the implementation of targeted intervention strategies, leading to reduced CVD prevalence and improved patient survival. The ocular vasculature, particularly the retinal vasculature, has emerged as a potential means of CVD risk stratification because of the anatomical similarities and physiological characteristics it shares with other vital organs, such as the brain and heart. The integration of artificial intelligence (AI) into ocular imaging has the potential to overcome limitations of traditional semi-automated image analysis, including inefficiency and manual measurement errors. Furthermore, AI techniques may uncover novel and subtle features that contribute to the identification of ocular biomarkers associated with CVD. This review provides a comprehensive overview of advances in AI-based ocular image analysis for predicting CVD, including the prediction of CVD risk factors, the replacement of traditional CVD biomarkers (e.g., the CT-measured coronary artery calcium score), and the prediction of symptomatic CVD events. The review covers a range of ocular imaging modalities, including colour fundus photography, optical coherence tomography, and optical coherence tomography angiography, as well as other image types such as external eye photographs. Additionally, the review addresses the current limitations of AI research in this field and discusses the challenges of translating AI algorithms into clinical practice.
Affiliation(s)
- Yu Huang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Dawei Li
- College of Future Technology, Peking University, Beijing, China
- Yih Chung Tham
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ching Yu Cheng
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore.
- Tsinghua Medicine, Tsinghua University, Beijing, China.
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China.
6
Shickel B, Bihorac A. The dawn of multimodal artificial intelligence in nephrology. Nat Rev Nephrol 2024; 20:79-80. PMID: 38097775. DOI: 10.1038/s41581-023-00799-6.
Affiliation(s)
- Benjamin Shickel
- Division of Nephrology, Hypertension and Renal Transplantation, Department of Medicine, University of Florida, Gainesville, FL, USA
- Intelligent Clinical Care Center (IC3), University of Florida, Gainesville, FL, USA
- Azra Bihorac
- Division of Nephrology, Hypertension and Renal Transplantation, Department of Medicine, University of Florida, Gainesville, FL, USA.
- Intelligent Clinical Care Center (IC3), University of Florida, Gainesville, FL, USA.
7
DeBuc DC. AI for identification of systemic biomarkers from external eye photos: a promising field in the oculomics revolution. Lancet Digit Health 2023; 5:e249-e250. PMID: 36966119. DOI: 10.1016/s2589-7500(23)00047-x.
Affiliation(s)
- Delia Cabrera DeBuc
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA.