1
Li Z, Yin S, Wang S, Wang Y, Qiang W, Jiang J. Transformative applications of oculomics-based AI approaches in the management of systemic diseases: A systematic review. J Adv Res 2024:S2090-1232(24)00537-X. [PMID: 39542135 DOI: 10.1016/j.jare.2024.11.018]
Abstract
BACKGROUND: Systemic diseases, such as cardiovascular and cerebrovascular conditions, pose significant global health challenges due to their high mortality rates. Early identification and intervention in systemic diseases can substantially enhance their prognosis. However, diagnosing systemic diseases often necessitates complex, expensive, and invasive tests, posing challenges in their timely detection. Therefore, simple, cost-effective, and non-invasive methods for the management (such as screening, diagnosis, and monitoring) of systemic diseases are needed to reduce associated comorbidities and mortality rates.
AIM OF THE REVIEW: This systematic review examines the application of artificial intelligence (AI) algorithms in managing systemic diseases by analyzing ophthalmic features (oculomics) obtained from convenient, affordable, and non-invasive ophthalmic imaging.
KEY SCIENTIFIC CONCEPTS OF REVIEW: Our analysis demonstrates the promising accuracy of AI in predicting systemic diseases. Subgroup analysis reveals promising capabilities of oculomics-based AI for disease staging, while caution is warranted due to the possible overestimation of AI capabilities in low-quality studies. These systems are cost-effective and safe, with high rates of acceptance among patients and clinicians. This review underscores the potential of oculomics-based AI approaches in revolutionizing the management of systemic diseases.
Affiliation(s)
- Zhongwen Li
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China.
- Shiqi Yin
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China
- Shihong Wang
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China
- Yangyang Wang
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China
- Wei Qiang
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China
- Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
2
Liao W, Voldman J. Learning and diSentangling patient static information from time-series Electronic hEalth Records (STEER). PLOS Digital Health 2024; 3:e0000640. [PMID: 39432484 PMCID: PMC11493250 DOI: 10.1371/journal.pdig.0000640]
Abstract
Recent work in machine learning for healthcare has raised concerns about patient privacy and algorithmic fairness. Previous work has shown that self-reported race can be predicted from medical data that does not explicitly contain racial information. However, the extent of such data identification is unknown, and we lack ways to develop models whose outcomes are minimally affected by this information. Here we systematically investigated the ability of time-series electronic health record data to predict patient static information. We found that not only the raw time-series data but also learned representations from machine learning models can be used to predict a variety of static information, with area under the receiver operating characteristic curve as high as 0.851 for biological sex, 0.869 for binarized age, and 0.810 for self-reported race. Such high predictive performance extends to various comorbidity factors and persists even when the model was trained for different tasks, on different cohorts, and with different model architectures and databases. Given the privacy and fairness concerns these findings pose, we developed a variational autoencoder-based approach that learns a structured latent space to disentangle patient-sensitive attributes from time-series data. Our work thoroughly investigates the ability of machine learning models to encode patient static information from time-series electronic health records and introduces a general approach to protect patient-sensitive information for downstream tasks.
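The probing experiment described above can be illustrated with a small sketch. This is not the authors' code: the embeddings and labels below are synthetic stand-ins for learned EHR representations and a binary static attribute, and the probe is a plain logistic regression scored by AUROC.

```python
# Minimal sketch (not the authors' implementation): probing a learned
# time-series representation for a static attribute such as biological sex.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))                            # stand-in for learned EHR representations
sex = (embeddings[:, 0] + rng.normal(size=1000) > 0).astype(int)    # synthetic binary label

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, sex, test_size=0.3, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])
print(f"Probe AUROC for the static attribute: {auroc:.3f}")
```

A high probe AUROC on real embeddings would indicate that the representation still encodes the sensitive attribute, which is the phenomenon the paper's disentangling approach is designed to suppress.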
Affiliation(s)
- Wei Liao
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Joel Voldman
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
3
Ghenciu LA, Dima M, Stoicescu ER, Iacob R, Boru C, Hațegan OA. Retinal Imaging-Based Oculomics: Artificial Intelligence as a Tool in the Diagnosis of Cardiovascular and Metabolic Diseases. Biomedicines 2024; 12:2150. [PMID: 39335664 PMCID: PMC11430496 DOI: 10.3390/biomedicines12092150]
Abstract
Cardiovascular diseases (CVDs) are a major cause of mortality globally, emphasizing the need for early detection and effective risk assessment to improve patient outcomes. Advances in oculomics, which utilize the relationship between retinal microvascular changes and systemic vascular health, offer a promising non-invasive approach to assessing CVD risk. Retinal fundus imaging and optical coherence tomography/angiography (OCT/OCTA) provide critical information for early diagnosis, with retinal vascular parameters such as vessel caliber, tortuosity, and branching patterns identified as key biomarkers. Given the large volume of data generated during routine eye exams, there is a growing need for automated tools to aid in diagnosis and risk prediction. The study demonstrates that AI-driven analysis of retinal images can accurately predict cardiovascular risk factors, cardiovascular events, and metabolic diseases. The reviewed models achieved area under the curve (AUC) values ranging from 0.71 to 0.87, sensitivity between 71% and 89%, and specificity between 40% and 70%, surpassing traditional diagnostic methods in some cases. This approach highlights the potential of retinal imaging as a key component in personalized medicine, enabling more precise risk assessment and earlier intervention. It not only aids in detecting vascular abnormalities that may precede cardiovascular events but also offers a scalable, non-invasive, and cost-effective solution for widespread screening. However, the article also emphasizes the need for further research to standardize imaging protocols and validate the clinical utility of these biomarkers across different populations. By integrating oculomics into routine clinical practice, healthcare providers could significantly enhance early detection and management of systemic diseases, ultimately improving patient outcomes. Fundus image analysis thus represents a valuable tool in the future of precision medicine and cardiovascular health management.
Affiliation(s)
- Laura Andreea Ghenciu
- Department of Functional Sciences, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania
- Center for Translational Research and Systems Medicine, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania
- Mirabela Dima
- Department of Neonatology, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania
- Emil Robert Stoicescu
- Field of Applied Engineering Sciences, Specialization Statistical Methods and Techniques in Health and Clinical Research, Faculty of Mechanics, 'Politehnica' University Timisoara, Mihai Viteazul Boulevard No. 1, 300222 Timisoara, Romania
- Department of Radiology and Medical Imaging, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania
- Research Center for Pharmaco-Toxicological Evaluations, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania
- Roxana Iacob
- Field of Applied Engineering Sciences, Specialization Statistical Methods and Techniques in Health and Clinical Research, Faculty of Mechanics, 'Politehnica' University Timisoara, Mihai Viteazul Boulevard No. 1, 300222 Timisoara, Romania
- Doctoral School, "Victor Babes" University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square 2, 300041 Timisoara, Romania
- Department of Anatomy and Embriology, 'Victor Babes' University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania
- Casiana Boru
- Discipline of Anatomy and Embriology, Medicine Faculty, "Vasile Goldis" Western University of Arad, Revolution Boulevard 94, 310025 Arad, Romania
- Ovidiu Alin Hațegan
- Discipline of Anatomy and Embriology, Medicine Faculty, "Vasile Goldis" Western University of Arad, Revolution Boulevard 94, 310025 Arad, Romania
4
An S, Squirrell D. Validation of neuron activation patterns for artificial intelligence models in oculomics. Sci Rep 2024; 14:20940. [PMID: 39251780 PMCID: PMC11383926 DOI: 10.1038/s41598-024-71517-w]
Abstract
Recent advancements in artificial intelligence (AI) have prompted researchers to expand into the field of oculomics: the association between the retina and systemic health. Unlike the well-recognized retinal features used by conventional AI models, the retinal phenotypes that most oculomics models rely on are more subtle. Consequently, applying conventional tools, such as saliency maps, to understand how oculomics models arrive at their inference is problematic and open to bias. We hypothesized that neuron activation patterns (NAPs) could be an alternative way to interpret oculomics models, but currently, most existing implementations focus on failure diagnosis. In this study, we designed a novel NAP framework to interpret an oculomics model. We then applied our framework to an AI model predicting systolic blood pressure from fundus images in the United Kingdom Biobank dataset. We found that the NAP generated from our framework was correlated with the clinically relevant endpoint of cardiovascular risk. Our NAP was also able to discern two biologically distinct groups among participants who were assigned the same predicted systolic blood pressure. These results demonstrate the feasibility of our proposed NAP framework for gaining deeper insights into the functioning of oculomics models. Further work is required to validate these results on external datasets.
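For readers unfamiliar with activation-based interpretation, the sketch below shows one generic way to collect a neuron activation pattern from an intermediate layer with a forward hook. The tiny network and random images are placeholders; the authors' actual NAP framework and model are not reproduced here.

```python
# Minimal sketch (assumptions, not the authors' framework): extracting a
# per-image activation pattern from an intermediate layer of an image model.
import torch
import torch.nn as nn

model = nn.Sequential(                              # stand-in for a fundus-image CNN
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

activations = {}
def hook(module, inputs, output):
    activations["layer"] = output.detach()          # keep the layer's output for this batch

handle = model[1].register_forward_hook(hook)       # hook the ReLU after the conv layer

images = torch.randn(4, 3, 64, 64)                  # synthetic batch of "fundus images"
_ = model(images)
nap = activations["layer"].mean(dim=(2, 3))         # mean activation per channel, per image
print(nap.shape)                                    # torch.Size([4, 8])
handle.remove()
```

Patterns collected this way can then be compared across patients or correlated with clinical endpoints, which is the spirit of the analysis described in the abstract.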
Affiliation(s)
- Songyang An
- School of Optometry and Vision Science, The University of Auckland, 85 Park Rd, Grafton, Auckland, 1023, New Zealand.
- Toku Eyes Limited NZ, Auckland, New Zealand.
5
Martin E, Cook AG, Frost SM, Turner AW, Chen FK, McAllister IL, Nolde JM, Schlaich MP. Ocular biomarkers: useful incidental findings by deep learning algorithms in fundus photographs. Eye (Lond) 2024; 38:2581-2588. [PMID: 38734746 PMCID: PMC11385472 DOI: 10.1038/s41433-024-03085-2]
Abstract
BACKGROUND/OBJECTIVES: Artificial intelligence can assist with ocular image analysis for screening and diagnosis, but it is not yet capable of autonomous full-spectrum screening. Hypothetically, false-positive results may have unrealized screening potential, arising from signals that persist despite training and/or from ambiguous signals such as biomarker overlap or high comorbidity. The study aimed to explore the potential to detect clinically useful incidental ocular biomarkers by screening fundus photographs of hypertensive adults using diabetic deep learning algorithms.
SUBJECTS/METHODS: Patients referred for treatment-resistant hypertension were imaged at a hospital unit in Perth, Australia, between 2016 and 2022. A single 45° colour fundus photograph selected for each of the 433 imaged participants was processed by three deep learning algorithms. Two expert retinal specialists graded all false-positive results for diabetic retinopathy in non-diabetic participants.
RESULTS: Of the 29 non-diabetic participants misclassified as positive for diabetic retinopathy, 28 (97%) had clinically useful retinal biomarkers. The models designed to screen for fewer diseases captured more incidental disease. All three algorithms showed a positive correlation between the severity of hypertensive retinopathy and misclassified diabetic retinopathy.
CONCLUSIONS: The results suggest that diabetic deep learning models may be responsive to hypertensive and other clinically useful retinal biomarkers within an at-risk, hypertensive cohort. Observing that models trained for fewer diseases captured more incidental pathology increases confidence in signalling hypotheses aligned with using self-supervised learning to develop autonomous comprehensive screening. Meanwhile, non-referable and false-positive outputs of other deep learning screening models could be explored for immediate clinical use in other populations.
Affiliation(s)
- Eve Martin
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia.
- School of Population and Global Health, The University of Western Australia, Crawley, Australia.
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia.
- Australian e-Health Research Centre, Floreat, WA, Australia.
- Angus G Cook
- School of Population and Global Health, The University of Western Australia, Crawley, Australia
- Shaun M Frost
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia
- Australian e-Health Research Centre, Floreat, WA, Australia
- Angus W Turner
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Fred K Chen
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, East Melbourne, VIC, Australia
- Ophthalmology Department, Royal Perth Hospital, Perth, Australia
- Ian L McAllister
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Janis M Nolde
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
- Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
- Markus P Schlaich
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
- Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
6
Arora A, Lawton T. Artificial intelligence in the NHS: Moving from ideation to implementation. Future Healthc J 2024; 11:100183. [PMID: 39371532 PMCID: PMC11452829 DOI: 10.1016/j.fhj.2024.100183]
Affiliation(s)
- Anmol Arora
- School of Clinical Medicine, University of Cambridge, Cambridge CB2 0SP, UK
- Department of Oncology, University College London, London WC1E 6DD, UK
- Tom Lawton
- Department of Computer Science, University of York, York YO10 5GH, UK
- Improvement Academy, Bradford Teaching Hospitals NHS Foundation Trust, Bradford BD9 6RJ, UK
7
Rom Y, Aviv R, Cohen GY, Friedman YE, Ianchulev T, Dvey-Aharon Z. Diabetes detection from non-diabetic retinopathy fundus images using deep learning methodology. Heliyon 2024; 10:e36592. [PMID: 39258195 PMCID: PMC11386038 DOI: 10.1016/j.heliyon.2024.e36592]
Abstract
Diabetes is one of the leading causes of morbidity and mortality in the United States and worldwide. Traditionally, diabetes detection from retinal images has been performed only using relevant retinopathy indications. This research aimed to develop an artificial intelligence (AI) machine learning model that can detect the presence of diabetes from fundus imagery of eyes without any diabetic eye disease. A machine learning algorithm was trained on the EyePACS dataset, consisting of 47,076 images. Patients were also divided into cohorts based on disease duration, each cohort consisting of patients diagnosed within the timeframe in question (e.g., 15 years) and healthy participants. The algorithm achieved an area under the receiver operating characteristic curve (AUC) of 0.86 for detecting diabetes per patient visit when averaged across camera models, and an AUC of 0.83 for detecting diabetes per image. The results suggest that diabetes may be diagnosed non-invasively using fundus imagery alone. This may enable diabetes diagnosis at the point of care, as well as in other accessible venues, facilitating the diagnosis of many undiagnosed people with diabetes.
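The distinction between per-image and per-visit evaluation can be illustrated as follows. The data are synthetic and the aggregation rule (averaging image-level scores within a visit) is an assumption for illustration, not necessarily the authors' exact procedure.

```python
# Minimal sketch (hypothetical data, not the authors' pipeline): comparing
# per-image AUC with per-visit AUC, where the per-visit score averages the
# predictions of all images acquired at the same visit.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "visit_id": np.repeat(np.arange(200), 2),            # two images per visit
    "diabetes": np.repeat(rng.integers(0, 2, 200), 2),   # visit-level ground truth
})
df["score"] = 0.6 * df["diabetes"] + rng.normal(0, 0.5, len(df))  # synthetic model output

per_image_auc = roc_auc_score(df["diabetes"], df["score"])
per_visit = df.groupby("visit_id").agg(diabetes=("diabetes", "first"), score=("score", "mean"))
per_visit_auc = roc_auc_score(per_visit["diabetes"], per_visit["score"])
print(f"per-image AUC={per_image_auc:.3f}, per-visit AUC={per_visit_auc:.3f}")
```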
Affiliation(s)
- Yovel Rom
- AEYE Health Inc., New York City, NY, USA
- Gal Yaakov Cohen
- The Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer, Israel
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Yehudit Eden Friedman
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Division of Endocrinology, Diabetes and Metabolism, Sheba Medical Center, Ramat Gan, Israel
- Tsontcho Ianchulev
- AEYE Health Inc., New York City, NY, USA
- New York Eye and Ear of Mount Sinai, Icahn School of Medicine, NY, USA
8
Lotter W. Acquisition parameters influence AI recognition of race in chest x-rays and mitigating these factors reduces underdiagnosis bias. Nat Commun 2024; 15:7465. [PMID: 39198519 PMCID: PMC11358468 DOI: 10.1038/s41467-024-52003-3]
Abstract
A core motivation for the use of artificial intelligence (AI) in medicine is to reduce existing healthcare disparities. Yet, recent studies have demonstrated two distinct findings: (1) AI models can show performance biases in underserved populations, and (2) these same models can be directly trained to recognize patient demographics, such as predicting self-reported race from medical images alone. Here, we investigate how these findings may be related, with an end goal of reducing a previously identified underdiagnosis bias. Using two popular chest x-ray datasets, we first demonstrate that technical parameters related to image acquisition and processing influence AI models trained to predict patient race, where these results partly reflect underlying biases in the original clinical datasets. We then find that mitigating the observed differences through a demographics-independent calibration strategy reduces the previously identified bias. While many factors likely contribute to AI bias and demographics prediction, these results highlight the importance of carefully considering data acquisition and processing parameters in AI development and healthcare equity more broadly.
Affiliation(s)
- William Lotter
- Department of Data Science, Dana-Farber Cancer Institute, Boston, MA, USA.
- Department of Pathology, Brigham & Women's Hospital, Boston, MA, USA.
- Harvard Medical School, Boston, MA, USA.
9
Grzybowski A, Jin K, Zhou J, Pan X, Wang M, Ye J, Wong TY. Retina Fundus Photograph-Based Artificial Intelligence Algorithms in Medicine: A Systematic Review. Ophthalmol Ther 2024; 13:2125-2149. [PMID: 38913289 PMCID: PMC11246322 DOI: 10.1007/s40123-024-00981-4]
Abstract
We conducted a systematic review of research on artificial intelligence (AI) for retinal fundus photographic images. We highlighted the use of various AI algorithms, including deep learning (DL) models, for application in ophthalmic and non-ophthalmic (i.e., systemic) disorders. We found that the use of AI algorithms for the interpretation of retinal images, compared with clinical data and physician experts, represents an innovative solution with demonstrated superior accuracy in identifying many ophthalmic disorders (e.g., diabetic retinopathy (DR), age-related macular degeneration (AMD), and optic nerve disorders) and non-ophthalmic disorders (e.g., dementia and cardiovascular disease). A significant amount of clinical and imaging data is available for this research, creating the potential to incorporate AI and DL for automated analysis. AI has the potential to transform healthcare by improving accuracy, speed, and workflow, lowering cost, increasing access, reducing mistakes, and transforming healthcare worker education and training.
Affiliation(s)
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznań , Poland.
- Kai Jin
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Jingxin Zhou
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Xiangji Pan
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Meizhu Wang
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Juan Ye
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China.
- Tien Y Wong
- School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
10
Xu Y, Liu H, Sun R, Wang H, Huo Y, Wang N, Hu M. Deep learning for predicting circular retinal nerve fiber layer thickness from fundus photographs and diagnosing glaucoma. Heliyon 2024; 10:e33813. [PMID: 39040392 PMCID: PMC11261845 DOI: 10.1016/j.heliyon.2024.e33813]
Abstract
Purpose: This study aimed to propose a new deep learning (DL) approach to automatically predict the retinal nerve fiber layer thickness (RNFLT) around the optic disc in fundus photographs, trained on optical coherence tomography (OCT), and to diagnose glaucoma based on the predicted comprehensive RNFLT information.
Methods: A total of 1403 pairs of fundus photographs and OCT RNFLT scans from 1403 eyes of 1196 participants were included. A residual deep neural network was trained to predict the RNFLT for each local image in a fundus photograph, and an RNFLT report was then generated from the local images. Two indicators were designed based on the generated report. A support vector machine (SVM) algorithm was used to diagnose glaucoma based on the two indicators.
Results: A strong correlation was found between the predicted and actual RNFLT values on local images. On three testing datasets, the Pearson r was 0.893, 0.850, and 0.831, respectively, and the mean absolute error of the prediction was 14.345, 17.780, and 19.250 μm, respectively. The area under the receiver operating characteristic curve for discriminating glaucomatous from healthy eyes was 0.860 (95% confidence interval, 0.799-0.921).
Conclusions: We established a novel local image-based DL approach that provides comprehensive quantitative information on RNFLT in fundus photographs, which was used to diagnose glaucoma. In addition, training a deep neural network on local images to predict objective detailed information in fundus photographs provides a new paradigm for the diagnosis of ophthalmic diseases.
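The two headline regression metrics, Pearson r and mean absolute error, can be computed as in the sketch below; the thickness values are synthetic stand-ins for predicted and OCT-measured RNFLT, not the study's data.

```python
# Minimal sketch (synthetic values): evaluating a thickness-regression model
# with the two metrics reported above, Pearson correlation and MAE.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
actual_rnflt = rng.uniform(60, 140, size=500)                  # μm, ground-truth OCT values
predicted_rnflt = actual_rnflt + rng.normal(0, 15, size=500)   # μm, model predictions

r, _ = pearsonr(actual_rnflt, predicted_rnflt)
mae = mean_absolute_error(actual_rnflt, predicted_rnflt)
print(f"Pearson r = {r:.3f}, MAE = {mae:.1f} μm")
```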
Affiliation(s)
- Yongli Xu
- College of Statistics and Data Science, Beijing University of Technology, Beijing, China
- Department of Mathematics, Beijing University of Chemical Technology, Beijing, China
- Hanruo Liu
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology & Visual Science Key Lab, Beijing, China
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
- Run Sun
- Department of Mathematics, Beijing University of Chemical Technology, Beijing, China
- Huaizhou Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology & Visual Science Key Lab, Beijing, China
- Yanjiao Huo
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology & Visual Science Key Lab, Beijing, China
- Ningli Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology & Visual Science Key Lab, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University & Capital Medical University, Beijing Tongren Hospital, Beijing, China
- Man Hu
- Department of Ophthalmology, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing, China
11
Carrillo-Larco RM. Recognition of Patient Gender: A Machine Learning Preliminary Analysis Using Heart Sounds from Children and Adolescents. Pediatr Cardiol 2024:10.1007/s00246-024-03561-2. [PMID: 38937337 DOI: 10.1007/s00246-024-03561-2]
Abstract
Research has shown that X-rays and fundus images can classify gender, age group, and race, raising concerns about bias and fairness in medical AI applications. However, the potential for physiological sounds to classify sociodemographic traits has not been investigated. Exploring this gap is crucial for understanding the implications and ensuring fairness in the field of medical sound analysis. We aimed to develop classifiers to determine gender (men/women) from heart sound recordings using machine learning (ML) in a data-driven analysis. We utilized the open-access CirCor DigiScope Phonocardiogram Dataset obtained from cardiac screening programs in Brazil; volunteers were under 21 years of age. Each participant completed a questionnaire and underwent a clinical examination, including electronic auscultation at four cardiac points: aortic (AV), mitral (MV), pulmonary (PV), and tricuspid (TV). We used Mel-frequency cepstral coefficients (MFCCs) to develop the ML classifiers, extracting 10 MFCCs from each auscultation sound recording for each patient. In sensitivity analyses, we additionally extracted 20, 30, 40, and 50 MFCCs. The most effective gender classifier was developed using PV recordings (AUC ROC = 70.3%). The second best came from MV recordings (AUC ROC = 58.8%). AV and TV recordings produced classifiers with AUC ROC values of 56.4% and 56.1%, respectively. Using more MFCCs did not substantially improve the classifiers. It is possible to classify males and females using phonocardiogram data. As health-related audio recordings become more prominent in ML applications, research is required to explore whether these recordings contain signals that could distinguish sociodemographic features.
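A generic version of the described pipeline (MFCC features followed by an SVM, evaluated by AUC ROC) might look like the sketch below. The audio is synthetic noise and the sampling rate and label coding are assumptions, so it only illustrates the structure of the analysis, not the paper's code or results.

```python
# Minimal sketch (synthetic audio, not the study's data): 10 MFCCs per
# heart-sound recording via librosa, then an SVM classifier scored by AUC ROC.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
sr = 4000                                        # assumed phonocardiogram sampling rate

def mfcc_features(signal, n_mfcc=10):
    """Summarize one auscultation recording as its mean MFCC vector."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    return mfcc.mean(axis=1)

# Synthetic stand-ins for PV-point recordings and gender labels.
signals = [rng.normal(size=sr * 3).astype(np.float32) for _ in range(60)]
labels = rng.integers(0, 2, size=60)             # 0 = male, 1 = female (example coding)

X = np.vstack([mfcc_features(s) for s in signals])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          random_state=0, stratify=labels)
clf = SVC(probability=True).fit(X_tr, y_tr)
print("AUC ROC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```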
Affiliation(s)
- Rodrigo M Carrillo-Larco
- Hubert Department of Global Health, Rollins School of Public Health, Emory University, Atlanta, GA, USA.
12
Patterson EJ, Bounds AD, Wagner SK, Kadri-Langford R, Taylor R, Daly D. Oculomics: A Crusade Against the Four Horsemen of Chronic Disease. Ophthalmol Ther 2024; 13:1427-1451. [PMID: 38630354 PMCID: PMC11109082 DOI: 10.1007/s40123-024-00942-x]
Abstract
Chronic, non-communicable diseases present a major barrier to living a long and healthy life. In many cases, early diagnosis can facilitate prevention, monitoring, and treatment efforts, improving patient outcomes. There is therefore a critical need to make screening techniques as accessible, unintimidating, and cost-effective as possible. The association between ocular biomarkers and systemic health and disease (oculomics) presents an attractive opportunity for detection of systemic diseases, as ophthalmic techniques are often relatively low-cost, fast, and non-invasive. In this review, we highlight the key associations between structural biomarkers in the eye and the four globally leading causes of morbidity and mortality: cardiovascular disease, cancer, neurodegenerative disease, and metabolic disease. We observe that neurodegenerative disease is a particularly promising target for oculomics, with biomarkers detected in multiple ocular structures. Cardiovascular disease biomarkers are present in the choroid, retinal vasculature, and retinal nerve fiber layer, and metabolic disease biomarkers are present in the eyelid, tear fluid, lens, and retinal vasculature. In contrast, only the tear fluid emerged as a promising ocular target for the detection of cancer. The retina is a rich source of oculomics data, the analysis of which has been enhanced by artificial intelligence-based tools. Although not all biomarkers are disease-specific, limiting their current diagnostic utility, future oculomics research will likely benefit from combining data from various structures to improve specificity, as well as active design, development, and optimization of instruments that target specific disease signatures, thus facilitating differential diagnoses.
Affiliation(s)
- Siegfried K Wagner
- Moorfields Eye Hospital NHS Trust, 162 City Road, London, EC1V 2PD, UK
- UCL Institute of Ophthalmology, University College London, 11-43 Bath Street, London, EC1V 9EL, UK
- Robin Taylor
- Occuity, The Blade, Abbey Square, Reading, Berkshire, RG1 3BE, UK
- Dan Daly
- Occuity, The Blade, Abbey Square, Reading, Berkshire, RG1 3BE, UK
13
Saeed A, Hadoux X, van Wijngaarden P. Hyperspectral retinal imaging biomarkers of ocular and systemic diseases. Eye (Lond) 2024:10.1038/s41433-024-03135-9. [PMID: 38778136 DOI: 10.1038/s41433-024-03135-9]
Abstract
Hyperspectral imaging is a frontier in the field of medical imaging technology. It enables the simultaneous collection of spectroscopic and spatial data. Structural and physiological information encoded in these data can be used to identify and localise typically elusive biomarkers. Studies of retinal hyperspectral imaging have provided novel insights into disease pathophysiology and new ways of non-invasive diagnosis and monitoring of retinal and systemic diseases. This review provides a concise overview of recent advances in retinal hyperspectral imaging.
Affiliation(s)
- Abera Saeed
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, 3002, VIC, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, 3002, VIC, Australia
- Xavier Hadoux
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, 3002, VIC, Australia
- Peter van Wijngaarden
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, 3002, VIC, Australia.
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, 3002, VIC, Australia.
14
Vaghefi E, Squirrell D, Yang S, An S, Xie L, Durbin MK, Hou H, Marshall J, Shreibati J, McConnell MV, Budoff M. Development and validation of a deep-learning model to predict 10-year atherosclerotic cardiovascular disease risk from retinal images using the UK Biobank and EyePACS 10K datasets. Cardiovascular Digital Health Journal 2024; 5:59-69. [PMID: 38765618 PMCID: PMC11096659 DOI: 10.1016/j.cvdhj.2023.12.004]
Abstract
Background: Atherosclerotic cardiovascular disease (ASCVD) is a leading cause of death globally, and early detection of high-risk individuals is essential for initiating timely interventions. The authors aimed to develop and validate a deep learning (DL) model to predict an individual's elevated 10-year ASCVD risk score based on retinal images and limited demographic data.
Methods: The study used 89,894 retinal fundus images from 44,176 UK Biobank participants (96% non-Hispanic White, 5% diabetic) to train and test the DL model. The DL model was developed using retinal images plus age, race/ethnicity, and sex at birth to predict an individual's 10-year ASCVD risk score, using the pooled cohort equation (PCE) as the ground truth. The model was then tested on the US EyePACS 10K dataset (5.8% non-Hispanic White, 99.9% diabetic), composed of 18,900 images from 8969 diabetic individuals. Elevated ASCVD risk was defined as a PCE score of ≥7.5%.
Results: In the UK Biobank internal validation dataset, the DL model achieved an area under the receiver operating characteristic curve of 0.89, sensitivity of 84%, and specificity of 90% for detecting individuals with elevated ASCVD risk scores. In EyePACS 10K, with the addition of a regression-derived diabetes modifier, it achieved sensitivity of 94%, specificity of 72%, mean error of -0.2%, and mean absolute error of 3.1%.
Conclusion: This study demonstrates that DL models using retinal images can provide an additional approach to estimating ASCVD risk, shows the value of applying DL models to different external datasets, and highlights opportunities for ASCVD risk assessment in patients living with diabetes.
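Because elevated risk is defined by a fixed PCE threshold, the reported sensitivity and specificity reduce to a simple thresholding exercise, sketched below with synthetic risk scores rather than the study's model outputs.

```python
# Minimal sketch (synthetic numbers): calling "elevated ASCVD risk" at the
# 7.5% PCE threshold and computing sensitivity and specificity.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
true_risk = rng.uniform(0, 30, size=2000)                   # % PCE scores (ground truth)
predicted_risk = true_risk + rng.normal(0, 3, size=2000)    # % model estimates

y_true = (true_risk >= 7.5).astype(int)        # elevated risk per PCE
y_pred = (predicted_risk >= 7.5).astype(int)   # elevated risk per model

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```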
Affiliation(s)
- Li Xie
- Toku Eyes, Auckland, New Zealand
- John Marshall
- Institute of Ophthalmology, University College of London, London, United Kingdom
- Michael V. McConnell
- Division of Cardiovascular Medicine, Stanford University School of Medicine, Stanford, California
- Matthew Budoff
- Department of Medicine, Lundquist Institute at Harbor-UCLA Medical Center, Torrance, California
15
O'Connor MI. Equity360: Gender, Race, and Ethnicity-The Power of AI to Improve or Worsen Health Disparities. Clin Orthop Relat Res 2024; 482:591-594. [PMID: 38289705 PMCID: PMC10937006 DOI: 10.1097/corr.0000000000002986]
Affiliation(s)
- Mary I O'Connor
- Co-founder and Chief Medical Officer, Vori Health, Jacksonville, FL, USA
16
Hong J, Yoon S, Shim KW, Park YR. Screening of Moyamoya Disease From Retinal Photographs: Development and Validation of Deep Learning Algorithms. Stroke 2024; 55:715-724. [PMID: 38258570 PMCID: PMC10896198 DOI: 10.1161/strokeaha.123.044026]
Abstract
BACKGROUND: Moyamoya disease (MMD) is a rare and complex pathological condition characterized by an abnormal collateral circulation network in the basal brain. The diagnosis of MMD and its progression is unpredictable and influenced by many factors. MMD can affect the blood vessels supplying the eyes, resulting in a range of ocular symptoms. In this study, we developed a deep learning model using real-world data to assist diagnosis and determine the stage of the disease from retinal photographs.
METHODS: This retrospective observational study, conducted from August 2006 to March 2022, included 498 retinal photographs from 78 patients with MMD and 3835 photographs from 1649 healthy participants. Photographs were preprocessed, and a ResNeXt50 model was developed. Model performance was measured using receiver operating characteristic curves and their area under the curve (AUC), accuracy, sensitivity, and F1-score. Heatmaps and progressive erasing plus progressive restoration were used to validate faithfulness.
RESULTS: Overall, 322 retinal photographs from 67 patients with MMD and 3752 retinal photographs from 1616 healthy participants were used to develop a screening and stage prediction model for MMD. The average age of the patients with MMD was 44.1 years, and the average follow-up time was 115 months. Stage 3 photographs were the most prevalent, followed by stages 4, 5, 2, 1, and 6 and healthy. The MMD screening model had an average AUC of 94.6%, with 89.8% sensitivity and 90.4% specificity at the best cutoff point. The MMD stage prediction models had AUCs of 78% or higher, with stage 3 performing best at 93.6%. Heatmaps identified the vascular region of the fundus as important for prediction, and progressive erasing plus progressive restoration achieved an AUC of 70% using only 50% of the important regions.
CONCLUSIONS: This study demonstrated that retinal photographs could serve as potential biomarkers for screening and staging of MMD and that the disease stage could be classified by a deep learning algorithm.
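A ResNeXt50 classifier of the kind described can be set up in a few lines with torchvision. The seven-class head (healthy plus six MMD stages) and the input size are assumptions for illustration, not the authors' exact configuration or training code.

```python
# Minimal sketch (assumed setup): adapting torchvision's ResNeXt50 to classify
# retinal photographs into healthy plus six MMD stages.
import torch
import torch.nn as nn
from torchvision import models

n_classes = 7                                    # healthy + MMD stages 1-6
model = models.resnext50_32x4d(weights=None)     # or ImageNet-pretrained weights
model.fc = nn.Linear(model.fc.in_features, n_classes)

images = torch.randn(2, 3, 224, 224)             # synthetic preprocessed fundus photos
logits = model(images)
print(logits.shape)                              # torch.Size([2, 7])
```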
Affiliation(s)
- JaeSeong Hong
- Department of Biomedical Systems Informatics (J.H., Y.R.P.), Yonsei University College of Medicine, Seoul, Republic of Korea
- Sangchul Yoon
- Department of Medical Humanities and Social Sciences (S.Y.), Yonsei University College of Medicine, Seoul, Republic of Korea
- Kyu Won Shim
- Department of Neurosurgery (K.W.S.), Yonsei University College of Medicine, Seoul, Republic of Korea
- Yu Rang Park
- Department of Biomedical Systems Informatics (J.H., Y.R.P.), Yonsei University College of Medicine, Seoul, Republic of Korea
17
Tan Y, Ma Y, Rao S, Sun X. Performance of deep learning for detection of chronic kidney disease from retinal fundus photographs: A systematic review and meta-analysis. Eur J Ophthalmol 2024; 34:502-509. [PMID: 37671422 DOI: 10.1177/11206721231199848]
Abstract
OBJECTIVE: Deep learning has been used to detect chronic kidney disease (CKD) from retinal fundus photographs. We aimed to evaluate the performance of deep learning for CKD detection.
METHODS: Original studies in which CKD was detected by deep learning from retinal fundus photographs were eligible for inclusion. PubMed, Embase, the Cochrane Library, and Web of Science were searched up to October 31, 2022. The Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was used to assess the risk of bias.
RESULTS: Four studies enrolling 114,860 subjects were included. The pooled sensitivity and specificity were 87.8% (95% confidence interval (CI): 61.6% to 98.3%) and 62.4% (95% CI: 44.9% to 78.7%), respectively. The area under the curve (AUC) was 0.864 (95% CI: 0.769 to 0.986).
CONCLUSION: Deep learning based on retinal fundus photographs can detect CKD, but there is considerable room for improvement, and it remains some distance from clinical application.
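Pooled diagnostic accuracy is normally estimated with bivariate random-effects models; as a rough intuition for how per-study sensitivities are combined, the toy sketch below pools logit-transformed sensitivities with inverse-variance weights using made-up counts. It is a simplified approximation, not the meta-analytic method or data used in the review.

```python
# Minimal sketch (illustrative counts only): fixed-effect pooling of per-study
# sensitivity on the logit scale with inverse-variance weights.
import numpy as np

studies = [(85, 15), (120, 10), (60, 12), (200, 40)]   # (true positives, false negatives)

logits, weights = [], []
for tp, fn in studies:
    p = tp / (tp + fn)                  # per-study sensitivity
    logit = np.log(p / (1 - p))
    var = 1 / tp + 1 / fn               # approximate variance of the logit
    logits.append(logit)
    weights.append(1 / var)

pooled_logit = np.average(logits, weights=weights)
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
print(f"Pooled sensitivity (toy fixed-effect estimate): {pooled_sens:.3f}")
```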
Affiliation(s)
- Yuhe Tan
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Yunxi Ma
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Suyun Rao
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Xufang Sun
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
18
Huang Y, Cheung CY, Li D, Tham YC, Sheng B, Cheng CY, Wang YX, Wong TY. AI-integrated ocular imaging for predicting cardiovascular disease: advancements and future outlook. Eye (Lond) 2024; 38:464-472. [PMID: 37709926 PMCID: PMC10858189 DOI: 10.1038/s41433-023-02724-4]
Abstract
Cardiovascular disease (CVD) remains the leading cause of death worldwide. Assessment of CVD risk plays an essential role in identifying individuals at higher risk and enables the implementation of targeted intervention strategies, leading to reduced CVD prevalence and improved patient survival rates. The ocular vasculature, particularly the retinal vasculature, has emerged as a potential means for CVD risk stratification due to the anatomical similarities and physiological characteristics it shares with other vital organs, such as the brain and heart. The integration of artificial intelligence (AI) into ocular imaging has the potential to overcome limitations associated with traditional semi-automated image analysis, including inefficiency and manual measurement errors. Furthermore, AI techniques may uncover novel and subtle features that contribute to the identification of ocular biomarkers associated with CVD. This review provides a comprehensive overview of advancements in AI-based ocular image analysis for predicting CVD, including the prediction of CVD risk factors, the replacement of traditional CVD biomarkers (e.g., CT-measured coronary artery calcium score), and the prediction of symptomatic CVD events. The review covers a range of ocular imaging modalities, including colour fundus photography, optical coherence tomography, and optical coherence tomography angiography, as well as other image types such as external eye images. Additionally, the review addresses the current limitations of AI research in this field and discusses the challenges associated with translating AI algorithms into clinical practice.
Affiliation(s)
- Yu Huang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Dawei Li
- College of Future Technology, Peking University, Beijing, China
- Yih Chung Tham
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ching Yu Cheng
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore.
- Tsinghua Medicine, Tsinghua University, Beijing, China.
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China.
19
Lee CJ, Rim TH, Kang HG, Yi JK, Lee G, Yu M, Park SH, Hwang JT, Tham YC, Wong TY, Cheng CY, Kim DW, Kim SS, Park S. Pivotal trial of a deep-learning-based retinal biomarker (Reti-CVD) in the prediction of cardiovascular disease: data from CMERC-HI. J Am Med Inform Assoc 2023; 31:130-138. [PMID: 37847669 PMCID: PMC10746299 DOI: 10.1093/jamia/ocad199]
Abstract
OBJECTIVE: The potential of using retinal images as a biomarker of cardiovascular disease (CVD) risk has gained significant attention, but regulatory approval of such artificial intelligence (AI) algorithms is lacking. In this regulated pivotal trial, we validated the efficacy of Reti-CVD, an AI software as a medical device (AI-SaMD) that utilizes retinal images to stratify CVD risk.
MATERIALS AND METHODS: In this retrospective study, we used data from the Cardiovascular and Metabolic Diseases Etiology Research Center-High Risk (CMERC-HI) Cohort. A Cox proportional hazards model was used to estimate the hazard ratio (HR) trend across the three-tier CVD risk groups (low, moderate, and high risk) defined by Reti-CVD for the prediction of CVD events. Cardiac computed tomography-measured coronary artery calcium (CAC), carotid intima-media thickness (CIMT), and brachial-ankle pulse wave velocity (baPWV) were compared with Reti-CVD.
RESULTS: A total of 1106 participants were included, and 33 (3.0%) experienced CVD events over 5 years; the Reti-CVD-defined risk groups (low, moderate, and high) were significantly associated with increased CVD risk (HR trend, 2.02; 95% CI, 1.26-3.24). When Reti-CVD, CAC, CIMT, baPWV, and other traditional risk factors were all incorporated into one Cox model, only the Reti-CVD risk groups remained significantly associated with increased CVD risk (HR = 2.40 [0.82-7.03] for moderate risk and HR = 3.56 [1.34-9.51] for high risk, with low risk as the reference).
DISCUSSION: This regulated pivotal study validated an AI-SaMD, retinal image-based, personalized CVD risk scoring system (Reti-CVD).
CONCLUSION: These results led the Korean regulatory body to authorize Reti-CVD.
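The hazard-ratio trend across ordered risk groups comes from a Cox proportional hazards model; a minimal sketch with the lifelines package and synthetic survival data is shown below (it is not the trial's analysis code, and the simulated effect size is arbitrary).

```python
# Minimal sketch (synthetic survival data): fitting a Cox proportional hazards
# model to estimate the hazard ratio across ordered risk groups
# (0 = low, 1 = moderate, 2 = high), analogous to the HR trend above.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 1000
risk_group = rng.integers(0, 3, size=n)                 # 3-tier grouping, Reti-CVD style
baseline_hazard = 0.02 * np.exp(0.7 * risk_group)       # higher group -> higher hazard
time_to_event = rng.exponential(1 / baseline_hazard)
df = pd.DataFrame({
    "duration": np.minimum(time_to_event, 5),           # administratively censor at 5 years
    "event": (time_to_event < 5).astype(int),
    "risk_group": risk_group,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.summary[["coef", "exp(coef)"]])               # exp(coef) ≈ per-group HR trend
```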
Affiliation(s)
- Chan Joo Lee
- Division of Cardiology, Severance Cardiovascular Hospital, Yonsei University College of Medicine, Seoul 03722, South Korea
- Tyler Hyungtaek Rim
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Mediwhale Inc, Seoul 08378, South Korea
- Hyun Goo Kang
- Division of Retina, Severance Eye Hospital, Yonsei University College of Medicine, Seoul 03722, South Korea
- Joseph Keunhong Yi
- Department of Ophthalmology and Visual Science, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Marco Yu
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Soo-Hyun Park
- Food Functionality Research Division, Korea Food Research Institute, Wanju 55365, South Korea
- Jin-Taek Hwang
- Food Functionality Research Division, Korea Food Research Institute, Wanju 55365, South Korea
- Department of Food Biotechnology, University of Science and Technology, Daejeon 34113, South Korea
- Yih-Chung Tham
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Centre for Innovation and Precision Eye Health, and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117597, Singapore
- Tien Yin Wong
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Tsinghua Medicine, Tsinghua University, Beijing 100084, China
- Ching-Yu Cheng
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Centre for Innovation and Precision Eye Health, and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117597, Singapore
- Dong Wook Kim
- Department of Information and Statistics, Department of Bio & Medical Big Data, Research Institution of National Science (RINS), Gyeongsang National University, Jinju 52828, South Korea
- Sung Soo Kim
- Division of Retina, Severance Eye Hospital, Yonsei University College of Medicine, Seoul 03722, South Korea
- Sungha Park
- Division of Cardiology, Severance Cardiovascular Hospital, Yonsei University College of Medicine, Seoul 03722, South Korea
20
An S, Vaghefi E, Yang S, Xie L, Squirrell D. Examination of alternative eGFR definitions on the performance of deep learning models for detection of chronic kidney disease from fundus photographs. PLoS One 2023; 18:e0295073. [PMID: 38032977 PMCID: PMC10688656 DOI: 10.1371/journal.pone.0295073]
Abstract
Deep learning (DL) models have shown promise in detecting chronic kidney disease (CKD) from fundus photographs. However, previous studies have utilized a serum creatinine-only estimated glomerular filtration rate (eGFR) equation to measure kidney function despite the development of more up-to-date methods. In this study, we developed two sets of DL models using fundus images from the UK Biobank to ascertain the effects of using a creatinine and cystatin C eGFR equation, rather than the baseline creatinine-only eGFR equation, on fundus image-based DL CKD predictors. Our results show that the creatinine and cystatin C eGFR significantly improved classification performance over the baseline creatinine-only eGFR when the models were evaluated conventionally. However, these differences were no longer significant when the models were assessed on clinical labels based on ICD-10. Furthermore, we also observed variations in model performance and systemic condition incidence between our study and those conducted previously. We hypothesize that limitations in existing eGFR equations and the paucity of retinal features uniquely indicative of CKD may contribute to these inconsistencies. These findings emphasize the need for developing more transparent models to facilitate a better understanding of the mechanisms underpinning the ability of DL models to detect CKD from fundus images.
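For context, the sketch below implements the race-free CKD-EPI 2021 creatinine equation with commonly cited coefficients (an assumption to verify against the original publication before any use); an eGFR below 60 mL/min/1.73 m^2 is the usual label used to define CKD in studies of this kind.

```python
# Minimal sketch (coefficients as commonly cited for the CKD-EPI 2021
# creatinine equation; verify against the original publication): estimating
# GFR from serum creatinine, age, and sex.
def ckd_epi_2021_creatinine(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """Return eGFR in mL/min/1.73 m^2 from serum creatinine (mg/dL)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    ratio = scr_mg_dl / kappa
    egfr = 142 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.200 * 0.9938 ** age_years
    if female:
        egfr *= 1.012
    return egfr

egfr = ckd_epi_2021_creatinine(1.1, 60, female=False)
print(round(egfr, 1))        # example eGFR value
print(egfr < 60)             # CKD label under the eGFR < 60 definition
```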
Affiliation(s)
- Songyang An
- School of Optometry and Vision Science, The University of Auckland, Auckland, New Zealand
- Toku Eyes Limited NZ, Auckland, New Zealand
- Ehsan Vaghefi
- School of Optometry and Vision Science, The University of Auckland, Auckland, New Zealand
- Toku Eyes Limited NZ, Auckland, New Zealand
- Song Yang
- Toku Eyes Limited NZ, Auckland, New Zealand
- Li Xie
- Toku Eyes Limited NZ, Auckland, New Zealand
- David Squirrell
- Toku Eyes Limited NZ, Auckland, New Zealand
- Auckland District Health Board, Auckland, New Zealand
21
Prabhune A, Bhat S, Mallavaram A, Mehar Shagufta A, Srinivasan S. A Situational Analysis of the Impact of the COVID-19 Pandemic on Digital Health Research Initiatives in South Asia. Cureus 2023; 15:e48977. [PMID: 38111408 PMCID: PMC10726017 DOI: 10.7759/cureus.48977]
Abstract
The objective of this paper was to evaluate and compare the quantity and sustainability of digital health initiatives in the South Asia region before and during the COVID-19 pandemic. The study used a two-step methodology: (a) a descriptive analysis of digital health research articles published from 2016 to 2021 in South Asia, stratified by the diseases and conditions they addressed, geography, and the tasks to which each initiative was applied; and (b) a simple and replicable tool developed by the authors to assess the sustainability of digital health initiatives using experimental or observational study designs. The results of the descriptive analysis highlight the following: (a) there was a 40% increase in the number of studies reported in 2020 compared with 2019; (b) the three most common areas on which substantive digital health research has focused are health systems strengthening, ophthalmic disorders, and COVID-19; and (c) remote consultation, health information delivery, and clinical decision support systems are the three most commonly developed tools. We developed the sustainability assessment tool and estimated its inter-rater reliability, obtaining a kappa value of 0.806 (±0.088). We conclude that the COVID-19 pandemic has had a positive impact on digital health research, with an increase in the number of digital health initiatives and an improvement in the sustainability scores of studies published during the pandemic.
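Inter-rater agreement of the kind reported here is often summarized with Cohen's kappa; the sketch below shows the computation on made-up ratings from two hypothetical raters (the paper's exact kappa variant is not specified, so this is only illustrative).

```python
# Minimal sketch (made-up ratings): two-rater agreement via Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

# Hypothetical sustainability categories assigned by two independent raters.
rater_a = ["high", "high", "medium", "low", "medium", "high", "low", "medium"]
rater_b = ["high", "medium", "medium", "low", "medium", "high", "low", "low"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.3f}")
```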
Affiliation(s)
- Akash Prabhune
- Health and Information Technology, Institute of Health Management Research, Bangalore, IND
| | - Sachin Bhat
- Health and Information Technology, Institute of Health Management Research, Bangalore, IND
| | | | | | - Surya Srinivasan
- Health and Information Technology, Institute of Health Management Research, Bangalore, IND
| |
Collapse
|
22
|
Wu J, Duan C, Yang Y, Wang Z, Tan C, Han C, Hou X. Insights into the liver-eyes connections, from epidemiological, mechanical studies to clinical translation. J Transl Med 2023; 21:712. [PMID: 37817192 PMCID: PMC10566185 DOI: 10.1186/s12967-023-04543-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2023] [Accepted: 09/19/2023] [Indexed: 10/12/2023] Open
Abstract
Maintenance of internal homeostasis is a sophisticated process in which almost all organs are involved. The liver plays a central role in metabolism and participates in endocrine function, immunity, detoxification, and storage, and it therefore communicates with distant organs through these mechanisms to regulate pathophysiological processes. Liver dysfunction is often accompanied by pathological phenotypes in distant organs, including the eyes. Many reviews have focused on crosstalk between the liver and the gut, the liver and the brain, the liver and the heart, and the liver and the kidney, but little attention has been paid to the liver and the eyes. In this review, we summarize the intimate connections between the liver and the eyes from three perspectives. Epidemiologically, we outline potential liver-related protective and risk factors for common eye diseases, as well as ocular indicators of liver status. Mechanistically, we describe their inter-organ crosstalk in terms of metabolism (glucose, lipids, proteins, vitamins, and minerals), detoxification (ammonia and bilirubin), and immunity (complement and regulation of inflammation). For clinical application, we highlight the latest advances in using the liver-eye axis for disease diagnosis and therapy, including artificial intelligence and deep learning-based diagnostic tools for detecting liver disease and adeno-associated viral vector-based gene therapy for blinding eye diseases. We aim to provide novel insights into liver-eye communication and to help resolve clinically significant open issues.
Affiliation(s)
- Junhao Wu, Caihan Duan, Zhe Wang, Chen Tan, Chaoqun Han, Xiaohua Hou: Division of Gastroenterology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, 1277 Jiefang Avenue, Wuhan, 430022 Hubei, China
- Yuanfan Yang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
23
Yoon WT, Song SJ, Shin J. Deep Learning, the Retina, and Parkinson Disease-Reply. JAMA Ophthalmol 2023; 141:912. [PMID: 37440221 DOI: 10.1001/jamaophthalmol.2023.2921] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/14/2023]
Affiliation(s)
- Won Tae Yoon: Department of Neurology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Korea
- Su Jeong Song: Department of Ophthalmology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Korea
- Jitae Shin: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Korea
24
Delavari P, Ozturan G, Yuan L, Yilmaz Ö, Oruc I. Artificial intelligence, explainability, and the scientific method: A proof-of-concept study on novel retinal biomarker discovery. PNAS NEXUS 2023; 2:pgad290. [PMID: 37746328 PMCID: PMC10517742 DOI: 10.1093/pnasnexus/pgad290] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/20/2023] [Accepted: 08/28/2023] [Indexed: 09/26/2023]
Abstract
We present a structured approach to combining the explainability of artificial intelligence (AI) with the scientific method for scientific discovery. We demonstrate the utility of this approach in a proof-of-concept study in which we uncover biomarkers from a convolutional neural network (CNN) model trained to classify patient sex in retinal images, a trait not currently recognized by diagnosticians in retinal images yet one that CNNs classify successfully. Our methodology consists of four phases. In Phase 1, CNN development, we train a visual geometry group (VGG) model to recognize patient sex in retinal images. In Phase 2, Inspiration, we review visualizations obtained from post hoc interpretability tools to make observations and articulate exploratory hypotheses; here, we listed 14 exploratory hypotheses about retinal sex differences. In Phase 3, Exploration, we test all exploratory hypotheses on an independent dataset; of the 14, nine revealed significant differences. In Phase 4, Verification, we re-tested the nine flagged hypotheses on a new dataset. Five were verified, revealing (i) significantly greater length, (ii) more nodes, and (iii) more branches of retinal vasculature, (iv) greater retinal area covered by vessels in the superior temporal quadrant, and (v) a darker peripapillary region in male eyes. Finally, we trained a group of ophthalmologists (N = 26) to recognize the novel retinal features for sex classification. While their pre-training performance was not different from chance level or from the performance of a non-expert group (N = 31), after training their performance increased significantly (p < 0.001, d = 2.63). These findings showcase the potential for retinal biomarker discovery through CNN applications, with the added utility of empowering medical practitioners with new diagnostic capabilities to enhance their clinical toolkit.
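The Exploration phase above amounts to standard hypothesis tests on image-derived features. The sketch below illustrates one such test, a sex difference in total vessel length, together with an effect size; the feature table, file name and column names are hypothetical and this is not the authors' code.

```python
# Minimal sketch: test one candidate retinal feature for a male-female
# difference and report an effect size. Data and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("retinal_features.csv")  # hypothetical feature table
male = df.loc[df["sex"] == "M", "vessel_length_px"]
female = df.loc[df["sex"] == "F", "vessel_length_px"]

t, p = stats.ttest_ind(male, female, equal_var=False)  # Welch's t-test

# Cohen's d using a simple pooled standard deviation
pooled_sd = np.sqrt((male.var(ddof=1) + female.var(ddof=1)) / 2)
d = (male.mean() - female.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```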
Affiliation(s)
- Parsa Delavari: Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, V5Z 0A6 BC, Canada; Neuroscience, University of British Columbia, Djavad Mowafaghian Centre for Brain Health, Vancouver, V6T 1Z3 BC, Canada
- Gulcenur Ozturan: Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, V5Z 0A6 BC, Canada
- Lei Yuan: Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, V5Z 0A6 BC, Canada
- Özgür Yilmaz: Mathematics, University of British Columbia, Vancouver, V6T 1Z2 BC, Canada
- Ipek Oruc: Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, V5Z 0A6 BC, Canada; Neuroscience, University of British Columbia, Djavad Mowafaghian Centre for Brain Health, Vancouver, V6T 1Z3 BC, Canada
25
Warwick AN, Curran K, Hamill B, Stuart K, Khawaja AP, Foster PJ, Lotery AJ, Quinn M, Madhusudhan S, Balaskas K, Peto T. UK Biobank retinal imaging grading: methodology, baseline characteristics and findings for common ocular diseases. Eye (Lond) 2023; 37:2109-2116. [PMID: 36329166 PMCID: PMC10333328 DOI: 10.1038/s41433-022-02298-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2022] [Revised: 09/26/2022] [Accepted: 10/18/2022] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND/OBJECTIVES This study aims to describe the grading methods and baseline characteristics for UK Biobank (UKBB) participants who underwent retinal imaging in 2009-2010, and to characterise individuals with retinal features suggestive of age-related macular degeneration (AMD), glaucoma and retinopathy. METHODS Non-mydriatic colour fundus photographs and macular optical coherence tomography (OCT) scans were manually graded by Central Administrative Research Facility certified graders and quality assured by clinicians of the Network of Ophthalmic Reading Centres UK. Captured retinal features included those associated with AMD (≥1 drusen, pigmentary changes, geographic atrophy or exudative AMD; either imaging modality), glaucoma (≥0.7 cup-disc ratio, ≥0.2 cup-disc ratio difference between eyes, other abnormal disc features; photographs only) and retinopathy (characteristic features of diabetic retinopathy with or without microaneurysms; either imaging modality). Suspected cases of these conditions were characterised with reference to diagnostic records, physical and biochemical measurements. RESULTS Among 68,514 UKBB participants who underwent retinal imaging, the mean age was 57.3 years (standard deviation 8.2), 45.7% were men and 90.6% were of White ethnicity. A total of 64,367 participants had gradable colour fundus photographs and 68,281 had gradable OCT scans in at least one eye. Retinal features suggestive of AMD and glaucoma were identified in 15,176 and 2184 participants, of whom 125 (0.8%) and 188 (8.6%), respectively, had a recorded diagnosis. Of 264 participants identified to have retinopathy with microaneurysms, 251 (95.1%) had either diabetes or hypertension. CONCLUSIONS This dataset represents a valuable addition to what is currently available in UKBB, providing important insights to both ocular and systemic health.
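The photograph-based glaucoma criteria quoted above (cup-disc ratio >= 0.7, or an inter-eye cup-disc ratio difference >= 0.2) translate directly into a simple rule. The sketch below encodes only those two numeric criteria; it ignores the "other abnormal disc features" criterion and is an illustration, not the study's grading software.

```python
# Minimal sketch of the numeric cup-disc ratio rules described above.
def glaucoma_suspect(cdr_right: float, cdr_left: float) -> bool:
    """Flag a participant as a glaucoma suspect from cup-disc ratios alone."""
    return (
        cdr_right >= 0.7
        or cdr_left >= 0.7
        or abs(cdr_right - cdr_left) >= 0.2
    )

print(glaucoma_suspect(0.72, 0.55))  # True: large CDR in the right eye
print(glaucoma_suspect(0.45, 0.40))  # False: neither rule is met
```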
Affiliation(s)
- Alasdair N Warwick: Institute of Cardiovascular Science, University College London, London, UK; Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Katie Curran: Centre for Public Health, Queen's University Belfast, Faculty of Medicine Health and Life Sciences, Belfast, UK
- Barbra Hamill: Centre for Public Health, Queen's University Belfast, Faculty of Medicine Health and Life Sciences, Belfast, UK
- Kelsey Stuart: Institute of Ophthalmology, University College London, London, UK; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Anthony P Khawaja: Institute of Ophthalmology, University College London, London, UK; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Paul J Foster: Institute of Ophthalmology, University College London, London, UK; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Andrew J Lotery: Faculty of Medicine, Clinical and Experimental Sciences, University of Southampton, Southampton, UK; Medical Retina Service, University Hospital Southampton NHS Foundation Trust, Southampton, UK
- Michael Quinn: Centre for Public Health, Queen's University Belfast, Faculty of Medicine Health and Life Sciences, Belfast, UK
- Savita Madhusudhan: St. Paul's Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
- Konstantinos Balaskas: Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
- Tunde Peto: Centre for Public Health, Queen's University Belfast, Faculty of Medicine Health and Life Sciences, Belfast, UK
26
Joo YS, Rim TH, Koh HB, Yi J, Kim H, Lee G, Kim YA, Kang SW, Kim SS, Park JT. Non-invasive chronic kidney disease risk stratification tool derived from retina-based deep learning and clinical factors. NPJ Digit Med 2023; 6:114. [PMID: 37330576 DOI: 10.1038/s41746-023-00860-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2022] [Accepted: 06/09/2023] [Indexed: 06/19/2023] Open
Abstract
Despite the importance of preventing chronic kidney disease (CKD), predicting high-risk patients who require active intervention is challenging, especially in people with preserved kidney function. In this study, a predictive risk score for CKD (Reti-CKD score) was derived from a deep learning algorithm using retinal photographs. The performance of the Reti-CKD score was verified using two longitudinal cohorts of the UK Biobank and Korean Diabetic Cohort. Validation was done in people with preserved kidney function, excluding individuals with eGFR <90 mL/min/1.73 m2 or proteinuria at baseline. In the UK Biobank, 720/30,477 (2.4%) participants had CKD events during the 10.8-year follow-up period. In the Korean Diabetic Cohort, 206/5014 (4.1%) had CKD events during the 6.1-year follow-up period. When the validation cohorts were divided into quartiles of Reti-CKD score, the hazard ratios for CKD development were 3.68 (95% Confidence Interval [CI], 2.88-4.41) in the UK Biobank and 9.36 (5.26-16.67) in the Korean Diabetic Cohort in the highest quartile compared to the lowest. The Reti-CKD score, compared to eGFR based methods, showed a superior concordance index for predicting CKD incidence, with a delta of 0.020 (95% CI, 0.011-0.029) in the UK Biobank and 0.024 (95% CI, 0.002-0.046) in the Korean Diabetic Cohort. In people with preserved kidney function, the Reti-CKD score effectively stratifies future CKD risk with greater performance than conventional eGFR-based methods.
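The survival analysis pattern described above (hazard ratios across quartiles of a retina-derived score plus a concordance index) can be sketched with the lifelines package. The data file and column names are hypothetical, and the study's own covariate adjustments are omitted; this is an illustration of the general approach, not the authors' code.

```python
# Minimal sketch: hazard ratios for quartiles of a retina-derived CKD risk
# score (lowest quartile as reference) and a concordance index.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

df = pd.read_csv("ckd_followup.csv")  # hypothetical: reti_ckd, followup_years, ckd_event
df["quartile"] = pd.qcut(df["reti_ckd"], 4, labels=[1, 2, 3, 4]).astype(int)

# Dummy-code quartiles 2-4 so hazard ratios are relative to quartile 1.
dummies = pd.get_dummies(df["quartile"], prefix="q", drop_first=True).astype(float)
model_df = pd.concat([dummies, df[["followup_years", "ckd_event"]]], axis=1)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="followup_years", event_col="ckd_event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])

# Discrimination: a higher score should mean earlier CKD, hence the minus sign.
cidx = concordance_index(df["followup_years"], -df["reti_ckd"], df["ckd_event"])
print(f"C-index: {cidx:.3f}")
```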
Affiliation(s)
- Young Su Joo: Department of Internal Medicine, College of Medicine, Institute of Kidney Disease Research, Yonsei University, Seoul, Republic of Korea; Division of Nephrology, Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Republic of Korea
- Tyler Hyungtaek Rim: Mediwhale Inc, Seoul, Republic of Korea; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Hee Byung Koh: Department of Internal Medicine, College of Medicine, Institute of Kidney Disease Research, Yonsei University, Seoul, Republic of Korea; Department of Internal Medicine, International Saint Mary's Hospital, Catholic Kwandong University, Incheon, Republic of Korea
- Joseph Yi: Albert Einstein College of Medicine, New York, USA
- Young Ah Kim: Division of Digital Health, Yonsei University Health System, Seoul, Republic of Korea
- Shin-Wook Kang: Department of Internal Medicine, College of Medicine, Institute of Kidney Disease Research, Yonsei University, Seoul, Republic of Korea
- Sung Soo Kim: Department of Ophthalmology, Institute of Vision Research, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jung Tak Park: Department of Internal Medicine, College of Medicine, Institute of Kidney Disease Research, Yonsei University, Seoul, Republic of Korea
27
Li H, Cao J, Grzybowski A, Jin K, Lou L, Ye J. Diagnosing Systemic Disorders with AI Algorithms Based on Ocular Images. Healthcare (Basel) 2023; 11:1739. [PMID: 37372857 PMCID: PMC10298137 DOI: 10.3390/healthcare11121739] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2023] [Revised: 06/07/2023] [Accepted: 06/08/2023] [Indexed: 06/29/2023] Open
Abstract
The advent of artificial intelligence (AI), especially the state-of-the-art deep learning frameworks, has begun a silent revolution in all medical subfields, including ophthalmology. Due to their specific microvascular and neural structures, the eyes are anatomically associated with the rest of the body. Hence, ocular image-based AI technology may be a useful alternative or additional screening strategy for systemic diseases, especially where resources are scarce. This review summarizes the current applications of AI related to the prediction of systemic diseases from multimodal ocular images, including cardiovascular diseases, dementia, chronic kidney diseases, and anemia. Finally, we also discuss the current predicaments and future directions of these applications.
Affiliation(s)
- Huimin Li, Jing Cao, Kai Jin, Lixia Lou, Juan Ye: Eye Center, The Second Affiliated Hospital School of Medicine Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou 310009, China
- Andrzej Grzybowski: Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, 60-836 Poznan, Poland
28
Vasseneix C, Nusinovici S, Xu X, Hwang JM, Hamann S, Chen JJ, Loo JL, Milea L, Tan KBK, Ting DSW, Liu Y, Newman NJ, Biousse V, Wong TY, Milea D, Najjar RP. Deep Learning System Outperforms Clinicians in Identifying Optic Disc Abnormalities. J Neuroophthalmol 2023; 43:159-167. [PMID: 36719740 DOI: 10.1097/wno.0000000000001800] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
BACKGROUND The examination of the optic nerve head (optic disc) is mandatory in patients with headache, hypertension, or any neurological symptoms, yet it is rarely or poorly performed in general clinics. We recently developed a brain and optic nerve study with artificial intelligence-deep learning system (BONSAI-DLS) capable of accurately detecting optic disc abnormalities including papilledema (swelling due to elevated intracranial pressure) on digital fundus photographs with a comparable classification performance to expert neuro-ophthalmologists, but its performance compared to first-line clinicians remains unknown. METHODS In this international, cross-sectional multicenter study, the DLS, trained on 14,341 fundus photographs, was tested on a retrospectively collected convenience sample of 800 photographs (400 normal optic discs, 201 papilledema and 199 other abnormalities) from 454 patients with a robust ground truth diagnosis provided by the referring expert neuro-ophthalmologists. The areas under the receiver-operating-characteristic curves were calculated for the BONSAI-DLS. Error rates, accuracy, sensitivity, and specificity of the algorithm were compared with those of 30 clinicians with or without ophthalmic training (6 general ophthalmologists, 6 optometrists, 6 neurologists, 6 internists, 6 emergency department [ED] physicians) who graded the same testing set of images. RESULTS With an error rate of 15.3%, the DLS outperformed all clinicians (average error rates 24.4%, 24.8%, 38.2%, 44.8%, 47.9% for general ophthalmologists, optometrists, neurologists, internists and ED physicians, respectively) in the overall classification of optic disc appearance. The DLS displayed significantly higher accuracies than 100%, 86.7% and 93.3% of clinicians (n = 30) for the classification of papilledema, normal, and other disc abnormalities, respectively. CONCLUSIONS The performance of the BONSAI-DLS to classify optic discs on fundus photographs was superior to that of clinicians with or without ophthalmic training. A trained DLS may offer valuable diagnostic aid to clinicians from various clinical settings for the screening of optic disc abnormalities harboring potentially sight- or life-threatening neurological conditions.
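The head-to-head comparison above rests on simple classification metrics computed on a shared test set. A toy sketch follows; the label vectors are illustrative, not study data.

```python
# Minimal sketch: overall error rate for the model and for one clinician on
# the same 3-class test set (normal / papilledema / other). Labels are made up.
import numpy as np

truth = np.array(["normal", "papilledema", "other", "normal", "papilledema"])
dls_pred = np.array(["normal", "papilledema", "other", "normal", "other"])
clin_pred = np.array(["normal", "other", "other", "papilledema", "other"])

def error_rate(y_true, y_pred):
    return float(np.mean(y_true != y_pred))

print(f"DLS error rate:       {error_rate(truth, dls_pred):.1%}")
print(f"Clinician error rate: {error_rate(truth, clin_pred):.1%}")
```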
Affiliation(s)
- Caroline Vasseneix: Visual Neuroscience Group (CV, SN, DT, TYW, DM, RPN), Singapore Eye Research Institute, Singapore; Duke NUS Medical School (DT, TYW, DM, RPN), National University of Singapore, Singapore; Institute of High Performance Computing (XX, YL), Agency for Science, Technology and Research (A*STAR), Singapore; Department of Ophthalmology (J-MH), Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam-si, Korea (the Republic of); Department of Ophthalmology (SH), Rigshospitalet, University of Copenhagen, Kobenhavn, Denmark; Departments of Ophthalmology and Neurology (JJC), Mayo Clinic Rochester, Minnesota; Singapore National Eye Centre (JLL, DT, TYW, DM), Singapore; Berkeley University (LM), Berkeley, California; Department of Emergency Medicine (KT), Singapore General Hospital, Singapore; Departments of Ophthalmology, Neurology and Neurological Surgery (NJN, VB), Emory University School of Medicine, Atlanta, Georgia; and Department of Ophthalmology (RPN), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
29
Tan Y, Sun X. Ocular images-based artificial intelligence on systemic diseases. Biomed Eng Online 2023; 22:49. [PMID: 37208715 DOI: 10.1186/s12938-023-01110-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Accepted: 05/02/2023] [Indexed: 05/21/2023] Open
Abstract
PURPOSE To provide a summary of research advances in ocular image-based artificial intelligence for systemic diseases. METHODS Narrative literature review. RESULTS Ocular image-based artificial intelligence has been applied to a variety of systemic diseases, including endocrine, cardiovascular, neurological, renal, autoimmune, and hematological diseases, among many others. However, the studies are still at an early stage. The majority of studies have used AI only for disease diagnosis, and the specific mechanisms linking systemic diseases to ocular images remain unclear. In addition, the research has many limitations, such as limited numbers of images, the limited interpretability of artificial intelligence, rare diseases, and ethical and legal issues. CONCLUSION While ocular image-based artificial intelligence is widely used, the relationship between the eye and the whole body should be more clearly elucidated.
Affiliation(s)
- Yuhe Tan: Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Xufang Sun: Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, People's Republic of China
30
Babenko B, Traynis I, Chen C, Singh P, Uddin A, Cuadros J, Daskivich LP, Maa AY, Kim R, Kang EYC, Matias Y, Corrado GS, Peng L, Webster DR, Semturs C, Krause J, Varadarajan AV, Hammel N, Liu Y. A deep learning model for novel systemic biomarkers in photographs of the external eye: a retrospective study. Lancet Digit Health 2023; 5:e257-e264. [PMID: 36966118 DOI: 10.1016/s2589-7500(23)00022-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Revised: 01/13/2023] [Accepted: 01/31/2023] [Indexed: 03/27/2023]
Abstract
BACKGROUND Photographs of the external eye were recently shown to reveal signs of diabetic retinal disease and elevated glycated haemoglobin. This study aimed to test the hypothesis that external eye photographs contain information about additional systemic medical conditions. METHODS We developed a deep learning system (DLS) that takes external eye photographs as input and predicts systemic parameters, such as those related to the liver (albumin, aspartate aminotransferase [AST]); kidney (estimated glomerular filtration rate [eGFR], urine albumin-to-creatinine ratio [ACR]); bone or mineral (calcium); thyroid (thyroid stimulating hormone); and blood (haemoglobin, white blood cells [WBC], platelets). This DLS was trained using 123 130 images from 38 398 patients with diabetes undergoing diabetic eye screening in 11 sites across Los Angeles county, CA, USA. Evaluation focused on nine prespecified systemic parameters and leveraged three validation sets (A, B, C) spanning 25 510 patients with and without diabetes undergoing eye screening in three independent sites in Los Angeles county, CA, and the greater Atlanta area, GA, USA. We compared performance against baseline models incorporating available clinicodemographic variables (eg, age, sex, race and ethnicity, years with diabetes). FINDINGS Relative to the baseline, the DLS achieved statistically significant superior performance at detecting AST >36·0 U/L, calcium <8·6 mg/dL, eGFR <60·0 mL/min/1·73 m2, haemoglobin <11·0 g/dL, platelets <150·0 × 103/μL, ACR ≥300 mg/g, and WBC <4·0 × 103/μL on validation set A (a population resembling the development datasets), with the area under the receiver operating characteristic curve (AUC) of the DLS exceeding that of the baseline by 5·3-19·9% (absolute differences in AUC). On validation sets B and C, with substantial patient population differences compared with the development datasets, the DLS outperformed the baseline for ACR ≥300·0 mg/g and haemoglobin <11·0 g/dL by 7·3-13·2%. INTERPRETATION We found further evidence that external eye photographs contain biomarkers spanning multiple organ systems. Such biomarkers could enable accessible and non-invasive screening of disease. Further work is needed to understand the translational implications. FUNDING Google.
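The comparison against a clinicodemographic baseline described above can be sketched as follows. Everything here is hypothetical (file name, column names, and the 0/1 coding of sex), and for brevity the baseline is fitted and evaluated on the same set, whereas the study used separate development data.

```python
# Minimal sketch (not the study's code): AUC of an image-derived score versus
# a logistic-regression baseline built from clinicodemographic variables,
# for one prespecified target (haemoglobin < 11 g/dL).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("validation_set_a.csv")        # hypothetical table
y = (df["hemoglobin_g_dl"] < 11.0).astype(int)

covars = df[["age", "sex", "years_with_diabetes"]]  # sex assumed coded 0/1
baseline = LogisticRegression(max_iter=1000).fit(covars, y)

auc_base = roc_auc_score(y, baseline.predict_proba(covars)[:, 1])
auc_dls = roc_auc_score(y, df["dls_score_anaemia"])  # hypothetical DLS output
print(f"baseline AUC: {auc_base:.3f}   DLS AUC: {auc_dls:.3f}")
```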
Affiliation(s)
- Lauren P Daskivich: Ophthalmic Services and Eye Health Programs, Los Angeles County Department of Health Services, Los Angeles, CA, USA; Department of Ophthalmology, University of Southern California Keck School of Medicine/Roski Eye Institute, Los Angeles, CA, USA
- April Y Maa: Department of Ophthalmology, Emory University School of Medicine, Atlanta, GA, USA; Regional Telehealth Services, Technology-based Eye Care Services (TECS) division, Veterans Integrated Service Network (VISN) 7, Decatur, GA, USA
- Ramasamy Kim: Aravind Eye Hospital, Madurai, Tamil Nadu, India
- Eugene Yu-Chuan Kang: Department of Ophthalmology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Yun Liu: Google Health, Palo Alto, CA, USA
31
Yi JK, Rim TH, Park S, Kim SS, Kim HC, Lee CJ, Kim H, Lee G, Lim JSG, Tan YY, Yu M, Tham YC, Bakhai A, Shantsila E, Leeson P, Lip GYH, Chin CWL, Cheng CY. Cardiovascular disease risk assessment using a deep-learning-based retinal biomarker: a comparison with existing risk scores. EUROPEAN HEART JOURNAL. DIGITAL HEALTH 2023; 4:236-244. [PMID: 37265875 PMCID: PMC10232236 DOI: 10.1093/ehjdh/ztad023] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Revised: 02/25/2023] [Accepted: 03/24/2023] [Indexed: 06/03/2023]
Abstract
Aims This study aims to evaluate the ability of a deep-learning-based cardiovascular disease (CVD) retinal biomarker, Reti-CVD, to identify individuals with intermediate- and high-risk for CVD. Methods and results We defined the intermediate- and high-risk groups according to Pooled Cohort Equation (PCE), QRISK3, and modified Framingham Risk Score (FRS). Reti-CVD's prediction was compared to the number of individuals identified as intermediate- and high-risk according to standard CVD risk assessment tools, and sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated to assess the results. In the UK Biobank, among 48 260 participants, 20 643 (42.8%) and 7192 (14.9%) were classified into the intermediate- and high-risk groups according to PCE, and QRISK3, respectively. In the Singapore Epidemiology of Eye Diseases study, among 6810 participants, 3799 (55.8%) were classified as intermediate- and high-risk group according to modified FRS. Reti-CVD identified PCE-based intermediate- and high-risk groups with a sensitivity, specificity, PPV, and NPV of 82.7%, 87.6%, 86.5%, and 84.0%, respectively. Reti-CVD identified QRISK3-based intermediate- and high-risk groups with a sensitivity, specificity, PPV, and NPV of 82.6%, 85.5%, 49.9%, and 96.6%, respectively. Reti-CVD identified intermediate- and high-risk groups according to the modified FRS with a sensitivity, specificity, PPV, and NPV of 82.1%, 80.6%, 76.4%, and 85.5%, respectively. Conclusion The retinal photograph biomarker (Reti-CVD) was able to identify individuals with intermediate and high-risk for CVD, in accordance with existing risk assessment tools.
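The agreement metrics above (sensitivity, specificity, PPV and NPV of Reti-CVD against a risk-score-based reference) come straight from a 2x2 table. A toy sketch with made-up 0/1 vectors:

```python
# Minimal sketch: Reti-CVD risk-group assignment evaluated against a
# PCE-based intermediate/high-risk reference. Vectors are illustrative.
import numpy as np

reference = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # reference: intermediate/high risk
test = np.array([1, 0, 0, 0, 1, 1, 1, 0])       # Reti-CVD: intermediate/high risk

tp = int(np.sum((test == 1) & (reference == 1)))
tn = int(np.sum((test == 0) & (reference == 0)))
fp = int(np.sum((test == 1) & (reference == 0)))
fn = int(np.sum((test == 0) & (reference == 1)))

print(f"sensitivity = {tp / (tp + fn):.2f}")
print(f"specificity = {tn / (tn + fp):.2f}")
print(f"PPV         = {tp / (tp + fp):.2f}")
print(f"NPV         = {tn / (tn + fn):.2f}")
```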
Affiliation(s)
- Joseph Keunhong Yi: Albert Einstein College of Medicine, 1300 Morris Park Ave, Bronx, NY 10461, USA
- Tyler Hyungtaek Rim: Singapore Eye Research Institute, Singapore National Eye Centre, The Academia, 20 College Rd, Level 6 Discovery Tower, Singapore 169856, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, 8 College Rd, Singapore 169857, Singapore; Mediwhale Inc., 43, Digital-ro 34-gil, Guro-gu, Seoul 08378, Republic of Korea
- Sungha Park: Division of Cardiology, Severance Cardiovascular Hospital, Yonsei University College of Medicine, 50-1, Yonsei-Ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Sung Soo Kim: Division of Retina, Severance Eye Hospital, Yonsei University College of Medicine, 50-1, Yonsei-Ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Hyeon Chang Kim: Department of Preventive Medicine, Yonsei University College of Medicine, 50-1, Yonsei-Ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Chan Joo Lee: Division of Cardiology, Severance Cardiovascular Hospital, Yonsei University College of Medicine, 50-1, Yonsei-Ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Hyeonmin Kim: Mediwhale Inc., 43, Digital-ro 34-gil, Guro-gu, Seoul 08378, Republic of Korea
- Geunyoung Lee: Mediwhale Inc., 43, Digital-ro 34-gil, Guro-gu, Seoul 08378, Republic of Korea
- James Soo Ghim Lim: Mediwhale Inc., 43, Digital-ro 34-gil, Guro-gu, Seoul 08378, Republic of Korea
- Yong Yu Tan: School of Medicine, University College Cork, College Road, Cork T12 K8AF, Ireland
- Marco Yu: Singapore Eye Research Institute, Singapore National Eye Centre, The Academia, 20 College Rd, Level 6 Discovery Tower, Singapore 169856, Singapore
- Yih-Chung Tham: Singapore Eye Research Institute, Singapore National Eye Centre, The Academia, 20 College Rd, Level 6 Discovery Tower, Singapore 169856, Singapore; Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Dr, Singapore 117597, Singapore
- Ameet Bakhai: Department of Cardiology, Royal Free Hospital London NHS Foundation Trust, Barnet General Hospital, Pond St, London NW3 2QG, UK; Amore Health Ltd, London, UK
- Eduard Shantsila: Department of Primary Care and Mental Health, University of Liverpool, Liverpool L69 3BX, UK; Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart & Chest Hospital, Liverpool L69 3BX, UK
- Paul Leeson: Cardiovascular Clinical Research Facility, RDM Division of Cardiovascular Medicine, University of Oxford, Oxford OX1 2JD, UK
- Gregory Y H Lip: Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart & Chest Hospital, Liverpool L69 3BX, UK; Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
- Calvin W L Chin: National Heart Research Institute Singapore, National Heart Centre Singapore, 5 Hospital Dr, Singapore 169609, Singapore
- Ching-Yu Cheng: Singapore Eye Research Institute, Singapore National Eye Centre, The Academia, 20 College Rd, Level 6 Discovery Tower, Singapore 169856, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, 8 College Rd, Singapore 169857, Singapore; Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Dr, Singapore 117597, Singapore
32
Huang W, Suominen H, Liu T, Rice G, Salomon C, Barnard AS. Explainable discovery of disease biomarkers: The case of ovarian cancer to illustrate the best practice in machine learning and Shapley analysis. J Biomed Inform 2023; 141:104365. [PMID: 37062419 DOI: 10.1016/j.jbi.2023.104365] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 03/24/2023] [Accepted: 04/10/2023] [Indexed: 04/18/2023]
Abstract
OBJECTIVE Ovarian cancer is a significant health issue with lasting impacts on the community. Despite recent advances in surgical, chemotherapeutic and radiotherapeutic interventions, these have had only marginal impact owing to an inability to identify biomarkers at an early stage. Biomarker discovery is challenging, yet essential for improving drug discovery and clinical care. Machine learning (ML) techniques are invaluable for recognising complex patterns in biomarkers compared with conventional methods, yet they can lack physical insight into diagnosis. eXplainable Artificial Intelligence (XAI) can provide deeper insight into the decision-making of complex ML algorithms, increasing their applicability. We aim to introduce best practice for combining ML and XAI techniques for biomarker validation tasks. METHODS We focused on classification tasks and a game-theoretic approach based on Shapley values to build and evaluate models and visualise results. We describe the workflow and apply the pipeline in a case study using the CDAS PLCO Ovarian Biomarkers dataset to demonstrate the potential for accuracy and utility. RESULTS The case study results demonstrate the efficacy of the ML pipeline, its consistency, and its advantages compared with conventional statistical approaches. CONCLUSION The resulting guidelines provide a general framework for the practical application of XAI in medical research that can inform clinicians and validate and explain cancer biomarkers.
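The Shapley-value step of a pipeline like the one described above is typically run with the shap library on a fitted model. The sketch below ranks biomarkers by mean absolute SHAP value; the dataset, file name and column names are hypothetical, and the model choice is illustrative rather than the paper's configuration.

```python
# Minimal sketch: fit a tree-based classifier on a biomarker table and rank
# features by mean absolute SHAP value. Data and column names are hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

df = pd.read_csv("ovarian_biomarkers.csv")      # hypothetical biomarker table
X, y = df.drop(columns=["cancer"]), df["cancer"]

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # one value per sample x feature

importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False).head(10))
```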
Affiliation(s)
- Weitong Huang: School of Computing, Australian National University, Acton, ACT 2601, Australia
- Hanna Suominen: School of Computing, Australian National University, Acton, ACT 2601, Australia; Department of Computing, University of Turku, Turku, Finland
- Tommy Liu: School of Computing, Australian National University, Acton, ACT 2601, Australia
- Gregory Rice: Exosome Biology Laboratory, Centre for Clinical Diagnostics, University of Queensland Centre for Clinical Research, Royal Brisbane and Women's Hospital, Faculty of Medicine, The University of Queensland, Brisbane, Australia; Inoviq Limited, Notting Hill, Australia
- Carlos Salomon: Exosome Biology Laboratory, Centre for Clinical Diagnostics, University of Queensland Centre for Clinical Research, Royal Brisbane and Women's Hospital, Faculty of Medicine, The University of Queensland, Brisbane, Australia; Translational Extracellular Vesicles in Obstetrics and Gynae-Oncology Group, Centre for Clinical Diagnostics, University of Queensland Centre for Clinical Research, Royal Brisbane and Women's Hospital, Faculty of Medicine, The University of Queensland, Brisbane, Australia
- Amanda S Barnard: School of Computing, Australian National University, Acton, ACT 2601, Australia
33
Zhu Z, Shi D, Guankai P, Tan Z, Shang X, Hu W, Liao H, Zhang X, Huang Y, Yu H, Meng W, Wang W, Ge Z, Yang X, He M. Retinal age gap as a predictive biomarker for mortality risk. Br J Ophthalmol 2023; 107:547-554. [PMID: 35042683 DOI: 10.1136/bjophthalmol-2021-319807] [Citation(s) in RCA: 47] [Impact Index Per Article: 47.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Accepted: 10/27/2021] [Indexed: 01/09/2023]
Abstract
AIM To develop a deep learning (DL) model that predicts age from fundus images (retinal age) and to investigate the association between the retinal age gap (retinal age predicted by the DL model minus chronological age) and mortality risk. METHODS A total of 80 169 fundus images of reasonable quality, taken from 46 969 participants in the UK Biobank, were included in this study. Of these, 19 200 fundus images from 11 052 participants without prior medical history at the baseline examination were used to train and validate the DL model for age prediction using fivefold cross-validation. A total of 35 913 of the remaining 35 917 participants had available mortality data and were used to investigate the association between retinal age gap and mortality. RESULTS The DL model achieved a strong correlation of 0.81 (p<0.001) between retinal age and chronological age, and an overall mean absolute error of 3.55 years. Cox regression models showed that each 1-year increase in the retinal age gap was associated with a 2% increase in the risk of all-cause mortality (hazard ratio (HR)=1.02, 95% CI 1.00 to 1.03, p=0.020) and a 3% increase in the risk of cause-specific mortality attributable to non-cardiovascular and non-cancer disease (HR=1.03, 95% CI 1.00 to 1.05, p=0.041) after multivariable adjustment. No significant association was identified between the retinal age gap and cardiovascular- or cancer-related mortality. CONCLUSIONS Our findings indicate that the retinal age gap might be a potential biomarker of ageing that is closely related to the risk of mortality, implying the potential of retinal images as a screening tool for risk stratification and the delivery of tailored interventions.
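The retinal-age-gap analysis above combines a regression-quality metric (mean absolute error) with a Cox model in which the gap is the exposure. A minimal lifelines sketch follows, with hypothetical columns and only chronological age as an adjustment, whereas the study adjusted for more covariates.

```python
# Minimal sketch (not the study's code): compute the retinal age gap, the
# age-prediction MAE, and a Cox hazard ratio per 1-year increase in the gap.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("retinal_age.csv")  # hypothetical: retinal_age, chron_age, followup_years, died
df["age_gap"] = df["retinal_age"] - df["chron_age"]

mae = df["age_gap"].abs().mean()
print(f"MAE of age prediction: {mae:.2f} years")

cph = CoxPHFitter()
cph.fit(df[["age_gap", "chron_age", "followup_years", "died"]],
        duration_col="followup_years", event_col="died")
print(cph.summary.loc["age_gap", ["exp(coef)", "p"]])  # HR per 1-year gap increase
```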
Affiliation(s)
- Zhuoting Zhu, Xianwen Shang, Xueli Zhang, Yu Huang, Honghua Yu, Xiaohong Yang: Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, China
- Danli Shi, Wei Wang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, People's Republic of China
- Peng Guankai, Wei Meng: Guangzhou Vision Tech Medical Technology Co., Ltd, Guangzhou, China
- Zachary Tan, Wenyi Hu: Centre for Eye Research Australia; Ophthalmology, University of Melbourne, East Melbourne, Victoria, Australia
- Huan Liao: Neural Regeneration Group, Institute of Reconstructive Neurobiology, University of Bonn, Bonn, Germany
- Zongyuan Ge: Monash e-Research Centre, Monash University, Melbourne, Victoria, Australia; Monash Medical AI Group, Monash University, Melbourne, Victoria, Australia
- Mingguang He: Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, China; State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, People's Republic of China; Centre for Eye Research Australia; Ophthalmology, University of Melbourne, East Melbourne, Victoria, Australia
34
Chan YK, Cheng CY, Sabanayagam C. Eyes as the windows into cardiovascular disease in the era of big data. Taiwan J Ophthalmol 2023; 13:151-167. [PMID: 37484607 PMCID: PMC10361436 DOI: 10.4103/tjo.tjo-d-23-00018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Accepted: 04/11/2023] [Indexed: 07/25/2023] Open
Abstract
Cardiovascular disease (CVD) is a major cause of mortality and morbidity worldwide and imposes significant socioeconomic burdens, especially when diagnosed late. There is growing evidence of strong correlations between ocular images, which are information-dense, and CVD progression. The accelerating development of deep learning algorithms (DLAs) is a promising avenue for research into CVD biomarker discovery, early CVD diagnosis, and CVD prognostication. We review a selection of 17 recent DLAs in the less-explored realm of DL applied to ocular images to produce CVD outcomes, potential challenges in their clinical deployment, and the path forward. The evidence for CVD manifestations in ocular images is well documented. Most of the reviewed DLAs analyze retinal fundus photographs to predict CV risk factors, in particular hypertension. DLAs can predict age, sex, smoking status, alcohol status, body mass index, mortality, myocardial infarction, stroke, chronic kidney disease, and hematological disease with significant accuracy. While the cardio-oculomics intersection is now burgeoning, much remains to be explored. The increasing availability of big data, computational power, technological literacy, and acceptance all prime this subfield for rapid growth. We pinpoint the specific areas of improvement needed for ubiquitous clinical deployment: increased generalizability, external validation, and universal benchmarking. DLAs capable of predicting CVD outcomes from ocular inputs are of great interest and hold promise for individualized precision medicine and for efficiency in the provision of health care; initial results are impactful, although real-world efficacy remains to be determined.
Affiliation(s)
- Yarn Kit Chan: Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Ching-Yu Cheng: Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Center for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Charumathi Sabanayagam: Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
35
Lee YJ, Sun S, Kim YK, Jeoung JW, Park KH. Diagnostic ability of macular microvasculature with swept-source OCT angiography for highly myopic glaucoma using deep learning. Sci Rep 2023; 13:5209. [PMID: 36997639 PMCID: PMC10063664 DOI: 10.1038/s41598-023-32164-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2022] [Accepted: 03/23/2023] [Indexed: 04/01/2023] Open
Abstract
Macular OCT angiography (OCTA) measurements have been reported to be useful for glaucoma diagnostics. However, research on highly myopic glaucoma is lacking, and the diagnostic value of macular OCTA measurements versus OCT parameters remains inconclusive. We aimed to evaluate the diagnostic ability of the macular microvasculature assessed with OCTA for highly myopic glaucoma and to compare it with that of macular thickness parameters, using deep learning (DL). A DL model was trained, validated and tested using 260 pairs of macular OCTA and OCT images from 260 eyes (203 eyes with highly myopic glaucoma, 57 eyes with healthy high myopia). The DL model achieved an AUC of 0.946 with the OCTA superficial capillary plexus (SCP) images, which was comparable to that with the OCT GCL+ (ganglion cell layer + inner plexiform layer; AUC, 0.982; P = 0.268) or OCT GCL++ (retinal nerve fiber layer + ganglion cell layer + inner plexiform layer) images (AUC, 0.997; P = 0.101), and significantly superior to that with the OCTA deep capillary plexus images (AUC, 0.779; P = 0.028). The DL model with macular OCTA SCP images demonstrated excellent and comparable diagnostic ability to that with macular OCT images in highly myopic glaucoma, which suggests macular OCTA microvasculature could serve as a potential biomarker for glaucoma diagnosis in high myopia.
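One simple way to attach a confidence interval to AUC differences like those reported above is a paired bootstrap over test eyes, sketched below. This is an illustration with hypothetical score columns, not the statistical test used in the paper.

```python
# Minimal sketch: paired bootstrap CI for the AUC difference between two
# imaging modalities scored on the same test eyes. Columns are hypothetical.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("test_predictions.csv")  # glaucoma, score_octa_scp, score_oct_gcl
rng = np.random.default_rng(0)

diffs = []
for _ in range(2000):
    idx = rng.integers(0, len(df), len(df))   # resample eyes with replacement
    boot = df.iloc[idx]
    if boot["glaucoma"].nunique() < 2:        # AUC needs both classes present
        continue
    diffs.append(roc_auc_score(boot["glaucoma"], boot["score_octa_scp"])
                 - roc_auc_score(boot["glaucoma"], boot["score_oct_gcl"]))

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference 95% CI: [{lo:.3f}, {hi:.3f}]")
```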
Affiliation(s)
- Yun Jeong Lee, Young Kook Kim, Jin Wook Jeoung, Ki Ho Park: Department of Ophthalmology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Sukkyu Sun: Biomedical Research Institute, Seoul National University Hospital, Seoul, Korea
36
Wen J, Liu D, Wu Q, Zhao L, Iao WC, Lin H. Retinal image‐based artificial intelligence in detecting and predicting kidney diseases: Current advances and future perspectives. VIEW 2023. [DOI: 10.1002/viw.20220070] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/22/2023] Open
Affiliation(s)
- Jingyi Wen, Dong Liu, Qianni Wu, Lanqin Zhao, Wai Cheng Iao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
37
Glocker B, Jones C, Bernhardt M, Winzeck S. Algorithmic encoding of protected characteristics in chest X-ray disease detection models. EBioMedicine 2023; 89:104467. [PMID: 36791660 PMCID: PMC10025760 DOI: 10.1016/j.ebiom.2023.104467] [Citation(s) in RCA: 17] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Revised: 01/23/2023] [Accepted: 01/24/2023] [Indexed: 02/16/2023] Open
Abstract
BACKGROUND It has been rightfully emphasized that the use of AI for clinical decision making could amplify health disparities. An algorithm may encode protected characteristics, and then use this information for making predictions due to undesirable correlations in the (historical) training data. It remains unclear how we can establish whether such information is actually used. Besides the scarcity of data from underserved populations, very little is known about how dataset biases manifest in predictive models and how this may result in disparate performance. This article aims to shed some light on these issues by exploring methodology for subgroup analysis in image-based disease detection models. METHODS We utilize two publicly available chest X-ray datasets, CheXpert and MIMIC-CXR, to study performance disparities across race and biological sex in deep learning models. We explore test set resampling, transfer learning, multitask learning, and model inspection to assess the relationship between the encoding of protected characteristics and disease detection performance across subgroups. FINDINGS We confirm subgroup disparities in terms of shifted true and false positive rates which are partially removed after correcting for population and prevalence shifts in the test sets. We find that transfer learning alone is insufficient for establishing whether specific patient information is used for making predictions. The proposed combination of test-set resampling, multitask learning, and model inspection reveals valuable insights about the way protected characteristics are encoded in the feature representations of deep neural networks. INTERPRETATION Subgroup analysis is key for identifying performance disparities of AI models, but statistical differences across subgroups need to be taken into account when analyzing potential biases in disease detection. The proposed methodology provides a comprehensive framework for subgroup analysis enabling further research into the underlying causes of disparities. FUNDING European Research Council Horizon 2020, UK Research and Innovation.
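The subgroup analysis described above starts from per-subgroup true- and false-positive rates of the thresholded classifier. A minimal sketch with hypothetical column names (binary label, binary prediction, subgroup identifier) follows; the paper's prevalence-matched test-set resampling would be the next step on top of this.

```python
# Minimal sketch: true- and false-positive rates of a disease classifier
# computed separately per subgroup. Column names are hypothetical.
import pandas as pd

df = pd.read_csv("cxr_test_predictions.csv")  # columns: label, pred, subgroup

def tpr_fpr(g: pd.DataFrame) -> pd.Series:
    tp = ((g.pred == 1) & (g.label == 1)).sum()
    fn = ((g.pred == 0) & (g.label == 1)).sum()
    fp = ((g.pred == 1) & (g.label == 0)).sum()
    tn = ((g.pred == 0) & (g.label == 0)).sum()
    return pd.Series({"TPR": tp / (tp + fn), "FPR": fp / (fp + tn)})

print(df.groupby("subgroup").apply(tpr_fpr))
```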
Affiliation(s)
- Ben Glocker, Charles Jones, Mélanie Bernhardt, Stefan Winzeck: Department of Computing, Imperial College London, London, SW7 2AZ, UK
38
Iao WC, Zhang W, Wang X, Wu Y, Lin D, Lin H. Deep Learning Algorithms for Screening and Diagnosis of Systemic Diseases Based on Ophthalmic Manifestations: A Systematic Review. Diagnostics (Basel) 2023; 13:diagnostics13050900. [PMID: 36900043 PMCID: PMC10001234 DOI: 10.3390/diagnostics13050900] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 02/16/2023] [Accepted: 02/18/2023] [Indexed: 03/06/2023] Open
Abstract
Deep learning (DL) is the new high-profile technology in medical artificial intelligence (AI) for building screening and diagnostic algorithms for various diseases. The eye provides a window for observing neurovascular pathophysiological changes. Previous studies have proposed that ocular manifestations indicate systemic conditions, revealing a new route for disease screening and management. Multiple DL models have been developed for identifying systemic diseases based on ocular data; however, the methods and results vary immensely across studies. This systematic review aims to summarize the existing studies and provide an overview of the present and future aspects of DL-based algorithms for screening systemic diseases based on ophthalmic examinations. We performed a thorough search in PubMed®, Embase, and Web of Science for English-language articles published until August 2022. Among the 2873 articles collected, 62 were included for analysis and quality assessment. The selected studies mainly utilized eye appearance, retinal data, and eye movements as model input and covered a wide range of systemic diseases such as cardiovascular diseases, neurodegenerative diseases, and systemic health features. Despite the decent performance reported, most models lack disease specificity and generalizability for real-world application. This review summarizes the pros and cons of these approaches and discusses the prospects of implementing AI based on ocular data in real-world clinical scenarios.
Collapse
Affiliation(s)
- Wai Cheng Iao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Weixing Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Xun Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Yuxuan Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou 570311, China
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510060, China
| |
Collapse
|
39
|
Thakur S, Rim TH, Ting DSJ, Hsieh YT, Kim TI. Editorial: Big data and artificial intelligence in ophthalmology. Front Med (Lausanne) 2023; 10:1145522. [PMID: 36865059 PMCID: PMC9971986 DOI: 10.3389/fmed.2023.1145522] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Accepted: 02/01/2023] [Indexed: 02/16/2023] Open
Affiliation(s)
- Sahil Thakur
- Department of Ocular Epidemiology, Singapore Eye Research Institute, Singapore, Singapore
| | - Tyler Hyungtaek Rim
- Department of Ocular Epidemiology, Singapore Eye Research Institute, Singapore, Singapore; Mediwhale Inc., Seoul, Republic of Korea
| | - Darren S. J. Ting
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, United Kingdom; Birmingham and Midland Eye Centre, Birmingham, United Kingdom; Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, United Kingdom
| | - Yi-Ting Hsieh
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan; Department of Ophthalmology, College of Medicine, National Taiwan University, Taipei, Taiwan
| | - Tae-im Kim
- Department of Ophthalmology, The Institute of Vision Research, Yonsei University College of Medicine, Seoul, Republic of Korea; Department of Ophthalmology, Corneal Dystrophy Research Institute, Yonsei University College of Medicine, Seoul, Republic of Korea
- Correspondence: Tae-im Kim
| |
Collapse
|
40
|
Tseng RMWW, Rim TH, Shantsila E, Yi JK, Park S, Kim SS, Lee CJ, Thakur S, Nusinovici S, Peng Q, Kim H, Lee G, Yu M, Tham YC, Bakhai A, Leeson P, Lip GYH, Wong TY, Cheng CY. Validation of a deep-learning-based retinal biomarker (Reti-CVD) in the prediction of cardiovascular disease: data from UK Biobank. BMC Med 2023; 21:28. [PMID: 36691041 PMCID: PMC9872417 DOI: 10.1186/s12916-022-02684-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Accepted: 11/28/2022] [Indexed: 01/25/2023] Open
Abstract
BACKGROUND Currently in the United Kingdom, cardiovascular disease (CVD) risk assessment is based on the QRISK3 score, in which a 10% 10-year CVD risk indicates the need for clinical intervention. However, this benchmark has limited efficacy in clinical practice, and a simpler, non-invasive risk stratification tool is needed. Retinal photography is becoming increasingly acceptable as a non-invasive imaging tool for CVD. Previously, we developed a novel CVD risk stratification system based on retinal photographs that predicts future CVD risk. This study aims to further validate our biomarker, Reti-CVD, using the UK Biobank: (1) to detect the risk group with ≥ 10% 10-year CVD risk and (2) to enhance risk assessment in individuals with a QRISK3 of 7.5-10% (termed the borderline-QRISK3 group). METHODS Reti-CVD scores were calculated and stratified into three risk groups based on optimized cut-off values from the UK Biobank. We used Cox proportional-hazards models to evaluate the ability of Reti-CVD to predict CVD events in the general population. C-statistics were used to assess the prognostic value of adding Reti-CVD to QRISK3 in the borderline-QRISK3 group and three vulnerable subgroups. RESULTS Among 48,260 participants with no history of CVD, 6.3% had CVD events during the 11-year follow-up. Reti-CVD was associated with an increased risk of CVD (adjusted hazard ratio [HR] 1.41; 95% confidence interval [CI], 1.30-1.52), with a 13.1% (95% CI, 11.7-14.6%) 10-year CVD risk in the Reti-CVD-high-risk group. The 10-year CVD risk of the borderline-QRISK3 group was greater than 10% in the Reti-CVD-high-risk group (11.5% in the non-statin cohort [n = 45,473], 11.5% in the stage 1 hypertension cohort [n = 11,966], and 14.2% in the middle-aged cohort [n = 38,941]). C-statistics increased by 0.014 (0.010-0.017) in the non-statin cohort, 0.013 (0.007-0.019) in the stage 1 hypertension cohort, and 0.023 (0.018-0.029) in the middle-aged cohort for CVD event prediction after adding Reti-CVD to QRISK3. CONCLUSIONS Reti-CVD has the potential to identify individuals with ≥ 10% 10-year CVD risk who are likely to benefit from earlier preventative CVD interventions. For borderline-QRISK3 individuals with a 10-year CVD risk between 7.5 and 10%, Reti-CVD could be used as a risk-enhancer tool to help improve discernment accuracy, especially in adult groups that may be predisposed to CVD.
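The survival-modelling step described above can be sketched in a few lines. The snippet below is an illustration only, not the authors' code: the synthetic data, the variable names (qrisk3, reti_cvd) and the effect sizes are assumptions, and it simply shows how a Cox proportional-hazards fit (lifelines) and a C-statistic comparison with and without an added retinal score would be set up.

```python
# Illustration only (not the authors' code): fit Cox proportional-hazards models
# and compare C-statistics with and without an added retinal risk score.
# The synthetic data, variable names (qrisk3, reti_cvd) and effect sizes are assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "qrisk3": rng.uniform(0.0, 0.25, n),    # baseline 10-year risk score
    "reti_cvd": rng.normal(0.0, 1.0, n),    # hypothetical retinal biomarker
})
# Simulate follow-up times whose hazard depends on both scores, censored at 11 years.
hazard = np.exp(3.0 * df["qrisk3"] + 0.3 * df["reti_cvd"])
df["time"] = rng.exponential(scale=10.0 / hazard)
df["event"] = (df["time"] < 11.0).astype(int)
df["time"] = df["time"].clip(upper=11.0)

base = CoxPHFitter().fit(df[["qrisk3", "time", "event"]], "time", "event")
full = CoxPHFitter().fit(df[["qrisk3", "reti_cvd", "time", "event"]], "time", "event")

c_base = concordance_index(df["time"], -base.predict_partial_hazard(df), df["event"])
c_full = concordance_index(df["time"], -full.predict_partial_hazard(df), df["event"])
print(f"C-statistic: baseline {c_base:.3f} -> with retinal score {c_full:.3f}")
print(full.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```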
Collapse
Affiliation(s)
- Rachel Marjorie Wei Wen Tseng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
| | - Tyler Hyungtaek Rim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore.
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore.
- Mediwhale Inc., Seoul, South Korea.
| | - Eduard Shantsila
- Department of Primary Care and Mental Health, University of Liverpool, Liverpool, UK
| | - Joseph K Yi
- Albert Einstein College of Medicine, New York, NY, USA
| | - Sungha Park
- Division of Cardiology, Severance Cardiovascular Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Sung Soo Kim
- Division of Retina, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Chan Joo Lee
- Division of Cardiology, Severance Cardiovascular Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Sahil Thakur
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Simon Nusinovici
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Qingsheng Peng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Clinical and Translational Sciences Program, Duke-NUS Medical School, Singapore, Singapore
| | | | | | - Marco Yu
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Center for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Ameet Bakhai
- Royal Free Hospital London NHS Foundation Trust, London, UK
- Cardiology Department, Barnet General Hospital, Thames House, Enfield, UK
| | - Paul Leeson
- Cardiovascular Clinical Research Facility, RDM Division of Cardiovascular Medicine, University of Oxford, Oxford, UK
| | - Gregory Y H Lip
- Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool John Moores University and Liverpool Heart & Chest Hospital, Liverpool, United Kingdom; and Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
| | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Tsinghua Medicine, Tsinghua University, Beijing, China
| | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Center for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| |
Collapse
|
41
|
Barriada RG, Masip D. An Overview of Deep-Learning-Based Methods for Cardiovascular Risk Assessment with Retinal Images. Diagnostics (Basel) 2022; 13:diagnostics13010068. [PMID: 36611360 PMCID: PMC9818382 DOI: 10.3390/diagnostics13010068] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 12/19/2022] [Accepted: 12/21/2022] [Indexed: 12/28/2022] Open
Abstract
Cardiovascular diseases (CVDs) are one of the most prevalent causes of premature death. Early detection is crucial to prevent and address CVDs in a timely manner. Recent advances in oculomics show that retina fundus imaging (RFI) can carry relevant information for the early diagnosis of several systemic diseases. There is a large corpus of RFI systematically acquired for diagnosing eye-related diseases that could be used for CVD prevention. Nevertheless, public health systems cannot afford to dedicate expert physicians solely to this data, posing the need for automated diagnosis tools that can raise alarms for patients at risk. Artificial intelligence (AI) and, particularly, deep learning (DL) models have become a strong alternative for providing computerized pre-diagnosis for patient risk retrieval. This paper provides a novel review of the major achievements of recent state-of-the-art DL approaches to automated CVD diagnosis. The overview gathers commonly used datasets, pre-processing techniques, evaluation metrics and deep learning approaches used in 30 different studies. Based on the reviewed articles, this work proposes a classification taxonomy depending on the prediction target and summarizes future research challenges that have to be tackled to progress in this line of research.
Collapse
|
42
|
Application of Deep Learning to Retinal-Image-Based Oculomics for Evaluation of Systemic Health: A Review. J Clin Med 2022; 12:jcm12010152. [PMID: 36614953 PMCID: PMC9821402 DOI: 10.3390/jcm12010152] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Revised: 12/17/2022] [Accepted: 12/22/2022] [Indexed: 12/28/2022] Open
Abstract
The retina is a window to the human body. Oculomics is the study of the correlations between ophthalmic biomarkers and systemic health or disease states. Deep learning (DL) is currently the cutting-edge machine learning technique for medical image analysis, and in recent years, DL techniques have been applied to analyze retinal images in oculomics studies. In this review, we summarized oculomics studies that used DL models to analyze retinal images; most of the published studies to date involved color fundus photographs, while others focused on optical coherence tomography images. These studies showed that some systemic variables, such as age, sex and cardiovascular disease events, could be consistently and robustly predicted, while other variables, such as thyroid function and blood cell count, could not be. DL-based oculomics has demonstrated fascinating, "super-human" predictive capabilities in certain contexts, but it remains to be seen how these models will be incorporated into clinical care and whether management decisions influenced by these models will lead to improved clinical outcomes.
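As a concrete illustration of the typical retinal-image-based oculomics setup summarized above, the sketch below fine-tunes a standard CNN to regress a systemic variable (age) from a fundus photograph. It is a minimal, assumption-laden example rather than code from any of the reviewed studies; in practice pretrained weights, large labelled datasets and careful validation are required.

```python
# Minimal sketch, not code from any reviewed study: fine-tune a standard CNN to
# regress a systemic variable (age) from a fundus photograph. The architecture,
# input size and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

class FundusAgeRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # ResNet-50 backbone; in practice ImageNet-pretrained weights
        # (models.ResNet50_Weights.DEFAULT) would normally be loaded.
        self.backbone = models.resnet50(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)  # single regression output

    def forward(self, x):                 # x: (batch, 3, 224, 224) fundus photographs
        return self.backbone(x).squeeze(1)

model = FundusAgeRegressor()
loss_fn = nn.L1Loss()                     # mean absolute error in years, commonly reported
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(4, 3, 224, 224)      # placeholder batch; real input is fundus photos
ages = torch.tensor([54.0, 61.0, 47.0, 70.0])
optimizer.zero_grad()
loss = loss_fn(model(images), ages)
loss.backward()
optimizer.step()
print(f"MAE on this (random) batch: {loss.item():.1f} years")
```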
Collapse
|
43
|
Vujosevic S, Limoli C, Luzi L, Nucci P. Digital innovations for retinal care in diabetic retinopathy. Acta Diabetol 2022; 59:1521-1530. [PMID: 35962258 PMCID: PMC9374293 DOI: 10.1007/s00592-022-01941-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/03/2022] [Accepted: 07/04/2022] [Indexed: 12/02/2022]
Abstract
AIM The purpose of this review is to examine the applications of novel digital technology domains for the screening and management of patients with diabetic retinopathy (DR). METHODS A PubMed engine search was performed, using the terms "Telemedicine", "Digital health", "Telehealth", "Telescreening", "Artificial intelligence", "Deep learning", "Smartphone", "Triage", "Screening", "Home-based", "Monitoring", "Ophthalmology", "Diabetes", "Diabetic Retinopathy", "Retinal imaging". Full-text English-language studies from January 1, 2010, to February 1, 2022, and reference lists were considered for the conceptual framework of this review. RESULTS Diabetes mellitus and its eye complications, including DR, are particularly well suited to digital technologies, providing an ideal model for telehealth initiatives and real-world applications. Current developments in the adoption of telemedicine, artificial intelligence and remote monitoring as alternatives or additions to traditional forms of care are discussed. CONCLUSIONS Advances in digital health have created an ecosystem ripe for telemedicine in the field of DR to thrive. Stakeholders and policymakers should adopt a participatory approach to ensure sustained implementation of these technologies after the COVID-19 pandemic. This article belongs to the Topical Collection "Diabetic Eye Disease", managed by Giuseppe Querques.
Collapse
Affiliation(s)
- Stela Vujosevic
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy.
- Eye Clinic, IRCCS MultiMedica, Via San Vittore 12, 20123, Milan, Italy.
| | - Celeste Limoli
- Eye Clinic, IRCCS MultiMedica, Via San Vittore 12, 20123, Milan, Italy
- University of Milan, Milan, Italy
| | - Livio Luzi
- Department of Biomedical Sciences for Health, University of Milan, Milan, Italy
- Department of Endocrinology, Nutrition and Metabolic Diseases, IRCCS MultiMedica, Milan, Italy
| | - Paolo Nucci
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy
| |
Collapse
|
44
|
Iqbal S, Khan TM, Naveed K, Naqvi SS, Nawaz SJ. Recent trends and advances in fundus image analysis: A review. Comput Biol Med 2022; 151:106277. [PMID: 36370579 DOI: 10.1016/j.compbiomed.2022.106277] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 10/19/2022] [Accepted: 10/30/2022] [Indexed: 11/05/2022]
Abstract
Automated retinal image analysis holds prime significance in the accurate diagnosis of various critical eye diseases that include diabetic retinopathy (DR), age-related macular degeneration (AMD), atherosclerosis, and glaucoma. Manual diagnosis of retinal diseases by ophthalmologists takes time, effort, and financial resources, and is prone to error, in comparison to computer-aided diagnosis systems. In this context, robust classification and segmentation of retinal images are primary operations that aid clinicians in the early screening of patients to ensure the prevention and/or treatment of these diseases. This paper conducts an extensive review of the state-of-the-art methods for the detection and segmentation of retinal image features. Existing notable techniques for the detection of retinal features are categorized into essential groups and compared in depth. Additionally, a summary of quantifiable performance measures for various important stages of retinal image analysis, such as image acquisition and preprocessing, is provided. Finally, the datasets widely used in the literature for analyzing retinal images are described and their significance is emphasized.
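The preprocessing stage mentioned above is usually the first step of fundus analysis. The sketch below shows a common, non-learned baseline (green-channel extraction, CLAHE contrast enhancement and black-hat morphology for vessel enhancement) using OpenCV; the input array is a synthetic stand-in and the parameter values are assumptions, not recommendations from the review.

```python
# Common non-learned fundus preprocessing baseline: green-channel extraction,
# CLAHE contrast enhancement and black-hat morphology for vessel enhancement.
# The input is a synthetic stand-in array; parameter values are illustrative assumptions.
import numpy as np
import cv2

fundus_bgr = np.random.randint(0, 256, (584, 565, 3), dtype=np.uint8)  # stand-in image

green = fundus_bgr[:, :, 1]                      # vessels show highest contrast in green
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)

# Black-hat morphology highlights thin dark structures (vessels) on a bright background.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
vessel_map = cv2.morphologyEx(enhanced, cv2.MORPH_BLACKHAT, kernel)
_, vessel_mask = cv2.threshold(vessel_map, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("candidate vessel pixels:", int((vessel_mask > 0).sum()))
```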
Collapse
Affiliation(s)
- Shahzaib Iqbal
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
| | - Tariq M Khan
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia.
| | - Khuram Naveed
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan; Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
| | - Syed S Naqvi
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
| | - Syed Junaid Nawaz
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
| |
Collapse
|
45
|
Yang J, Wu S, Dai R, Yu W, Chen Y. Publication trends of artificial intelligence in retina in 10 years: Where do we stand? Front Med (Lausanne) 2022; 9:1001673. [PMID: 36405613 PMCID: PMC9666394 DOI: 10.3389/fmed.2022.1001673] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2022] [Accepted: 09/20/2022] [Indexed: 11/25/2022] Open
Abstract
PURPOSE Artificial intelligence (AI) has been applied in the field of retina. The purpose of this study was to analyze publication trends in retinal AI research and to identify the journals, countries, authors, international collaborations, and keywords involved. MATERIALS AND METHODS A cross-sectional, bibliometric study. Bibliometric methods were used to evaluate global production and development trends in retinal AI research since 2012 using the Web of Science Core Collection. RESULTS A total of 599 publications were ultimately retrieved. AI in retina is a very attractive topic in the scientific and medical community, although no journal was found to specialize in it. The USA, China, and India were the three most productive countries, and authors from Austria, Singapore, and England also had worldwide academic influence. China showed the most rapid increase in publication numbers, and international collaboration could increase influence in this field. Keyword analysis revealed that diabetic retinopathy, optical coherence tomography for multiple diseases, and algorithms were three popular topics in the field. Most top journals and top publications on AI in retina focused on engineering and computing rather than medicine. CONCLUSION These results help clarify the current status and future trends of AI research in retina. This study may give clinicians and scientists a general overview of the field and a better understanding of its main actors (authors, journals, and countries). Future research should focus on more retinal diseases, multimodal imaging, and the performance of AI models in real-world clinical applications. Collaboration among countries and institutions is common in current research on AI in retina.
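The bibliometric tallies described above (publication trends, productive countries, popular keywords) reduce to simple counting once records are exported. The toy sketch below uses invented placeholder records and pandas purely to illustrate the idea; the study itself worked from Web of Science Core Collection exports.

```python
# Toy illustration of bibliometric tallies; the records are invented placeholders,
# not data from the study's Web of Science Core Collection export.
from collections import Counter
import pandas as pd

records = pd.DataFrame({
    "year": [2018, 2019, 2019, 2020, 2021, 2021],
    "country": ["USA", "China", "India", "China", "USA", "Singapore"],
    "keywords": [
        "diabetic retinopathy; deep learning",
        "optical coherence tomography; algorithm",
        "diabetic retinopathy; screening",
        "algorithm; deep learning",
        "optical coherence tomography; diabetic retinopathy",
        "deep learning; retina",
    ],
})

print(records.groupby("year").size())        # publication trend by year
print(records["country"].value_counts())     # most productive countries
keyword_counts = Counter(kw.strip() for cell in records["keywords"] for kw in cell.split(";"))
print(keyword_counts.most_common(3))         # popular topics
```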
Collapse
Affiliation(s)
- Jingyuan Yang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Shan Wu
- Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
| | - Rongping Dai
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Correspondence: Youxin Chen
| |
Collapse
|
46
|
Cheung CY, Ran AR, Wang S, Chan VTT, Sham K, Hilal S, Venketasubramanian N, Cheng CY, Sabanayagam C, Tham YC, Schmetterer L, McKay GJ, Williams MA, Wong A, Au LWC, Lu Z, Yam JC, Tham CC, Chen JJ, Dumitrascu OM, Heng PA, Kwok TCY, Mok VCT, Milea D, Chen CLH, Wong TY. A deep learning model for detection of Alzheimer's disease based on retinal photographs: a retrospective, multicentre case-control study. Lancet Digit Health 2022; 4:e806-e815. [PMID: 36192349 DOI: 10.1016/s2589-7500(22)00169-8] [Citation(s) in RCA: 52] [Impact Index Per Article: 26.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Revised: 08/12/2022] [Accepted: 08/19/2022] [Indexed: 10/14/2022]
Abstract
BACKGROUND There is no simple model to screen for Alzheimer's disease, partly because the diagnosis of Alzheimer's disease itself is complex, typically involving expensive and sometimes invasive tests not commonly available outside highly specialised clinical settings. We aimed to develop a deep learning algorithm that could detect Alzheimer's disease-dementia using retinal photographs alone, the most common non-invasive method of imaging the retina. METHODS In this retrospective, multicentre case-control study, we trained, validated, and tested a deep learning algorithm to detect Alzheimer's disease-dementia from retinal photographs using retrospectively collected data from 11 studies that recruited patients with Alzheimer's disease-dementia and people without the disease from different countries. Our main aim was to develop a bilateral model to detect Alzheimer's disease-dementia from retinal photographs alone. We designed and internally validated the bilateral deep learning model using retinal photographs from six studies. We used the EfficientNet-b2 network as the backbone of the model to extract features from the images. Integrated features from four retinal photographs (optic nerve head-centred and macula-centred fields from both eyes) for each individual were used to develop supervised deep learning models, and the network was equipped with an unsupervised domain adaptation technique to address dataset discrepancy between the different studies. We tested the trained model using five other studies, three of which used PET as a biomarker of significant amyloid β burden (testing the deep learning model on amyloid β positive vs amyloid β negative participants). FINDINGS 12 949 retinal photographs from 648 patients with Alzheimer's disease and 3240 people without the disease were used to train, validate, and test the deep learning model. In the internal validation dataset, the deep learning model had 83·6% (SD 2·5) accuracy, 93·2% (SD 2·2) sensitivity, 82·0% (SD 3·1) specificity, and an area under the receiver operating characteristic curve (AUROC) of 0·93 (0·01) for detecting Alzheimer's disease-dementia. In the testing datasets, the bilateral deep learning model had accuracies ranging from 79·6% (SD 15·5) to 92·1% (11·4) and AUROCs ranging from 0·73 (SD 0·24) to 0·91 (0·10). In the datasets with data on PET, the model was able to differentiate between participants who were amyloid β positive and those who were amyloid β negative: accuracies ranged from 80·6% (SD 13·4) to 89·3% (13·7) and AUROCs ranged from 0·68 (SD 0·24) to 0·86 (0·16). In subgroup analyses, the discriminative performance of the model was improved in patients with eye disease (accuracy 89·6% [SD 12·5]) versus those without eye disease (71·7% [11·6]) and in patients with diabetes (81·9% [SD 20·3]) versus those without the disease (72·4% [11·7]). INTERPRETATION A retinal photograph-based deep learning algorithm can detect Alzheimer's disease with good accuracy, showing its potential for screening Alzheimer's disease in a community setting. FUNDING BrightFocus Foundation.
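A rough sketch of the bilateral architecture described above follows. It is not the published model: the EfficientNet-b2 backbone, the four-field input and the feature concatenation follow the abstract, but the hyperparameters are assumptions and the unsupervised domain-adaptation component is omitted.

```python
# Rough sketch of a bilateral four-field model (not the published implementation):
# an EfficientNet-b2 backbone extracts features from optic-disc- and macula-centred
# photographs of both eyes, which are concatenated before a binary classifier.
# Hyperparameters are assumptions; the domain-adaptation component is omitted.
import torch
import torch.nn as nn
from torchvision import models

class BilateralRetinaNet(nn.Module):
    def __init__(self):
        super().__init__()
        base = models.efficientnet_b2(weights=None)   # pretrained weights would normally be used
        self.features = base.features                 # backbone shared across the four fields
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = base.classifier[1].in_features     # 1408 for EfficientNet-b2
        self.classifier = nn.Sequential(
            nn.Dropout(0.3),
            nn.Linear(feat_dim * 4, 1),               # integrated features from 4 photographs
        )

    def forward(self, fields):                        # fields: (batch, 4, 3, H, W)
        feats = [self.pool(self.features(fields[:, i])).flatten(1) for i in range(4)]
        return self.classifier(torch.cat(feats, dim=1)).squeeze(1)  # one logit per person

model = BilateralRetinaNet()
photos = torch.randn(2, 4, 3, 288, 288)               # two participants, four fields each
print(torch.sigmoid(model(photos)))                   # predicted probability of AD-dementia
```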
Collapse
Affiliation(s)
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China.
| | - An Ran Ran
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
| | - Shujun Wang
- Department of Computer Science and Engineering, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
| | - Victor T T Chan
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China; Department of Ophthalmology and Visual Sciences, Prince of Wales Hospital, Hong Kong Special Administrative Region, China
| | - Kaiser Sham
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
| | - Saima Hilal
- Memory Aging & Cognition Centre, National University Health System, Singapore; Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, Singapore
| | | | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore
| | - Charumathi Sabanayagam
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore
| | - Yih Chung Tham
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore
| | - Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Singapore Eye Research Institute, Advanced Ocular Engineering and School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
| | - Gareth J McKay
- Centre for Public Health, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
| | | | - Adrian Wong
- Gerald Choa Neuroscience Institute, Therese Pei Fong Chow Research Centre for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Division of Neurology, Department of Medicine and Therapeutics, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
| | - Lisa W C Au
- Gerald Choa Neuroscience Institute, Therese Pei Fong Chow Research Centre for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Division of Neurology, Department of Medicine and Therapeutics, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
| | - Zhihui Lu
- Jockey Club Centre for Osteoporosis Care and Control, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China; Department of Medicine and Therapeutics, Faculty of Medicine, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
| | - Jason C Yam
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
| | - Clement C Tham
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
| | - John J Chen
- Department of Ophthalmology and Department of Neurology, Mayo Clinic, Rochester, MN, USA
| | - Oana M Dumitrascu
- Department of Neurology and Department of Ophthalmology, Division of Cerebrovascular Diseases, Mayo Clinic College of Medicine and Science, Scottsdale, AZ, USA
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
| | - Timothy C Y Kwok
- Jockey Club Centre for Osteoporosis Care and Control, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China; Department of Medicine and Therapeutics, Faculty of Medicine, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
| | - Vincent C T Mok
- Gerald Choa Neuroscience Institute, Therese Pei Fong Chow Research Centre for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Division of Neurology, Department of Medicine and Therapeutics, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
| | - Dan Milea
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore
| | - Christopher Li-Hsian Chen
- Memory Aging & Cognition Centre, National University Health System, Singapore; Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China
| |
Collapse
|
47
|
Shokr H, Lush V, Dias IH, Ekárt A, De Moraes G, Gherghel D. The Use of Retinal Microvascular Function and Telomere Length in Age and Blood Pressure Prediction in Individuals with Low Cardiovascular Risk. Cells 2022; 11:3037. [PMID: 36230999 PMCID: PMC9563868 DOI: 10.3390/cells11193037] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Revised: 09/24/2022] [Accepted: 09/26/2022] [Indexed: 12/16/2022] Open
Abstract
Ageing represents a major risk factor for many pathologies that limit human lifespan, including cardiovascular diseases. Biological ageing is a good biomarker for assessing early individual risk for CVD. However, finding good measurements of biological ageing is an ongoing quest. This study aims to assess the use of retinal microvascular function, separately or in combination with telomere length, as a predictor of age and systemic blood pressure in individuals with low cardiovascular risk. In all, 123 healthy participants with low cardiovascular risk were recruited and divided into three groups: group 1 (less than 30 years old), group 2 (31-50 years old) and group 3 (over 50 years old). Relative telomere length (RTL), parameters of retinal microvascular function, CVD circulatory markers and blood pressure (BP) were measured in all individuals. Symbolic regression analysis was used to infer chronological age and systemic BP measurements using either RTL or a combination of RTL and parameters of retinal microvascular function. RTL decreased significantly with age (p = 0.010). There were also age-related differences between the study groups in retinal arterial time to maximum dilation (p = 0.005), maximum constriction (p = 0.007) and maximum constriction percentage (p = 0.010). In the youngest participants, the errors between predicted and actual chronological age were smallest when using retinal vascular function parameters alone (p = 0.039) or in combination with RTL (p = 0.0045). Systolic BP was better predicted by RTL, again only in younger individuals (p = 0.043). The assessment of retinal arterial vascular function is a better predictor than RTL for non-modifiable variables such as age, but only in younger individuals. In the same age group, RTL is better than microvascular function when inferring modifiable risk factors for CVDs. In older individuals, the accumulation of physiological and structural biological changes makes such predictions unreliable.
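Symbolic regression, as used above, searches for an explicit formula relating predictors to the target. The sketch below runs gplearn's SymbolicRegressor on synthetic data; the feature names mirror the abstract (RTL and retinal arterial reaction parameters), but the values, coefficients and resulting expression are invented for illustration.

```python
# Illustration only: a gplearn symbolic-regression fit on synthetic data. Feature
# names mirror the abstract (RTL, retinal arterial reaction parameters), but the
# values and the evolved expression are invented, not results from the study.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(1)
n = 123
rtl = rng.normal(1.0, 0.2, n)                       # relative telomere length
time_to_max_dilation = rng.normal(15.0, 3.0, n)     # retinal arterial reaction parameters
max_constriction_pct = rng.normal(10.0, 2.5, n)
X = np.column_stack([rtl, time_to_max_dilation, max_constriction_pct])
age = 70 - 25 * rtl + 0.8 * time_to_max_dilation + rng.normal(0, 3, n)  # synthetic target

est = SymbolicRegressor(population_size=500, generations=10,
                        function_set=("add", "sub", "mul", "div"),
                        parsimony_coefficient=0.01, random_state=1)
est.fit(X, age)
print(est._program)                                  # evolved closed-form expression
print("mean absolute error:", round(float(np.abs(est.predict(X) - age).mean()), 2))
```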
Collapse
Affiliation(s)
- Hala Shokr
- Vascular Research Laboratory, College of Health and Life Sciences, Aston University, Birmingham B4 7ET, UK
- Pharmacy Division, Faculty of Biology, Medicine and Health, University of Manchester, Manchester M13 9PL, UK
| | - Victoria Lush
- Computer Science, School of Informatics and Digital Engineering, College of Engineering and Physical Sciences, Aston University, Birmingham B4 7ET, UK
| | - Irundika Hk Dias
- Aston Medical School, College of Health and Life Sciences, Aston University, Birmingham B4 7ET, UK
| | - Anikó Ekárt
- Computer Science, School of Informatics and Digital Engineering, College of Engineering and Physical Sciences, Aston University, Birmingham B4 7ET, UK
| | - Gustavo De Moraes
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY 10032, USA
| | - Doina Gherghel
- Vascular Research Laboratory, College of Health and Life Sciences, Aston University, Birmingham B4 7ET, UK
- Division of Cardiovascular Sciences, University of Manchester, Manchester M13 9PL, UK
| |
Collapse
|
48
|
Patil AD, Biousse V, Newman NJ. Artificial intelligence in ophthalmology: an insight into neurodegenerative disease. Curr Opin Ophthalmol 2022; 33:432-439. [PMID: 35819902 DOI: 10.1097/icu.0000000000000877] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE OF REVIEW The aging world population accounts for the increasing prevalence of neurodegenerative diseases such as Alzheimer's and Parkinson's, which carry a significant health and economic burden. There is therefore a need for sensitive and specific noninvasive biomarkers for early diagnosis and monitoring. Advances in retinal and optic nerve multimodal imaging as well as the development of artificial intelligence deep learning systems (AI-DLS) have heralded a number of promising advances of which ophthalmologists are at the forefront. RECENT FINDINGS The association among retinal vascular, nerve fiber layer, and macular findings in neurodegenerative disease is well established. In order to optimize the use of these ophthalmic parameters as biomarkers, validated AI-DLS are required to ensure clinical efficacy and reliability. Varied image acquisition methods and protocols as well as variability in neurodegenerative disease diagnosis compromise the robustness of the ground truths that are paramount to developing high-quality training datasets. SUMMARY In order to produce effective AI-DLS for the diagnosis and monitoring of neurodegenerative disease, multicenter international collaboration is required to prospectively produce large inclusive datasets, acquired through standardized methods and protocols. With a uniform approach, the efficacy of the resultant clinical applications will be maximized.
Collapse
Affiliation(s)
| | | | - Nancy J Newman
- Department of Ophthalmology
- Department of Neurology
- Department of Neurological Surgery, Emory University School of Medicine, Atlanta, Georgia, USA
| |
Collapse
|
49
|
Kim BR, Yoo TK, Kim HK, Ryu IH, Kim JK, Lee IS, Kim JS, Shin DH, Kim YS, Kim BT. Oculomics for sarcopenia prediction: a machine learning approach toward predictive, preventive, and personalized medicine. EPMA J 2022; 13:367-382. [PMID: 36061832 PMCID: PMC9437169 DOI: 10.1007/s13167-022-00292-3] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Accepted: 07/25/2022] [Indexed: 12/08/2022]
Abstract
Aims Sarcopenia is characterized by a gradual loss of skeletal muscle mass and strength with increased adverse outcomes. Recently, large-scale epidemiological studies have demonstrated a relationship between several chronic disorders and ocular pathological conditions using an oculomics approach. We hypothesized that sarcopenia can be predicted through eye examinations, without invasive tests or radiologic evaluations in the context of predictive, preventive, and personalized medicine (PPPM/3PM). Methods We analyzed data from the Korean National Health and Nutrition Examination Survey (KNHANES). The training set (80%, randomly selected from 2008 to 2010) data were used to construct the machine learning models. Internal (20%, randomly selected from 2008 to 2010) and external (from the KNHANES 2011) validation sets were used to assess the ability to predict sarcopenia. We included 8092 participants in the final dataset. Machine learning models (XGBoost) were trained on ophthalmological examinations and demographic factors to detect sarcopenia. Results In the exploratory analysis, decreased levator function (odds ratio [OR], 1.41; P value <0.001), cataracts (OR, 1.31; P value = 0.013), and age-related macular degeneration (OR, 1.38; P value = 0.026) were associated with an increased risk of sarcopenia in men. In women, an increased risk of sarcopenia was associated with blepharoptosis (OR, 1.23; P value = 0.038) and cataracts (OR, 1.29; P value = 0.010). The XGBoost technique showed areas under the receiver operating characteristic curves (AUCs) of 0.746 and 0.762 in men and women, respectively. The external validation achieved AUCs of 0.751 and 0.785 for men and women, respectively. For practical and fast hands-on experience with the predictive model for practitioners who may be willing to test the whole idea of sarcopenia prediction based on oculomics data, we developed a simple web-based calculator application (https://knhanesoculomics.github.io/sarcopenia) to predict the risk of sarcopenia and facilitate screening, based on the model established in this study. Conclusion Sarcopenia is treatable before the vicious cycle of sarcopenia-related deterioration begins. Therefore, early identification of individuals at a high risk of sarcopenia is essential in the context of PPPM. Our oculomics-based approach provides an effective strategy for sarcopenia prediction. The proposed method shows promise in significantly increasing the number of patients diagnosed with sarcopenia, potentially facilitating earlier intervention. Through patient oculometric monitoring, various pathological factors related to sarcopenia can be simultaneously analyzed, and doctors can provide personalized medical services according to each cause. Further studies are needed to confirm whether such a prediction algorithm can be used in real-world clinical settings to improve the diagnosis of sarcopenia. Supplementary Information The online version contains supplementary material available at 10.1007/s13167-022-00292-3.
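The modelling approach described above (a gradient-boosted classifier on demographic and ophthalmic variables, evaluated by AUC) can be outlined as follows. This is a hypothetical sketch on synthetic data, not the authors' KNHANES pipeline, and the feature names are assumptions.

```python
# Hypothetical sketch (not the authors' KNHANES pipeline): an XGBoost classifier
# on synthetic demographic and ophthalmic variables, evaluated by AUC.
# Feature names and the label-generating rule are assumptions.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 8092
X = pd.DataFrame({
    "age": rng.integers(40, 80, n),
    "sex_male": rng.integers(0, 2, n),
    "levator_function_mm": rng.normal(13, 2, n),
    "cataract": rng.integers(0, 2, n),
    "blepharoptosis": rng.integers(0, 2, n),
})
logit = 0.05 * (X["age"] - 60) - 0.2 * (X["levator_function_mm"] - 13) + 0.4 * X["cataract"]
y = (rng.random(n) < 1 / (1 + np.exp(-(logit - 1.5)))).astype(int)   # synthetic sarcopenia label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("test AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```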
Collapse
Affiliation(s)
- Bo Ram Kim
- Department of Ophthalmology, Hangil Eye Hospital, Incheon, Republic of Korea
| | - Tae Keun Yoo
- B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, Republic of Korea
- VISUWORKS, Seoul, Republic of Korea
| | - Hong Kyu Kim
- Department of Ophthalmology, Dankook University College of Medicine, Dankook University Hospital, Cheonan, Republic of Korea
| | - Ik Hee Ryu
- B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, Republic of Korea
- VISUWORKS, Seoul, Republic of Korea
| | - Jin Kuk Kim
- B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, Republic of Korea
- VISUWORKS, Seoul, Republic of Korea
| | - In Sik Lee
- B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, Republic of Korea
| | | | | | - Young-Sang Kim
- Department of Family Medicine, CHA Bundang Medical Centre, CHA University, Seongnam, Republic of Korea
| | - Bom Taeck Kim
- Department of Family Practice & Community Health, Ajou University School of Medicine, Suwon, Gyeonggi-do 16499 Republic of Korea
| |
Collapse
|
50
|
Wong DYL, Lam MC, Ran A, Cheung CY. Artificial intelligence in retinal imaging for cardiovascular disease prediction: current trends and future directions. Curr Opin Ophthalmol 2022; 33:440-446. [PMID: 35916571 DOI: 10.1097/icu.0000000000000886] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE OF REVIEW Retinal microvasculature assessment has shown promise to enhance cardiovascular disease (CVD) risk stratification. Integrating artificial intelligence into retinal microvasculature analysis may increase the screening capacity for CVD risk compared with risk score calculation that requires blood tests. This review summarizes recent advancements in artificial intelligence-based retinal photograph analysis for CVD prediction and outlines challenges and future prospects for translation into a clinical setting. RECENT FINDINGS Artificial intelligence-based retinal microvasculature analyses potentially predict CVD risk factors (e.g. blood pressure, diabetes), direct CVD events (e.g. CVD mortality), retinal features (e.g. retinal vessel calibre) and CVD biomarkers (e.g. coronary artery calcium score). However, challenges such as handling photographs with concurrent retinal diseases, limited diverse data from other populations or clinical settings, insufficient interpretability and generalizability, and concerns about cost-effectiveness and social acceptance may impede the dissemination of these artificial intelligence algorithms into clinical practice. SUMMARY Artificial intelligence-based retinal microvasculature analysis may supplement existing CVD risk stratification approaches. Although technical and socioeconomic challenges remain, we envision artificial intelligence-based microvasculature analysis to have major clinical and research impacts in the future, through screening for high-risk individuals, especially in less-developed areas, and identifying new retinal biomarkers for CVD research.
Collapse
Affiliation(s)
- Dragon Y L Wong
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
| | | | | | | |
Collapse
|