1
Syed MG, Trucco E, Mookiah MRK, Lang CC, McCrimmon RJ, Palmer CNA, Pearson ER, Doney ASF, Mordi IR. Deep-learning prediction of cardiovascular outcomes from routine retinal images in individuals with type 2 diabetes. Cardiovasc Diabetol 2025; 24:3. PMID: 39748380; PMCID: PMC11697721; DOI: 10.1186/s12933-024-02564-w.
Abstract
BACKGROUND Prior studies have demonstrated an association between retinal vascular features and cardiovascular disease (CVD); however, most studies have only evaluated a few simple parameters at a time. Our aim was to determine whether a deep-learning artificial intelligence (AI) model could be used to predict CVD outcomes from routinely obtained diabetic retinal screening photographs and to compare its performance to a traditional clinical CVD risk score. METHODS We included 6127 individuals with type 2 diabetes without myocardial infarction or stroke prior to study entry. The cohort was divided into training (70%), validation (10%) and testing (20%) cohorts. Clinical 10-year CVD risk was calculated using the pooled cohort equation (PCE) risk score. A polygenic risk score (PRS) for coronary heart disease was also obtained. Retinal images were analysed using an EfficientNet-B2 network to predict 10-year CVD risk. The primary outcome was time to first major adverse CV event (MACE), including CV death, myocardial infarction or stroke. RESULTS 1241 individuals were included in the test cohort (mean PCE 10-year CVD risk 35%). There was a strong correlation between retina-predicted CVD risk and the PCE risk score (r = 0.66) but not the polygenic risk score (r = 0.05). There were 288 MACE events. Higher retina-predicted risk was significantly associated with increased 10-year risk of MACE (HR 1.05 per 1% increase; 95% CI 1.04-1.06, p < 0.001) and remained so after adjustment for the PCE and polygenic risk score (HR 1.03; 95% CI 1.02-1.04, p < 0.001). The retinal risk score had similar performance to the PCE (both AUC 0.697), and when combined with the PCE and polygenic risk score it had significantly improved performance compared to the PCE alone (AUC 0.728). An increase in retina-predicted risk within 3 years was associated with a subsequently increased likelihood of MACE. CONCLUSIONS A deep-learning AI model could accurately predict MACE from routine retinal screening photographs with performance comparable to traditional clinical risk assessment in a diabetic cohort. Combining the AI-derived retinal risk prediction with a coronary heart disease polygenic risk score improved risk prediction. AI retinal assessment might allow a one-stop CVD risk assessment at routine retinal screening.
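Only the abstract is reproduced here, not the training code; the sketch below is an illustrative outline (not the authors' published model) of how an ImageNet-pretrained EfficientNet-B2 could be adapted to output a single 10-year risk estimate per fundus photograph. The torchvision weights API (v0.13+), the 288-pixel input size, and the sigmoid-to-percentage mapping are assumptions.

```python
# Illustrative sketch only: EfficientNet-B2 adapted to regress a CVD risk estimate
# from a fundus photograph (not the published model or its training protocol).
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

model = models.efficientnet_b2(weights=models.EfficientNet_B2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 1)  # 1408 -> 1 risk output

preprocess = transforms.Compose([
    transforms.Resize((288, 288)),                      # nominal EfficientNet-B2 input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],    # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def predict_risk(image_path: str) -> float:
    """Return a predicted 10-year CVD risk (%) for one retinal photograph."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        logit = model(x).squeeze()
    return torch.sigmoid(logit).item() * 100            # map logit to a 0-100% scale
```

In the study itself the network output was further related to time-to-MACE with Cox models; the sketch stops at the per-image prediction.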
Affiliation(s)
- Mohammad Ghouse Syed: VAMPIRE project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK
- Emanuele Trucco: VAMPIRE project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK
- Muthu R K Mookiah: VAMPIRE project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK
- Chim C Lang: Division of Cardiovascular Research, School of Medicine, University of Dundee, Dundee, DD1 9SY, UK; Tuanku Muhriz Royal Chair, National University of Malaysia, Bangi, Malaysia
- Rory J McCrimmon: Division of Systems Medicine, School of Medicine, University of Dundee, Dundee, UK
- Colin N A Palmer: Division of Population Health and Genomics, School of Medicine, University of Dundee, Dundee, UK
- Ewan R Pearson: Division of Population Health and Genomics, School of Medicine, University of Dundee, Dundee, UK
- Alex S F Doney: Division of Cardiovascular Research, School of Medicine, University of Dundee, Dundee, DD1 9SY, UK
- Ify R Mordi: Division of Cardiovascular Research, School of Medicine, University of Dundee, Dundee, DD1 9SY, UK
2
Squirrell DM, Yang S, Xie L, Ang S, Moghadam M, Vaghefi E, McConnell MV. Blood Pressure Predicted From Artificial Intelligence Analysis of Retinal Images Correlates With Future Cardiovascular Events. JACC Adv 2024; 3:101410. PMID: 39629061; PMCID: PMC11612377; DOI: 10.1016/j.jacadv.2024.101410.
Abstract
Background High systolic blood pressure (SBP) is one of the leading modifiable risk factors for premature cardiovascular death. The retinal vasculature exhibits well-documented adaptations to high SBP, and these vascular changes are known to correlate with atherosclerotic cardiovascular disease (ASCVD) events. Objectives The purpose of this study was to determine whether using artificial intelligence (AI) to predict an individual's SBP from retinal images would correlate more accurately with future ASCVD events than measured SBP. Methods We used 95,665 macula-centered retinal images from 51,778 UK Biobank participants who had not experienced an ASCVD event prior to retinal imaging. A deep-learning model was trained to predict an individual's SBP. The correlation of subsequent ASCVD events with the AI-predicted SBP and with the mean of the measured SBP acquired at the time of retinal imaging was determined and compared. Results The overall ASCVD event rate observed was 3.4%. The correlation between SBP and future ASCVD events was significantly higher when the AI-predicted SBP was used rather than the measured SBP (0.067 vs 0.049, P = 0.008). Measured SBP in the UK Biobank was variable (mean absolute difference = 8.2 mm Hg), which impacted the 10-year ASCVD risk score in 6% of the participants. Conclusions Given the variability and challenges of real-world SBP measurement, AI analysis of retinal images may provide a more reliable and accurate biomarker for predicting future ASCVD events than traditionally measured SBP.
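The headline result is a comparison of two correlation coefficients against the same binary outcome. A minimal sketch of that kind of comparison is shown below using point-biserial correlations and a paired participant-level bootstrap; the function and inputs are hypothetical and the study's exact statistical procedure may differ.

```python
# Illustrative comparison of two correlations with a shared binary outcome
# (paired bootstrap over participants); not the study's actual analysis code.
import numpy as np
from scipy.stats import pointbiserialr

def compare_sbp_correlations(events, sbp_predicted, sbp_measured,
                             n_boot=2000, seed=0):
    events = np.asarray(events)                            # 1 = ASCVD event, 0 = none
    sbp_predicted = np.asarray(sbp_predicted, dtype=float)
    sbp_measured = np.asarray(sbp_measured, dtype=float)
    r_pred = pointbiserialr(events, sbp_predicted)[0]
    r_meas = pointbiserialr(events, sbp_measured)[0]
    rng = np.random.default_rng(seed)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(events), len(events))    # resample participants
        diffs.append(pointbiserialr(events[idx], sbp_predicted[idx])[0]
                     - pointbiserialr(events[idx], sbp_measured[idx])[0])
    lo, hi = np.percentile(diffs, [2.5, 97.5])             # CI for the difference
    return r_pred, r_meas, (lo, hi)
```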
Affiliation(s)
- Song Yang: Division of Artificial Intelligence, Toku Eyes, Auckland, New Zealand
- Li Xie: Division of Artificial Intelligence, Toku Eyes, Auckland, New Zealand
- Songyang Ang: Division of Artificial Intelligence, Toku Eyes, Auckland, New Zealand
- Ehsan Vaghefi: Division of Artificial Intelligence, Toku Eyes, Auckland, New Zealand
- Michael V. McConnell: Division of Cardiovascular Medicine, Stanford University School of Medicine, Stanford, California, USA
3
Li LY, Isaksen AA, Lebiecka-Johansen B, Funck K, Thambawita V, Byberg S, Andersen TH, Norgaard O, Hulman A. Prediction of cardiovascular markers and diseases using retinal fundus images and deep learning: a systematic scoping review. Eur Heart J Digit Health 2024; 5:660-669. PMID: 39563905; PMCID: PMC11570365; DOI: 10.1093/ehjdh/ztae068.
Abstract
Rapid development in deep learning for image analysis inspired studies to focus on predicting cardiovascular risk using retinal fundus images. This scoping review aimed to identify and describe studies using retinal fundus images and deep learning to predict cardiovascular risk markers and diseases. We searched MEDLINE and Embase on 17 November 2023. Abstracts and relevant full-text articles were independently screened by two reviewers. We included studies that used deep learning for the analysis of retinal fundus images to predict cardiovascular risk markers or cardiovascular diseases (CVDs) and excluded studies only using predefined characteristics of retinal fundus images. Study characteristics were presented using descriptive statistics. We included 24 articles published between 2018 and 2023. Among these, 23 (96%) were cross-sectional studies and eight (33%) were follow-up studies with clinical CVD outcomes. Seven studies included a combination of both designs. Most studies (96%) used convolutional neural networks to process images. We found nine (38%) studies that incorporated clinical risk factors in the prediction and four (17%) that compared the results to commonly used clinical risk scores in a prospective setting. Three of these reported improved discriminative performance. External validation of models was rare (21%). There is increasing interest in using retinal fundus images in cardiovascular risk assessment with some studies demonstrating some improvements in prediction. However, more prospective studies, comparisons of results to clinical risk scores, and models augmented with traditional risk factors can strengthen further research in the field.
Affiliation(s)
- Livie Yumeng Li: Department of Public Health, Aarhus University, Bartholins Allé 2, 8000 Aarhus C, Denmark; Steno Diabetes Center Aarhus, Aarhus University Hospital, Palle Juul-Jensens Boulevard 11, 8200 Aarhus N, Denmark
- Anders Aasted Isaksen: Steno Diabetes Center Aarhus, Aarhus University Hospital, Palle Juul-Jensens Boulevard 11, 8200 Aarhus N, Denmark
- Benjamin Lebiecka-Johansen: Steno Diabetes Center Aarhus, Aarhus University Hospital, Palle Juul-Jensens Boulevard 11, 8200 Aarhus N, Denmark
- Kristian Funck: Steno Diabetes Center Aarhus, Aarhus University Hospital, Palle Juul-Jensens Boulevard 11, 8200 Aarhus N, Denmark
- Vajira Thambawita: Department of Holistic Systems, SimulaMet, Stensberggata 27, 0170 Oslo, Norway
- Stine Byberg: Clinical Epidemiological Research, Copenhagen University Hospital — Steno Diabetes Center Copenhagen, Borgmester Ib Juuls Vej 83, 2730 Herlev, Denmark
- Tue Helms Andersen: Department of Education, Danish Diabetes Knowledge Center, Copenhagen University Hospital — Steno Diabetes Center Copenhagen, Borgmester Ib Juuls Vej 83, 2730 Herlev, Denmark
- Ole Norgaard: Department of Education, Danish Diabetes Knowledge Center, Copenhagen University Hospital — Steno Diabetes Center Copenhagen, Borgmester Ib Juuls Vej 83, 2730 Herlev, Denmark
- Adam Hulman: Department of Public Health, Aarhus University, Bartholins Allé 2, 8000 Aarhus C, Denmark; Steno Diabetes Center Aarhus, Aarhus University Hospital, Palle Juul-Jensens Boulevard 11, 8200 Aarhus N, Denmark
4
Baharoon M, Almatar H, Alduhayan R, Aldebasi T, Alahmadi B, Bokhari Y, Alawad M, Almazroa A, Aljouie A. HyMNet: A Multimodal Deep Learning System for Hypertension Prediction Using Fundus Images and Cardiometabolic Risk Factors. Bioengineering (Basel) 2024; 11:1080. PMID: 39593740; PMCID: PMC11591283; DOI: 10.3390/bioengineering11111080.
Abstract
STUDY OBJECTIVES This study aimed to develop a multimodal deep learning (MMDL) system called HyMNet, integrating fundus images and cardiometabolic factors (age and sex) to enhance hypertension (HTN) detection. METHODS HyMNet employed RETFound, a model pretrained on 1.6 million retinal images, for the fundus data, in conjunction with a fully connected neural network for age and sex. The two pathways were trained jointly, with their feature vectors concatenated in a fusion network. The system was trained on 5016 retinal images from 1243 individuals provided by the Saudi Ministry of National Guard Health Affairs. The influence of diabetes on HTN detection was also assessed. RESULTS HyMNet surpassed the unimodal system, achieving an F1 score of 0.771 compared to 0.745 for the unimodal model. For diabetic patients, the F1 score was 0.796, while it was 0.466 for non-diabetic patients. CONCLUSIONS HyMNet exhibited superior performance relative to unimodal approaches, with an F1 score of 0.771 for HyMNet compared to 0.752 for models trained on demographic data alone, underscoring the advantages of MMDL systems in HTN detection. The findings indicate that diabetes significantly impacts HTN prediction, enhancing detection accuracy among diabetic patients. Utilizing MMDL with diverse data sources could improve clinical applicability and generalization.
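To make the two-pathway design concrete, here is a schematic late-fusion network in PyTorch. A generic ResNet-18 encoder stands in for the RETFound backbone, and all layer widths are illustrative assumptions rather than HyMNet's published architecture.

```python
# Schematic two-pathway fusion model for hypertension detection; the encoder and
# layer sizes are stand-ins, not the HyMNet implementation.
import torch
import torch.nn as nn
from torchvision import models

class FundusTabularFusion(nn.Module):
    def __init__(self, img_dim: int = 512, tab_dim: int = 32):
        super().__init__()
        backbone = models.resnet18(weights=None)                  # placeholder image encoder
        backbone.fc = nn.Linear(backbone.fc.in_features, img_dim)
        self.image_path = backbone
        self.tabular_path = nn.Sequential(                        # age + sex pathway
            nn.Linear(2, tab_dim), nn.ReLU(),
            nn.Linear(tab_dim, tab_dim), nn.ReLU(),
        )
        self.fusion = nn.Sequential(                              # joint head on concatenated features
            nn.Linear(img_dim + tab_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),                                     # hypertension logit
        )

    def forward(self, fundus: torch.Tensor, age_sex: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.image_path(fundus), self.tabular_path(age_sex)], dim=1)
        return self.fusion(feats)

model = FundusTabularFusion()
prob_htn = torch.sigmoid(model(torch.randn(4, 3, 224, 224), torch.randn(4, 2)))  # toy batch
```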
Affiliation(s)
- Mohammed Baharoon: AI and Bioinformatics Department, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences (KSAU-HS), Riyadh 11481, Saudi Arabia; Data Management Department, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences (KSAU-HS), Riyadh 11481, Saudi Arabia
- Hessa Almatar: AI and Bioinformatics Department, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences (KSAU-HS), Riyadh 11481, Saudi Arabia
- Reema Alduhayan: AI and Bioinformatics Department, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences (KSAU-HS), Riyadh 11481, Saudi Arabia
- Tariq Aldebasi: Ophthalmology Department, King Abdulaziz Medical City, Ministry of National Guard Health Affairs, Riyadh 14611, Saudi Arabia
- Badr Alahmadi: Ophthalmology Department, Prince Mohammad bin Abdulaziz Hospital, Ministry of National Guard Health Affairs, Al Madinah 42324, Saudi Arabia
- Yahya Bokhari: AI and Bioinformatics Department, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences (KSAU-HS), Riyadh 11481, Saudi Arabia; College of Public Health and Health Informatics, King Saud bin Abdulaziz University for Health Sciences (KSAU-HS), Riyadh 14815, Saudi Arabia
- Mohammed Alawad: National Center for Artificial Intelligence (NCAI), Saudi Data and Artificial Intelligence Authority (SDAIA), Riyadh 12382, Saudi Arabia
- Ahmed Almazroa: AI and Bioinformatics Department, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences (KSAU-HS), Riyadh 11481, Saudi Arabia
- Abdulrhman Aljouie: AI and Bioinformatics Department, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences (KSAU-HS), Riyadh 11481, Saudi Arabia; Data Management Department, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences (KSAU-HS), Riyadh 11481, Saudi Arabia; College of Public Health and Health Informatics, King Saud bin Abdulaziz University for Health Sciences (KSAU-HS), Riyadh 14815, Saudi Arabia
5
Ghenciu LA, Dima M, Stoicescu ER, Iacob R, Boru C, Hațegan OA. Retinal Imaging-Based Oculomics: Artificial Intelligence as a Tool in the Diagnosis of Cardiovascular and Metabolic Diseases. Biomedicines 2024; 12:2150. PMID: 39335664; PMCID: PMC11430496; DOI: 10.3390/biomedicines12092150.
Abstract
Cardiovascular diseases (CVDs) are a major cause of mortality globally, emphasizing the need for early detection and effective risk assessment to improve patient outcomes. Advances in oculomics, which utilize the relationship between retinal microvascular changes and systemic vascular health, offer a promising non-invasive approach to assessing CVD risk. Retinal fundus imaging and optical coherence tomography/angiography (OCT/OCTA) provide critical information for early diagnosis, with retinal vascular parameters such as vessel caliber, tortuosity, and branching patterns identified as key biomarkers. Given the large volume of data generated during routine eye exams, there is a growing need for automated tools to aid in diagnosis and risk prediction. The studies reviewed demonstrate that AI-driven analysis of retinal images can accurately predict cardiovascular risk factors, cardiovascular events, and metabolic diseases. These models achieved area under the curve (AUC) values ranging from 0.71 to 0.87, sensitivity between 71% and 89%, and specificity between 40% and 70%, surpassing traditional diagnostic methods in some cases. This approach highlights the potential of retinal imaging as a key component in personalized medicine, enabling more precise risk assessment and earlier intervention. It not only aids in detecting vascular abnormalities that may precede cardiovascular events but also offers a scalable, non-invasive, and cost-effective solution for widespread screening. However, the article also emphasizes the need for further research to standardize imaging protocols and validate the clinical utility of these biomarkers across different populations. By integrating oculomics into routine clinical practice, healthcare providers could significantly enhance early detection and management of systemic diseases, ultimately improving patient outcomes. Fundus image analysis thus represents a valuable tool in the future of precision medicine and cardiovascular health management.
Affiliation(s)
- Laura Andreea Ghenciu: Department of Functional Sciences, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania; Center for Translational Research and Systems Medicine, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania
- Mirabela Dima: Department of Neonatology, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania
- Emil Robert Stoicescu: Field of Applied Engineering Sciences, Specialization Statistical Methods and Techniques in Health and Clinical Research, Faculty of Mechanics, 'Politehnica' University Timisoara, Mihai Viteazul Boulevard No. 1, 300222 Timisoara, Romania; Department of Radiology and Medical Imaging, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania; Research Center for Pharmaco-Toxicological Evaluations, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square No. 2, 300041 Timisoara, Romania
- Roxana Iacob: Field of Applied Engineering Sciences, Specialization Statistical Methods and Techniques in Health and Clinical Research, Faculty of Mechanics, 'Politehnica' University Timisoara, Mihai Viteazul Boulevard No. 1, 300222 Timisoara, Romania; Doctoral School, 'Victor Babes' University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square 2, 300041 Timisoara, Romania; Department of Anatomy and Embryology, 'Victor Babes' University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania
- Casiana Boru: Discipline of Anatomy and Embryology, Medicine Faculty, 'Vasile Goldis' Western University of Arad, Revolution Boulevard 94, 310025 Arad, Romania
- Ovidiu Alin Hațegan: Discipline of Anatomy and Embryology, Medicine Faculty, 'Vasile Goldis' Western University of Arad, Revolution Boulevard 94, 310025 Arad, Romania
6
Ong J, Jang KJ, Baek SJ, Hu D, Lin V, Jang S, Thaler A, Sabbagh N, Saeed A, Kwon M, Kim JH, Lee S, Han YS, Zhao M, Sokolsky O, Lee I, Al-Aswad LA. Development of oculomics artificial intelligence for cardiovascular risk factors: A case study in fundus oculomics for HbA1c assessment and clinically relevant considerations for clinicians. Asia Pac J Ophthalmol (Phila) 2024; 13:100095. PMID: 39209216; DOI: 10.1016/j.apjo.2024.100095.
Abstract
Artificial Intelligence (AI) is transforming healthcare, notably in ophthalmology, where its ability to interpret images and data can significantly enhance disease diagnosis and patient care. Recent developments in oculomics, the integration of ophthalmic features to develop biomarkers for systemic diseases, have demonstrated the potential of rapid, non-invasive screening methods, leading to enhanced early detection and improved healthcare quality, particularly in underserved areas. However, the widespread adoption of such AI-based technologies faces challenges primarily related to the trustworthiness of the system. We demonstrate the potential of, and the considerations needed to develop, trustworthy AI in oculomics through a pilot study for HbA1c assessment using an AI-based approach. We then discuss various challenges, considerations, and solutions that have previously been developed for powerful AI technologies in healthcare and subsequently apply these considerations to the oculomics pilot study. Building upon the observations in the study, we highlight the challenges and opportunities for advancing trustworthy AI in oculomics. Ultimately, oculomics is a powerful, emerging technology in ophthalmology, and understanding how to optimize transparency prior to clinical adoption is of utmost importance.
Affiliation(s)
- Joshua Ong: Department of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, MI, United States
- Kuk Jin Jang: School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
- Seung Ju Baek: Department of AI Convergence Engineering, Gyeongsang National University, Republic of Korea
- Dongyin Hu: School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
- Vivian Lin: School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
- Sooyong Jang: School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
- Alexandra Thaler: Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Nouran Sabbagh: Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Almiqdad Saeed: Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States; St John Eye Hospital-Jerusalem, Department of Ophthalmology, Israel
- Minwook Kwon: Department of AI Convergence Engineering, Gyeongsang National University, Republic of Korea
- Jin Hyun Kim: Department of Intelligence and Communication Engineering, Gyeongsang National University, Republic of Korea
- Seongjin Lee: Department of AI Convergence Engineering, Gyeongsang National University, Republic of Korea
- Yong Seop Han: Department of Ophthalmology, Gyeongsang National University College of Medicine, Institute of Health Sciences, Republic of Korea
- Mingmin Zhao: School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
- Oleg Sokolsky: School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
- Insup Lee: School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
- Lama A Al-Aswad: School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States; Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
7
Carrillo-Larco RM. Recognition of Patient Gender: A Machine Learning Preliminary Analysis Using Heart Sounds from Children and Adolescents. Pediatr Cardiol 2024. PMID: 38937337; DOI: 10.1007/s00246-024-03561-2.
Abstract
Research has shown that X-rays and fundus images can be used to classify gender, age group, and race, raising concerns about bias and fairness in medical AI applications. However, the potential for physiological sounds to classify sociodemographic traits has not been investigated. Exploring this gap is crucial for understanding the implications and ensuring fairness in the field of medical sound analysis. We aimed to develop classifiers to determine gender (men/women) from heart sound recordings using machine learning (ML). In this data-driven ML analysis, we utilized the open-access CirCor DigiScope Phonocardiogram Dataset, obtained from cardiac screening programs in Brazil and comprising volunteers under 21 years of age. Each participant completed a questionnaire and underwent a clinical examination, including electronic auscultation at four cardiac points: aortic (AV), mitral (MV), pulmonary (PV), and tricuspid (TV). We used Mel-frequency cepstral coefficients (MFCCs) to develop the ML classifiers, extracting 10 MFCCs from each auscultation sound recording; in sensitivity analyses, we additionally extracted 20, 30, 40, and 50 MFCCs. The most effective gender classifier was developed using PV recordings (AUC ROC = 70.3%). The second best came from MV recordings (AUC ROC = 58.8%). AV and TV recordings produced classifiers with an AUC ROC of 56.4% and 56.1%, respectively. Using more MFCCs did not substantially improve the classifiers. It is possible to distinguish males from females using phonocardiogram data. As health-related audio recordings become more prominent in ML applications, research is required to explore whether these recordings contain signals that could distinguish sociodemographic features.
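As a rough sketch of the described pipeline, the functions below average 10 MFCCs over time per recording and score a simple classifier by AUC ROC; librosa and scikit-learn are assumed, and the classifier choice and split are illustrative since the abstract does not specify them.

```python
# Minimal sketch of the MFCC-based gender classifier; the classifier type and
# train/test split are assumptions, not the paper's exact configuration.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def mfcc_features(wav_path: str, n_mfcc: int = 10) -> np.ndarray:
    y, sr = librosa.load(wav_path, sr=None)                  # keep native sampling rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, n_frames)
    return mfcc.mean(axis=1)                                 # one fixed-length vector per recording

def gender_auc(wav_paths, labels, n_mfcc: int = 10) -> float:
    """labels: 1 = female, 0 = male, one per recording (e.g., PV auscultation files)."""
    X = np.vstack([mfcc_features(p, n_mfcc) for p in wav_paths])
    y = np.asarray(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=42)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```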
Affiliation(s)
- Rodrigo M Carrillo-Larco: Hubert Department of Global Health, Rollins School of Public Health, Emory University, Atlanta, GA, USA
8
Patterson EJ, Bounds AD, Wagner SK, Kadri-Langford R, Taylor R, Daly D. Oculomics: A Crusade Against the Four Horsemen of Chronic Disease. Ophthalmol Ther 2024; 13:1427-1451. PMID: 38630354; PMCID: PMC11109082; DOI: 10.1007/s40123-024-00942-x.
Abstract
Chronic, non-communicable diseases present a major barrier to living a long and healthy life. In many cases, early diagnosis can facilitate prevention, monitoring, and treatment efforts, improving patient outcomes. There is therefore a critical need to make screening techniques as accessible, unintimidating, and cost-effective as possible. The association between ocular biomarkers and systemic health and disease (oculomics) presents an attractive opportunity for detection of systemic diseases, as ophthalmic techniques are often relatively low-cost, fast, and non-invasive. In this review, we highlight the key associations between structural biomarkers in the eye and the four globally leading causes of morbidity and mortality: cardiovascular disease, cancer, neurodegenerative disease, and metabolic disease. We observe that neurodegenerative disease is a particularly promising target for oculomics, with biomarkers detected in multiple ocular structures. Cardiovascular disease biomarkers are present in the choroid, retinal vasculature, and retinal nerve fiber layer, and metabolic disease biomarkers are present in the eyelid, tear fluid, lens, and retinal vasculature. In contrast, only the tear fluid emerged as a promising ocular target for the detection of cancer. The retina is a rich source of oculomics data, the analysis of which has been enhanced by artificial intelligence-based tools. Although not all biomarkers are disease-specific, limiting their current diagnostic utility, future oculomics research will likely benefit from combining data from various structures to improve specificity, as well as active design, development, and optimization of instruments that target specific disease signatures, thus facilitating differential diagnoses.
Affiliation(s)
- Siegfried K Wagner: Moorfields Eye Hospital NHS Trust, 162 City Road, London, EC1V 2PD, UK; UCL Institute of Ophthalmology, University College London, 11-43 Bath Street, London, EC1V 9EL, UK
- Robin Taylor: Occuity, The Blade, Abbey Square, Reading, Berkshire, RG1 3BE, UK
- Dan Daly: Occuity, The Blade, Abbey Square, Reading, Berkshire, RG1 3BE, UK
9
Huang Y, Cheung CY, Li D, Tham YC, Sheng B, Cheng CY, Wang YX, Wong TY. AI-integrated ocular imaging for predicting cardiovascular disease: advancements and future outlook. Eye (Lond) 2024; 38:464-472. PMID: 37709926; PMCID: PMC10858189; DOI: 10.1038/s41433-023-02724-4.
Abstract
Cardiovascular disease (CVD) remains the leading cause of death worldwide. Assessment of CVD risk plays an essential role in identifying individuals at higher risk and enables the implementation of targeted intervention strategies, leading to reductions in CVD prevalence and improved patient survival. The ocular vasculature, particularly the retinal vasculature, has emerged as a potential means for CVD risk stratification due to its anatomical similarities and physiological characteristics shared with other vital organs, such as the brain and heart. The integration of artificial intelligence (AI) into ocular imaging has the potential to overcome limitations associated with traditional semi-automated image analysis, including inefficiency and manual measurement errors. Furthermore, AI techniques may uncover novel and subtle features that contribute to the identification of ocular biomarkers associated with CVD. This review provides a comprehensive overview of advancements made in AI-based ocular image analysis for predicting CVD, including the prediction of CVD risk factors, the replacement of traditional CVD biomarkers (e.g., CT-measured coronary artery calcium score), and the prediction of symptomatic CVD events. The review covers a range of ocular imaging modalities, including colour fundus photography, optical coherence tomography, and optical coherence tomography angiography, as well as other image types such as external eye images. Additionally, the review addresses the current limitations of AI research in this field and discusses the challenges associated with translating AI algorithms into clinical practice.
Affiliation(s)
- Yu Huang: Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Carol Y Cheung: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Dawei Li: College of Future Technology, Peking University, Beijing, China
- Yih Chung Tham: Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Bin Sheng: Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ching Yu Cheng: Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Ya Xing Wang: Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Tien Yin Wong: Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China; School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China
10
Chikumba S, Hu Y, Luo J. Deep learning-based fundus image analysis for cardiovascular disease: a review. Ther Adv Chronic Dis 2023; 14:20406223231209895. PMID: 38028950; PMCID: PMC10657535; DOI: 10.1177/20406223231209895.
Abstract
It is well established that the retina provides insights beyond the eye. Through observation of retinal microvascular changes, studies have shown that the retina contains information related to cardiovascular disease. Despite tremendous efforts toward reducing the effects of cardiovascular diseases, they remain a global challenge and a significant public health concern. Conventionally, predicting the risk of cardiovascular disease involves the assessment of preclinical features, risk factors, or biomarkers. However, these are associated with cost implications, and tests to acquire predictive parameters are invasive. Artificial intelligence systems, particularly deep learning (DL) methods applied to fundus images, have been generating significant interest as an adjunct assessment tool with the potential to enhance efforts to prevent cardiovascular disease mortality. Risk factors such as age, gender, smoking status, hypertension, and diabetes can be predicted from fundus images using DL applications, with performance comparable to that of humans. A clinical shift toward DL-based analysis of fundus images as an equally good test relative to more expensive and invasive procedures may require prospective clinical trials to address the possible ethical challenges and medicolegal implications. This review presents current evidence regarding the use of DL applications on fundus images to predict cardiovascular disease.
Affiliation(s)
- Symon Chikumba: Department of Ophthalmology, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China; Department of Optometry, Faculty of Health Sciences, Mzuzu University, Luwinga, Mzuzu, Malawi
- Yuqian Hu: Department of Ophthalmology, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China
- Jing Luo: Department of Ophthalmology, The Second Xiangya Hospital of Central South University, 139 Middle Renmin RD, Changsha, Hunan, China
11
Hu W, Yii FSL, Chen R, Zhang X, Shang X, Kiburg K, Woods E, Vingrys A, Zhang L, Zhu Z, He M. A Systematic Review and Meta-Analysis of Applying Deep Learning in the Prediction of the Risk of Cardiovascular Diseases From Retinal Images. Transl Vis Sci Technol 2023; 12:14. PMID: 37440249; PMCID: PMC10353749; DOI: 10.1167/tvst.12.7.14.
Abstract
Purpose The purpose of this study was to perform a systematic review and meta-analysis to synthesize evidence from studies using deep learning (DL) to predict cardiovascular disease (CVD) risk from retinal images. Methods A systematic literature search was performed in MEDLINE, Scopus, and Web of Science up to June 2022. We extracted data pertaining to predicted outcomes, model development and validation, and model performance metrics. Included studies were graded using the Quality Assessment of Diagnostic Accuracy Studies 2 tool. Model performance was pooled across eligible studies using a random-effects meta-analysis model. Results A total of 26 studies were included in the analysis. Forty-two CVD risk-related outcomes predicted from retinal images were identified, including 33 CVD risk factors, 4 cardiac imaging biomarkers, 2 CVD risk scores, the presence of CVD, and incident CVD. Three studies that aimed to predict the development of future CVD events reported an area under the receiver operating characteristic curve (AUROC) between 0.68 and 0.81. Models that used retinal images as input data had a pooled mean absolute error of 3.19 years (95% confidence interval [CI] = 2.95-3.43) for age prediction; a pooled AUROC of 0.96 (95% CI = 0.95-0.97) for gender classification; a pooled AUROC of 0.80 (95% CI = 0.73-0.86) for diabetes detection; and a pooled AUROC of 0.86 (95% CI = 0.81-0.92) for the detection of chronic kidney disease. We observed a high level of heterogeneity and variation in study designs. Conclusions Although DL models appear to have reasonably good performance when it comes to predicting CVD risk, further work is necessary to evaluate their real-world applicability and predictive accuracy. Translational Relevance DL-based CVD risk assessment from retinal images holds great promise for translation to clinical practice as a novel approach to CVD risk assessment, given its simple, quick, and noninvasive nature.
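For readers unfamiliar with the pooling step, a DerSimonian-Laird random-effects estimator is one common way to combine per-study estimates such as AUROCs with their standard errors; the sketch below is generic and does not reproduce the review's exact meta-analysis model.

```python
# Generic DerSimonian-Laird random-effects pooling of per-study estimates
# (e.g., AUROCs) given their standard errors; illustrative only.
import numpy as np

def pool_random_effects(estimates, std_errors):
    """Return the pooled estimate and a 95% CI."""
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(std_errors, dtype=float) ** 2            # within-study variances
    w = 1.0 / v                                              # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                       # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                            # between-study variance
    w_star = 1.0 / (v + tau2)                                # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Example with made-up inputs: pool_random_effects([0.68, 0.73, 0.81], [0.03, 0.04, 0.05])
```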
Affiliation(s)
- Wenyi Hu: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Fabian S. L. Yii: Centre for Clinical Brain Sciences, Edinburgh Medical School, University of Edinburgh, Edinburgh, UK; Curle Ophthalmology Laboratory, Institute for Regeneration and Repair, University of Edinburgh, Edinburgh, UK
- Ruiye Chen: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Xinyu Zhang: Shanghai Jiaotong University School of Medicine, Shanghai, China
- Xianwen Shang: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Katerina Kiburg: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Ekaterina Woods: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Algis Vingrys: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, Australia
- Lei Zhang: Central Clinical School, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Zhuoting Zhu: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Mingguang He: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
12
Chan YK, Cheng CY, Sabanayagam C. Eyes as the windows into cardiovascular disease in the era of big data. Taiwan J Ophthalmol 2023; 13:151-167. PMID: 37484607; PMCID: PMC10361436; DOI: 10.4103/tjo.tjo-d-23-00018.
Abstract
Cardiovascular disease (CVD) is a major cause of mortality and morbidity worldwide and imposes significant socioeconomic burdens, especially with late diagnoses. There is growing evidence of strong correlations between ocular images, which are information-dense, and CVD progression. The accelerating development of deep learning algorithms (DLAs) is a promising avenue for research into CVD biomarker discovery, early CVD diagnosis, and CVD prognostication. We review a selection of 17 recent DLAs in the less-explored realm of DL applied to ocular images to produce CVD outcomes, potential challenges in their clinical deployment, and the path forward. The evidence for CVD manifestations in ocular images is well documented. Most of the reviewed DLAs analyze retinal fundus photographs to predict CV risk factors, in particular hypertension. DLAs can predict age, sex, smoking status, alcohol status, body mass index, mortality, myocardial infarction, stroke, chronic kidney disease, and hematological disease with significant accuracy. While the cardio-oculomics intersection is now burgeoning, much remains to be explored. The increasing availability of big data, computational power, technological literacy, and acceptance all prime this subfield for rapid growth. We pinpoint the specific areas of improvement needed for ubiquitous clinical deployment: increased generalizability, external validation, and universal benchmarking. DLAs capable of predicting CVD outcomes from ocular inputs are of great interest and promise for individualized precision medicine and efficient health care provision, although their real-world efficacy remains to be determined despite impactful initial results.
Affiliation(s)
- Yarn Kit Chan: Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Ching-Yu Cheng: Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Center for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Charumathi Sabanayagam: Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
13
Wen J, Liu D, Wu Q, Zhao L, Iao WC, Lin H. Retinal image-based artificial intelligence in detecting and predicting kidney diseases: Current advances and future perspectives. VIEW 2023. DOI: 10.1002/viw.20220070.
Affiliation(s)
- Jingyi Wen: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Dong Liu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Qianni Wu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Lanqin Zhao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Wai Cheng Iao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
14
Iao WC, Zhang W, Wang X, Wu Y, Lin D, Lin H. Deep Learning Algorithms for Screening and Diagnosis of Systemic Diseases Based on Ophthalmic Manifestations: A Systematic Review. Diagnostics (Basel) 2023; 13:900. PMID: 36900043; PMCID: PMC10001234; DOI: 10.3390/diagnostics13050900.
Abstract
Deep learning (DL) is the new high-profile technology in medical artificial intelligence (AI) for building screening and diagnostic algorithms for various diseases. The eye provides a window for observing neurovascular pathophysiological changes. Previous studies have proposed that ocular manifestations indicate systemic conditions, revealing a new route for disease screening and management. Multiple DL models have been developed for identifying systemic diseases based on ocular data; however, the methods and results vary immensely across studies. This systematic review aims to summarize the existing studies and provide an overview of the present and future aspects of DL-based algorithms for screening systemic diseases based on ophthalmic examinations. We performed a thorough search in PubMed, Embase, and Web of Science for English-language articles published until August 2022. Among the 2873 articles collected, 62 were included for analysis and quality assessment. The selected studies mainly utilized eye appearance, retinal data, and eye movements as model input and covered a wide range of systemic diseases such as cardiovascular diseases, neurodegenerative diseases, and systemic health features. Despite the decent performance reported, most models lack disease specificity and broad generalizability for real-world application. This review summarizes the pros and cons of these approaches and discusses the prospects for implementing AI based on ocular data in real-world clinical scenarios.
Affiliation(s)
- Wai Cheng Iao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Weixing Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Xun Wang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Yuxuan Wu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Duoru Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou 570311, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510060, China
15
Lui G, Leung HS, Lee J, Wong CK, Li X, Ho M, Wong V, Li T, Ho T, Chan YY, Lee SS, Lee APW, Wong KT, Zee B. An efficient approach to estimate the risk of coronary artery disease for people living with HIV using machine-learning-based retinal image analysis. PLoS One 2023; 18:e0281701. PMID: 36827291; PMCID: PMC9955663; DOI: 10.1371/journal.pone.0281701.
Abstract
BACKGROUND People living with HIV (PLWH) have increased risks of non-communicable diseases, especially cardiovascular diseases. Current HIV clinical management guidelines recommend regular cardiovascular risk screening, but the risk equation models are not specific to PLWH. Better tools are needed to assess cardiovascular risk among PLWH accurately. METHODS We performed a prospective study to determine the performance of automatic retinal image analysis in assessing coronary artery disease (CAD) in PLWH. We enrolled PLWH with ≥1 cardiovascular risk factor. All participants had computerized tomography (CT) coronary angiography and digital fundus photographs. The primary outcome was coronary atherosclerosis; secondary outcomes included obstructive CAD. In addition, we compared the performance of three models (traditional cardiovascular risk factors alone, retinal characteristics alone, and both traditional risk factors and retinal characteristics) using the area under the receiver operating characteristic curve (AUC). RESULTS Among the 115 participants included in the analyses, the mean age was 54 years, 89% were male, 95% had undetectable HIV RNA, 45% had hypertension, 40% had diabetes, 45% had dyslipidemia, and 55% had obesity; 71 (61.7%) had coronary atherosclerosis, and 23 (20.0%) had obstructive CAD. The machine-learning models including retinal characteristics with and without traditional cardiovascular risk factors had AUCs of 0.987 and 0.979, respectively, and performed significantly better than the model including traditional cardiovascular risk factors alone (AUC 0.746) in assessing coronary atherosclerosis. The sensitivity and specificity for coronary atherosclerosis in the combined model were 93.0% and 93.2%, respectively. For the assessment of obstructive CAD, models using retinal characteristics alone (AUC 0.986) or in combination with traditional risk factors (AUC 0.991) performed significantly better than traditional risk factors alone (AUC 0.777). The sensitivity and specificity for obstructive CAD in the combined model were 95.7% and 97.8%, respectively. CONCLUSION In this cohort of Asian PLWH at risk of cardiovascular diseases, retinal characteristics, either alone or combined with traditional risk factors, had superior performance in assessing coronary atherosclerosis and obstructive CAD. SUMMARY People living with HIV in an Asian cohort with risk factors for cardiovascular disease had a high prevalence of CAD. A machine-learning-based retinal image analysis could increase the accuracy of assessing the risk of coronary atherosclerosis and obstructive CAD.
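One way to frame the three-model comparison described above is to fit a baseline model on traditional risk factors and a combined model that adds retinal features, then bootstrap the AUC difference. The sketch below uses logistic regression and in-sample AUCs purely for illustration; the feature matrices, modeling choice, and lack of cross-validation are assumptions, not the study's actual pipeline.

```python
# Illustrative AUC comparison of a risk-factor-only model vs. a combined model;
# not the study's analysis (which it does not publish on this page).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def compare_models(X_trad, X_retina, y, n_boot=2000, seed=0):
    y = np.asarray(y)
    X_comb = np.hstack([X_trad, X_retina])
    m_trad = LogisticRegression(max_iter=1000).fit(X_trad, y)
    m_comb = LogisticRegression(max_iter=1000).fit(X_comb, y)
    p_trad = m_trad.predict_proba(X_trad)[:, 1]
    p_comb = m_comb.predict_proba(X_comb)[:, 1]
    auc_trad, auc_comb = roc_auc_score(y, p_trad), roc_auc_score(y, p_comb)
    rng = np.random.default_rng(seed)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:                     # need both classes in a resample
            continue
        diffs.append(roc_auc_score(y[idx], p_comb[idx]) -
                     roc_auc_score(y[idx], p_trad[idx]))
    lo, hi = np.percentile(diffs, [2.5, 97.5])             # CI for the AUC gain
    return auc_trad, auc_comb, (lo, hi)
```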
Affiliation(s)
- Grace Lui: Department of Medicine and Therapeutics, Prince of Wales Hospital, The Chinese University of Hong Kong, Shatin, Hong Kong SAR; Stanley Ho Centre for Emerging Infectious Diseases, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Ho Sang Leung: Department of Imaging and Interventional Radiology, Prince of Wales Hospital, Shatin, Hong Kong SAR
- Jack Lee: Centre for Clinical Research and Biostatistics, The Jockey Club School of Public Health and Primary Care, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Chun Kwok Wong: Department of Chemical Pathology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Xinxin Li: Centre for Clinical Research and Biostatistics, The Jockey Club School of Public Health and Primary Care, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Mary Ho: Department of Ophthalmology, Prince of Wales Hospital, Shatin, Hong Kong SAR
- Vivian Wong: Department of Medicine and Therapeutics, Prince of Wales Hospital, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Timothy Li: Department of Medicine and Therapeutics, Prince of Wales Hospital, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Tracy Ho: Department of Medicine and Therapeutics, Prince of Wales Hospital, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Yin Yan Chan: Department of Medicine and Therapeutics, Prince of Wales Hospital, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Shui Shan Lee: Stanley Ho Centre for Emerging Infectious Diseases, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Alex PW Lee: Department of Medicine and Therapeutics, Prince of Wales Hospital, The Chinese University of Hong Kong, Shatin, Hong Kong SAR; Laboratory of Cardiac Imaging and 3D Printing, Li Ka Shing Institute of Health Science, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Ka Tak Wong: Department of Imaging and Interventional Radiology, Prince of Wales Hospital, Shatin, Hong Kong SAR
- Benny Zee: Centre for Clinical Research and Biostatistics, The Jockey Club School of Public Health and Primary Care, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
16
Lee YC, Cha J, Shim I, Park WY, Kang SW, Lim DH, Won HH. Multimodal deep learning of fundus abnormalities and traditional risk factors for cardiovascular risk prediction. NPJ Digit Med 2023; 6:14. PMID: 36732671; PMCID: PMC9894867; DOI: 10.1038/s41746-023-00748-4.
Abstract
Cardiovascular disease (CVD), the leading cause of death globally, is associated with complicated underlying risk factors. We develop an artificial intelligence model to identify CVD using multimodal data, including clinical risk factors and fundus photographs from the Samsung Medical Center (SMC) for development and internal validation and from the UK Biobank for external validation. The multimodal model achieves an area under the receiver operating characteristic curve (AUROC) of 0.781 (95% confidence interval [CI] 0.766-0.798) in the SMC and 0.872 (95% CI 0.857-0.886) in the UK Biobank. We further observe a significant association between the incidence of CVD and the predicted risk from at-risk patients in the UK Biobank (hazard ratio [HR] 6.28, 95% CI 4.72-8.34). We visualize the importance of individual features in photography and traditional risk factors. The results highlight that non-invasive fundus photography can be a possible predictive marker for CVD.
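The reported hazard ratio for incident CVD among predicted at-risk participants is the kind of quantity a Cox proportional-hazards fit produces. Below is a minimal lifelines sketch with illustrative column names; the study's covariate set and modeling details are not reproduced here.

```python
# Sketch of estimating a hazard ratio for incident CVD by predicted-risk status;
# column names and adjustment covariates are illustrative assumptions.
import pandas as pd
from lifelines import CoxPHFitter

def hazard_ratio_for_predicted_risk(df: pd.DataFrame) -> float:
    """Expects columns: 'follow_up_years', 'cvd_event' (0/1),
    'predicted_at_risk' (0/1), 'age', 'sex'."""
    cols = ["follow_up_years", "cvd_event", "predicted_at_risk", "age", "sex"]
    cph = CoxPHFitter()
    cph.fit(df[cols], duration_col="follow_up_years", event_col="cvd_event")
    return float(cph.hazard_ratios_["predicted_at_risk"])   # HR for the at-risk flag
```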
Affiliation(s)
- Yeong Chan Lee: Department of Digital Health, Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, Seoul, Republic of Korea; Research Institute for Future Medicine, Samsung Medical Center, Seoul, Republic of Korea
- Jiho Cha: Graduate School of Future Strategy, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Injeong Shim: Department of Digital Health, Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, Seoul, Republic of Korea
- Woong-Yang Park: Samsung Genome Institute, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Se Woong Kang: Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Dong Hui Lim: Department of Digital Health, Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, Seoul, Republic of Korea; Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Hong-Hee Won: Department of Digital Health, Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, Seoul, Republic of Korea; Samsung Genome Institute, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
17
|
Tseng RMWW, Rim TH, Shantsila E, Yi JK, Park S, Kim SS, Lee CJ, Thakur S, Nusinovici S, Peng Q, Kim H, Lee G, Yu M, Tham YC, Bakhai A, Leeson P, Lip GYH, Wong TY, Cheng CY. Validation of a deep-learning-based retinal biomarker (Reti-CVD) in the prediction of cardiovascular disease: data from UK Biobank. BMC Med 2023; 21:28. [PMID: 36691041 PMCID: PMC9872417 DOI: 10.1186/s12916-022-02684-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Accepted: 11/28/2022] [Indexed: 01/25/2023] Open
Abstract
BACKGROUND Currently in the United Kingdom, cardiovascular disease (CVD) risk assessment is based on the QRISK3 score, in which a 10% 10-year CVD risk indicates clinical intervention. However, this benchmark has limited efficacy in clinical practice, and a simpler, non-invasive risk stratification tool is needed. Retinal photography is becoming increasingly accepted as a non-invasive imaging tool for CVD. Previously, we developed a novel CVD risk stratification system based on retinal photographs that predicts future CVD risk. This study aims to further validate our biomarker, Reti-CVD, (1) to detect individuals with ≥ 10% 10-year CVD risk and (2) to enhance risk assessment in individuals with QRISK3 of 7.5-10% (termed the borderline-QRISK3 group), using the UK Biobank. METHODS Reti-CVD scores were calculated and stratified into three risk groups based on optimized cut-off values from the UK Biobank. We used Cox proportional-hazards models to evaluate the ability of Reti-CVD to predict CVD events in the general population. The C-statistic was used to assess the prognostic value of adding Reti-CVD to QRISK3 in the borderline-QRISK3 group and three vulnerable subgroups. RESULTS Among 48,260 participants with no history of CVD, 6.3% had CVD events during the 11-year follow-up. Reti-CVD was associated with an increased risk of CVD (adjusted hazard ratio [HR] 1.41; 95% confidence interval [CI], 1.30-1.52), with a 13.1% (95% CI, 11.7-14.6%) 10-year CVD risk in the Reti-CVD-high-risk group. In borderline-QRISK3 individuals, the 10-year CVD risk exceeded 10% in the Reti-CVD-high-risk group (11.5% in the non-statin cohort [n = 45,473], 11.5% in the stage 1 hypertension cohort [n = 11,966], and 14.2% in the middle-aged cohort [n = 38,941]). The C-statistic increased by 0.014 (0.010-0.017) in the non-statin cohort, 0.013 (0.007-0.019) in the stage 1 hypertension cohort, and 0.023 (0.018-0.029) in the middle-aged cohort for CVD event prediction after adding Reti-CVD to QRISK3. CONCLUSIONS Reti-CVD has the potential to identify individuals with ≥ 10% 10-year CVD risk who are likely to benefit from earlier preventative CVD interventions. For borderline-QRISK3 individuals with 10-year CVD risk between 7.5 and 10%, Reti-CVD could be used as a risk-enhancer tool to help improve discrimination, especially in adult groups that may be predisposed to CVD.
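The survival analysis described here (a Cox proportional-hazards model plus a C-statistic comparison with and without the retinal score) can be sketched with the lifelines library on synthetic data; the column names, effect sizes and follow-up length below are invented for illustration and do not reproduce the study's results.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
# Synthetic cohort: a baseline clinical risk score and a hypothetical retinal score.
df = pd.DataFrame({
    "qrisk3": rng.normal(8, 3, n),        # baseline clinical 10-year risk (%)
    "reti_cvd": rng.normal(0, 1, n),      # standardized retinal risk score (simulated)
})
hazard = 0.02 * np.exp(0.05 * df["qrisk3"] + 0.3 * df["reti_cvd"])
df["time"] = rng.exponential(1 / hazard).clip(max=11.0)   # years, censored at 11
df["event"] = (df["time"] < 11.0).astype(int)

# Model 1: clinical score only; Model 2: clinical score + retinal score.
base = CoxPHFitter().fit(df[["qrisk3", "time", "event"]],
                         duration_col="time", event_col="event")
full = CoxPHFitter().fit(df, duration_col="time", event_col="event")

print("HR per 1-SD retinal score:", round(float(np.exp(full.params_["reti_cvd"])), 2))
print("C-index gain from adding the retinal score:",
      round(full.concordance_index_ - base.concordance_index_, 3))
```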
Collapse
Affiliation(s)
- Rachel Marjorie Wei Wen Tseng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
| | - Tyler Hyungtaek Rim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore.
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore.
- Mediwhale Inc., Seoul, South Korea.
| | - Eduard Shantsila
- Department of Primary Care and Mental Health, University of Liverpool, Liverpool, UK
| | - Joseph K Yi
- Albert Einstein College of Medicine, New York, NY, USA
| | - Sungha Park
- Division of Cardiology, Severance Cardiovascular Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Sung Soo Kim
- Division of Retina, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Chan Joo Lee
- Division of Cardiology, Severance Cardiovascular Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Sahil Thakur
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Simon Nusinovici
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Qingsheng Peng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Clinical and Translational Sciences Program, Duke-NUS Medical School, Singapore, Singapore
| | | | | | - Marco Yu
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Center for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Ameet Bakhai
- Royal Free Hospital London NHS Foundation Trust, London, UK
- Cardiology Department, Barnet General Hospital, Thames House, Enfield, UK
| | - Paul Leeson
- Cardiovascular Clinical Research Facility, RDM Division of Cardiovascular Medicine, University of Oxford, Oxford, UK
| | - Gregory Y H Lip
- Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool John Moores University and Liverpool Heart & Chest Hospital, Liverpool, United Kingdom; and Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
| | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Tsinghua Medicine, Tsinghua University, Beijing, China
| | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Center for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| |
Collapse
|
18
|
Barriada RG, Masip D. An Overview of Deep-Learning-Based Methods for Cardiovascular Risk Assessment with Retinal Images. Diagnostics (Basel) 2022; 13:diagnostics13010068. [PMID: 36611360 PMCID: PMC9818382 DOI: 10.3390/diagnostics13010068] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 12/19/2022] [Accepted: 12/21/2022] [Indexed: 12/28/2022] Open
Abstract
Cardiovascular diseases (CVDs) are one of the most prevalent causes of premature death. Early detection is crucial to prevent and address CVDs in a timely manner. Recent advances in oculomics show that retina fundus imaging (RFI) can carry relevant information for the early diagnosis of several systemic diseases. There is a large corpus of RFI systematically acquired for diagnosing eye-related diseases that could be used for CVD prevention. Nevertheless, public health systems cannot afford to dedicate expert physicians solely to this data, posing the need for automated diagnosis tools that can raise alarms for patients at risk. Artificial Intelligence (AI) and, particularly, deep learning (DL) models have become a strong alternative for providing computerized pre-diagnosis and identifying patients at risk. This paper provides a novel review of the major achievements of recent state-of-the-art DL approaches to automated CVD diagnosis. This overview gathers the commonly used datasets, pre-processing techniques, evaluation metrics and deep learning approaches used in 30 different studies. Based on the reviewed articles, this work proposes a classification taxonomy depending on the prediction target and summarizes future research challenges that must be tackled to make progress in this field.
Collapse
|
19
|
Application of Deep Learning to Retinal-Image-Based Oculomics for Evaluation of Systemic Health: A Review. J Clin Med 2022; 12:jcm12010152. [PMID: 36614953 PMCID: PMC9821402 DOI: 10.3390/jcm12010152] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Revised: 12/17/2022] [Accepted: 12/22/2022] [Indexed: 12/28/2022] Open
Abstract
The retina is a window to the human body. Oculomics is the study of the correlations between ophthalmic biomarkers and systemic health or disease states. Deep learning (DL) is currently the cutting-edge machine learning technique for medical image analysis, and in recent years, DL techniques have been applied to analyze retinal images in oculomics studies. In this review, we summarized oculomics studies that used DL models to analyze retinal images; most of the published studies to date involved color fundus photographs, while others focused on optical coherence tomography images. These studies showed that some systemic variables, such as age, sex and cardiovascular disease events, could be predicted consistently and robustly, while other variables, such as thyroid function and blood cell count, could not be. DL-based oculomics has demonstrated fascinating, "super-human" predictive capabilities in certain contexts, but it remains to be seen how these models will be incorporated into clinical care and whether management decisions influenced by these models will lead to improved clinical outcomes.
Collapse
|
20
|
Iqbal S, Khan TM, Naveed K, Naqvi SS, Nawaz SJ. Recent trends and advances in fundus image analysis: A review. Comput Biol Med 2022; 151:106277. [PMID: 36370579 DOI: 10.1016/j.compbiomed.2022.106277] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 10/19/2022] [Accepted: 10/30/2022] [Indexed: 11/05/2022]
Abstract
Automated retinal image analysis holds prime significance in the accurate diagnosis of various critical eye diseases, including diabetic retinopathy (DR), age-related macular degeneration (AMD), atherosclerosis, and glaucoma. Manual diagnosis of retinal diseases by ophthalmologists takes time, effort, and financial resources, and is prone to error in comparison to computer-aided diagnosis systems. In this context, robust classification and segmentation of retinal images are primary operations that aid clinicians in the early screening of patients to ensure the prevention and/or treatment of these diseases. This paper conducts an extensive review of the state-of-the-art methods for the detection and segmentation of retinal image features. Existing notable techniques for the detection of retinal features are categorized into essential groups and compared in depth. Additionally, a summary of quantifiable performance measures for various important stages of retinal image analysis, such as image acquisition and preprocessing, is provided. Finally, the datasets widely used in the literature for analyzing retinal images are described and their significance is emphasized.
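As one concrete example of the preprocessing stage discussed in this review, a recipe commonly reported in the fundus-analysis literature is green-channel extraction followed by contrast-limited adaptive histogram equalization (CLAHE). The OpenCV sketch below is a generic illustration of that recipe, not a specific pipeline from any of the reviewed studies; the file name is hypothetical.

```python
import cv2
import numpy as np

def preprocess_fundus(path: str, size: int = 512) -> np.ndarray:
    """Generic fundus preprocessing: resize, take the green channel, apply CLAHE."""
    bgr = cv2.imread(path)                       # OpenCV loads images in BGR order
    if bgr is None:
        raise FileNotFoundError(path)
    bgr = cv2.resize(bgr, (size, size))
    green = bgr[:, :, 1]                         # vessels usually show best contrast here
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(green)                    # uint8 image of shape (size, size)

# Example call (hypothetical file name):
# enhanced = preprocess_fundus("fundus_example.jpg")
```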
Collapse
Affiliation(s)
- Shahzaib Iqbal
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
| | - Tariq M Khan
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia.
| | - Khuram Naveed
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan; Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
| | - Syed S Naqvi
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
| | - Syed Junaid Nawaz
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
| |
Collapse
|
21
|
Wong DYL, Lam MC, Ran A, Cheung CY. Artificial intelligence in retinal imaging for cardiovascular disease prediction: current trends and future directions. Curr Opin Ophthalmol 2022; 33:440-446. [PMID: 35916571 DOI: 10.1097/icu.0000000000000886] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE OF REVIEW Retinal microvasculature assessment has shown promise to enhance cardiovascular disease (CVD) risk stratification. Integrating artificial intelligence into retinal microvasculature analysis may increase the screening capacity of CVD risks compared with risk score calculation through blood-taking. This review summarizes recent advancements in artificial intelligence based retinal photograph analysis for CVD prediction, and suggests challenges and future prospects for translation into a clinical setting. RECENT FINDINGS Artificial intelligence based retinal microvasculature analyses potentially predict CVD risk factors (e.g. blood pressure, diabetes), direct CVD events (e.g. CVD mortality), retinal features (e.g. retinal vessel calibre) and CVD biomarkers (e.g. coronary artery calcium score). However, challenges such as handling photographs with concurrent retinal diseases, limited diverse data from other populations or clinical settings, insufficient interpretability and generalizability, concerns on cost-effectiveness and social acceptance may impede the dissemination of these artificial intelligence algorithms into clinical practice. SUMMARY Artificial intelligence based retinal microvasculature analysis may supplement existing CVD risk stratification approach. Although technical and socioeconomic challenges remain, we envision artificial intelligence based microvasculature analysis to have major clinical and research impacts in the future, through screening for high-risk individuals especially in less-developed areas and identifying new retinal biomarkers for CVD research.
Collapse
Affiliation(s)
- Dragon Y L Wong
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
| | | | | | | |
Collapse
|
22
|
Al-Absi HRH, Islam MT, Refaee MA, Chowdhury MEH, Alam T. Cardiovascular Disease Diagnosis from DXA Scan and Retinal Images Using Deep Learning. SENSORS (BASEL, SWITZERLAND) 2022; 22:4310. [PMID: 35746092 PMCID: PMC9228833 DOI: 10.3390/s22124310] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/17/2022] [Revised: 05/17/2022] [Accepted: 05/17/2022] [Indexed: 05/08/2023]
Abstract
Cardiovascular diseases (CVD) are the leading cause of death worldwide. People affected by CVDs may go undiagnosed until the occurrence of a serious cardiovascular event such as stroke or myocardial infarction (heart attack). In Qatar, there is a lack of studies focusing on CVD diagnosis based on non-invasive methods such as retinal images or dual-energy X-ray absorptiometry (DXA). In this study, we aimed to diagnose CVD using a novel approach integrating information from retinal images and DXA data. We considered an adult Qatari cohort of 500 participants from Qatar Biobank (QBB) with an equal number of participants from the CVD and the control groups. We designed a case-control study and proposed a novel multi-modal deep learning (DL)-based technique, combining data from two modalities (DXA and retinal images), to distinguish the CVD group from the control group. Uni-modal models based on retinal images and DXA data achieved 75.6% and 77.4% accuracy, respectively. The multi-modal model showed an improved accuracy of 78.3% in classifying the CVD group and the control group. We used gradient-weighted class activation mapping (GradCAM) to highlight the areas of the retinal images that most influenced the decisions of the proposed DL model. It was observed that the model focused mostly on the centre of the retinal images, where signs of CVD such as hemorrhages were present. This indicates that our model can identify and make use of certain prognostic markers for hypertension and ischemic heart disease. From the DXA data, we found higher values for bone mineral density, fat content, muscle mass and bone area across the majority of body parts in the CVD group compared to the control group, indicating better bone health in the Qatari CVD cohort. This seminal method based on DXA scans and retinal images demonstrates major potential for the early detection of CVD in a fast and relatively non-invasive manner.
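The saliency step can be reproduced with a standard GradCAM implementation using forward and backward hooks on a CNN's last convolutional block. The ResNet-18 backbone, untrained weights and random input below are stand-ins for illustration only, not the study's trained model; the GradCAM logic itself is the standard gradient-weighted activation map.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, layer, image):
    """Minimal GradCAM: weight the target layer's activations by its pooled gradients."""
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    try:
        logits = model(image)
        score = logits[0, logits.argmax(dim=1)].sum()            # top predicted class
        score.backward()
        weights = grads["g"].mean(dim=(2, 3), keepdim=True)      # global-average-pooled gradients
        cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
        return (cam / (cam.max() + 1e-8)).squeeze().detach()     # heat map normalized to [0, 1]
    finally:
        h1.remove()
        h2.remove()

# Illustration with an untrained backbone and a random stand-in for a retinal image.
model = models.resnet18(weights=None).eval()
heatmap = grad_cam(model, model.layer4, torch.randn(1, 3, 224, 224))
print(heatmap.shape)   # torch.Size([224, 224])
```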
Collapse
Affiliation(s)
- Hamada R. H. Al-Absi
- College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar;
| | - Mohammad Tariqul Islam
- Computer Science Department, Southern Connecticut State University, New Haven, CT 06515, USA;
| | | | | | - Tanvir Alam
- College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar;
| |
Collapse
|
23
|
Betzler BK, Rim TH, Sabanayagam C, Cheng CY. Artificial Intelligence in Predicting Systemic Parameters and Diseases From Ophthalmic Imaging. Front Digit Health 2022; 4:889445. [PMID: 35706971 PMCID: PMC9190759 DOI: 10.3389/fdgth.2022.889445] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 05/06/2022] [Indexed: 12/14/2022] Open
Abstract
Artificial Intelligence (AI) analytics has been used to predict, classify, and aid clinical management of multiple eye diseases. Its robust performances have prompted researchers to expand the use of AI into predicting systemic, non-ocular diseases and parameters based on ocular images. Herein, we discuss the reasons why the eye is well-suited for systemic applications, and review the applications of deep learning on ophthalmic images in the prediction of demographic parameters, body composition factors, and diseases of the cardiovascular, hematological, neurodegenerative, metabolic, renal, and hepatobiliary systems. Three main imaging modalities are included—retinal fundus photographs, optical coherence tomographs and external ophthalmic images. We examine the range of systemic factors studied from ophthalmic imaging in current literature and discuss areas of future research, while acknowledging current limitations of AI systems based on ophthalmic images.
Collapse
Affiliation(s)
- Bjorn Kaijun Betzler
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
| | - Tyler Hyungtaek Rim
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Charumathi Sabanayagam
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Ching-Yu Cheng
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| |
Collapse
|
24
|
Khan A, De Boever P, Gerrits N, Akhtar N, Saqqur M, Ponirakis G, Gad H, Petropoulos IN, Shuaib A, Faber JE, Kamran S, Malik RA. Retinal vessel multifractals predict pial collateral status in patients with acute ischemic stroke. PLoS One 2022; 17:e0267837. [PMID: 35511879 PMCID: PMC9070887 DOI: 10.1371/journal.pone.0267837] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2021] [Accepted: 04/16/2022] [Indexed: 01/26/2023] Open
Abstract
OBJECTIVES Pial collateral blood flow is a major determinant of the outcomes of acute ischemic stroke. This study was undertaken to determine whether retinal vessel metrics can predict the pial collateral status and stroke outcomes in patients. METHODS Thirty-five patients with acute stroke secondary to middle cerebral artery (MCA) occlusion underwent grading of their pial collateral status from computed tomography angiography and retinal vessel analysis from retinal fundus images. RESULTS The NIHSS (14.7 ± 5.5 vs 10.1 ± 5.8, p = 0.026) and mRS (2.9 ± 1.6 vs 1.9 ± 1.3, p = 0.048) scores were higher at admission in patients with poor compared to good pial collaterals. Retinal vessel multifractals D0 (1.673 ± 0.028 vs 1.652 ± 0.025, p = 0.028), D1 (1.609 ± 0.027 vs 1.590 ± 0.025, p = 0.044) and f(α)max (1.674 ± 0.027 vs 1.652 ± 0.024, p = 0.019) were higher in patients with poor compared to good pial collaterals. Furthermore, support vector machine learning achieved a fair sensitivity (0.743) and specificity (0.707) for differentiating patients with poor from good pial collaterals. Age (p = 0.702), BMI (p = 0.422), total cholesterol (p = 0.842), triglycerides (p = 0.673), LDL (p = 0.952), HDL (p = 0.366), systolic blood pressure (p = 0.727), HbA1c (p = 0.261) and standard retinal metrics including CRAE (p = 0.084), CRVE (p = 0.946), AVR (p = 0.148), tortuosity index (p = 0.790), monofractal Df (p = 0.576), lacunarity (p = 0.531), curve asymmetry (p = 0.679) and singularity length (p = 0.937) did not differ between patients with poor compared to good pial collaterals. CONCLUSIONS This is the first translational study to show increased retinal vessel multifractal dimensions in patients with acute ischemic stroke and poor pial collaterals. A retinal vessel classifier was developed to differentiate between patients with poor and good pial collaterals and may allow rapid non-invasive identification of patients with poor pial collaterals.
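The classification step (a support vector machine separating poor from good pial collaterals, reported via sensitivity and specificity) can be sketched with scikit-learn on simulated multifractal features; the feature values, class effect and cohort size below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n = 70                                           # small cohort, for illustration only
y = rng.integers(0, 2, n)                        # 1 = poor pial collaterals (simulated)
# Simulated stand-ins for the three multifractal features (D0, D1, f(alpha)max),
# slightly shifted upward in the "poor collateral" class.
X = rng.normal(1.65, 0.03, size=(n, 3)) + 0.02 * y[:, None]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
pred = cross_val_predict(clf, X, y, cv=5)        # out-of-fold predictions

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("sensitivity:", round(tp / (tp + fn), 3))
print("specificity:", round(tn / (tn + fp), 3))
```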
Collapse
Affiliation(s)
- Adnan Khan
- Weill Cornell Medicine-Qatar, Doha, Qatar
| | - Patrick De Boever
- Department of Biology, University of Antwerp, Antwerp, Wilrijk, Belgium
- Center of Environmental Sciences, Hasselt University, Diepenbeek, Belgium
- VITO (Flemish Institute for Technological Research), Health Unit, Mol, Belgium
| | - Nele Gerrits
- VITO (Flemish Institute for Technological Research), Health Unit, Mol, Belgium
| | - Naveed Akhtar
- Institute of Neuroscience, Hamad Medical Corporation, Doha, Qatar
| | - Maher Saqqur
- Trillium Hospital, University of Toronto at Mississauga, Mississauga, ON, Canada
- Department of Medicine, University of Alberta, Edmonton, Canada
| | | | - Hoda Gad
- Weill Cornell Medicine-Qatar, Doha, Qatar
| | | | - Ashfaq Shuaib
- Institute of Neuroscience, Hamad Medical Corporation, Doha, Qatar
- Department of Medicine, University of Alberta, Edmonton, Canada
| | - James E. Faber
- Department of Cell Biology and Physiology, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States of America
| | - Saadat Kamran
- Institute of Neuroscience, Hamad Medical Corporation, Doha, Qatar
| | | |
Collapse
|
25
|
Yun JS, Kim J, Jung SH, Cha SA, Ko SH, Ahn YB, Won HH, Sohn KA, Kim D. A deep learning model for screening type 2 diabetes from retinal photographs. Nutr Metab Cardiovasc Dis 2022; 32:1218-1226. [PMID: 35197214 PMCID: PMC9018521 DOI: 10.1016/j.numecd.2022.01.010] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Revised: 12/13/2021] [Accepted: 01/08/2022] [Indexed: 11/16/2022]
Abstract
BACKGROUND AND AIMS We aimed to develop and evaluate a non-invasive deep learning algorithm for screening type 2 diabetes in UK Biobank participants using retinal images. METHODS AND RESULTS The deep learning model for prediction of type 2 diabetes was trained on retinal images from 50,077 UK Biobank participants and tested on 12,185 participants. We evaluated its performance in terms of predicting traditional risk factors (TRFs) and genetic risk for diabetes. Next, we compared the performance of three models in predicting type 2 diabetes using 1) an image-only deep learning algorithm, 2) TRFs, 3) the combination of the algorithm and TRFs. Assessing net reclassification improvement (NRI) allowed quantification of the improvement afforded by adding the algorithm to the TRF model. When predicting TRFs with the deep learning algorithm, the areas under the curve (AUCs) obtained with the validation set for age, sex, and HbA1c status were 0.931 (0.928-0.934), 0.933 (0.929-0.936), and 0.734 (0.715-0.752), respectively. When predicting type 2 diabetes, the AUC of the composite logistic model using non-invasive TRFs was 0.810 (0.790-0.830), and that for the deep learning model using only fundus images was 0.731 (0.707-0.756). Upon addition of TRFs to the deep learning algorithm, discriminative performance was improved to 0.844 (0.826-0.861). The addition of the algorithm to the TRFs model improved risk stratification with an overall NRI of 50.8%. CONCLUSION Our results demonstrate that this deep learning algorithm can be a useful tool for stratifying individuals at high risk of type 2 diabetes in the general population.
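The model comparison reported here (image-only score, traditional risk factors, and their combination, each evaluated by AUC) can be illustrated with a small scikit-learn sketch. The prevalence, feature set and effect sizes are invented, and the deep learning image score is replaced by a synthetic surrogate, so the numbers printed do not correspond to the study's results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5000
diabetes = rng.binomial(1, 0.08, n)                        # simulated outcome, ~8% prevalence
trf = rng.normal(0, 1, (n, 4)) + 0.6 * diabetes[:, None]   # simulated traditional risk factors
img_score = 0.4 * diabetes + rng.normal(0, 1, n)           # surrogate for a DL image score

X = np.column_stack([trf, img_score])
X_tr, X_te, y_tr, y_te = train_test_split(X, diabetes, test_size=0.3, random_state=0)

def auc(cols):
    """Fit a logistic model on the chosen columns and report its test-set AUC."""
    m = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
    return roc_auc_score(y_te, m.predict_proba(X_te[:, cols])[:, 1])

print("TRFs only       :", round(auc([0, 1, 2, 3]), 3))
print("image score only:", round(auc([4]), 3))
print("TRFs + image    :", round(auc([0, 1, 2, 3, 4]), 3))
```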
Collapse
Affiliation(s)
- Jae-Seung Yun
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Division of Endocrinology and Metabolism, Department of Internal Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Jaesik Kim
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Computer Engineering, Ajou University, Suwon, Republic of Korea; Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
| | - Sang-Hyuk Jung
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA; Samsung Advanced Institute for Health Sciences and Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, Seoul, Republic of Korea
| | - Seon-Ah Cha
- Division of Endocrinology and Metabolism, Department of Internal Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Seung-Hyun Ko
- Division of Endocrinology and Metabolism, Department of Internal Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Yu-Bae Ahn
- Division of Endocrinology and Metabolism, Department of Internal Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Hong-Hee Won
- Samsung Advanced Institute for Health Sciences and Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, Seoul, Republic of Korea
| | - Kyung-Ah Sohn
- Department of Computer Engineering, Ajou University, Suwon, Republic of Korea; Department of Artificial Intelligence, Ajou University, Suwon, Republic of Korea.
| | - Dokyoon Kim
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA.
| |
Collapse
|
26
|
Corbin D, Lesage F. Assessment of the predictive potential of cognitive scores from retinal images and retinal fundus metadata via deep learning using the CLSA database. Sci Rep 2022; 12:5767. [PMID: 35388080 PMCID: PMC8986784 DOI: 10.1038/s41598-022-09719-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Accepted: 03/25/2022] [Indexed: 01/19/2023] Open
Abstract
Accumulation of beta-amyloid in the brain and cognitive decline are considered hallmarks of Alzheimer’s disease. Knowing from previous studies that these two factors can manifest in the retina, the aim was to investigate whether a deep learning method was able to predict an individual's cognition from an RGB image of their retina and metadata. A deep learning model, EfficientNet, was used to predict cognitive scores from the Canadian Longitudinal Study on Aging (CLSA) database. The proposed model explained 22.4% of the variance in cognitive scores on the test dataset using fundus images and metadata. Metadata alone (20.4%) proved more effective at explaining the variance in the sample than fundus images alone (9.3%). Attention maps highlighted the optic nerve head as the most influential feature in predicting cognitive scores. The results demonstrate that RGB fundus images are limited in predicting cognition.
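A minimal sketch of the kind of model described (an EfficientNet image branch combined with metadata, regressing a cognitive score evaluated by variance explained) is shown below in PyTorch. The B0 variant, metadata dimensionality and head sizes are assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class CognitionRegressor(nn.Module):
    """EfficientNet image branch + metadata branch, regressing a continuous score."""
    def __init__(self, n_meta: int = 5):
        super().__init__()
        net = models.efficientnet_b0(weights=None)
        in_feats = net.classifier[1].in_features            # 1280 for EfficientNet-B0
        net.classifier = nn.Identity()                       # keep the pooled features
        self.image_branch = net
        self.meta_branch = nn.Sequential(nn.Linear(n_meta, 16), nn.ReLU())
        self.head = nn.Linear(in_feats + 16, 1)

    def forward(self, image, meta):
        z = torch.cat([self.image_branch(image), self.meta_branch(meta)], dim=1)
        return self.head(z)

def r2_score(y_true, y_pred):
    """Fraction of variance explained, the metric reported in the abstract."""
    ss_res = torch.sum((y_true - y_pred) ** 2)
    ss_tot = torch.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

model = CognitionRegressor()
pred = model(torch.randn(4, 3, 224, 224), torch.randn(4, 5))
print(pred.shape)                                            # torch.Size([4, 1])

# Toy check of the variance-explained metric on random data.
y_true = torch.randn(100)
y_hat = y_true + 0.5 * torch.randn(100)
print(f"variance explained on toy data: {r2_score(y_true, y_hat):.2f}")
```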
Collapse
Affiliation(s)
- Denis Corbin
- Laboratoire d'Imagerie optique et Moléculaire, Polytechnique Montréal, 2500 Chemin de Polytechnique Montréal, Montreal, QC, H3T 1J4, Canada.
| | - Frédéric Lesage
- Laboratoire d'Imagerie optique et Moléculaire, Polytechnique Montréal, 2500 Chemin de Polytechnique Montréal, Montreal, QC, H3T 1J4, Canada.,Institut de Cardiologie de Montréal, 5000 Rue Bélanger, Montreal, QC, H1T 1C8, Canada
| |
Collapse
|
27
|
Arnould L, Guenancia C, Bourredjem A, Binquet C, Gabrielle PH, Eid P, Baudin F, Kawasaki R, Cottin Y, Creuzot-Garcher C, Jacquir S. Prediction of Cardiovascular Parameters With Supervised Machine Learning From Singapore "I" Vessel Assessment and OCT-Angiography: A Pilot Study. Transl Vis Sci Technol 2021; 10:20. [PMID: 34767626 PMCID: PMC8590163 DOI: 10.1167/tvst.10.13.20] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
Purpose Assessment of cardiovascular risk is the keystone of prevention in cardiovascular disease. The objective of this pilot study was to estimate cardiovascular risk scores (American Heart Association [AHA] risk score, Syntax score, and SCORE risk score) with machine learning (ML) models based on quantitative retinal vascular parameters. Methods We proposed supervised ML algorithms to predict cardiovascular parameters in patients with cardiovascular diseases treated at Dijon University Hospital, using quantitative retinal vascular characteristics measured from fundus photography and optical coherence tomography angiography (OCT-A) scans (alone and combined). To describe the retinal microvascular network, we used the Singapore “I” Vessel Assessment (SIVA), which extracts vessel parameters from fundus photography, together with quantitative OCT-A metrics of the superficial retinal capillary plexus. Results The retinal and cardiovascular data of 144 patients were included. We obtained a high prediction rate for the cardiovascular risk scores. Using the Naïve Bayes algorithm and SIVA + OCT-A data, the AHA risk score was predicted with 81.25% accuracy, the SCORE risk score with 75.64% accuracy, and the Syntax score with 96.53% accuracy. Conclusions This preliminary study demonstrated that ML algorithms applied to quantitative retinal vascular parameters from SIVA software and OCT-A were able to predict cardiovascular risk scores with robust accuracy. Quantitative retinal vascular biomarkers combined with an ML strategy might provide valuable data for implementing predictive models of cardiovascular parameters. Translational Relevance A small dataset of quantitative retinal vascular parameters from fundus photography and OCT-A can be used with ML to predict cardiovascular parameters.
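The Naïve Bayes step can be illustrated with scikit-learn on simulated stand-ins for the SIVA and OCT-A parameters. The number of features, the risk categories and the class effect below are synthetic assumptions; only the cohort size mirrors the abstract.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 144                                          # cohort size reported in the abstract
# Simulated stand-ins for SIVA vessel calibres / tortuosity and OCT-A vessel density.
risk_class = rng.integers(0, 3, n)               # low / intermediate / high risk category
X = rng.normal(0, 1, (n, 10)) + 0.5 * risk_class[:, None]

acc = cross_val_score(GaussianNB(), X, risk_class, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.3f} ± {acc.std():.3f}")
```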
Collapse
Affiliation(s)
- Louis Arnould
- Ophthalmology Department, University Hospital, Dijon, France.,INSERM, CIC1432, Clinical Epidemiology Unit, Dijon, France; Dijon University Hospital, Clinical Investigation Center, Clinical Epidemiology/Clinical Trials Unit, Dijon, France.,Centre des Sciences du Gout et de l'Alimentation, AgroSup Dijon, CNRS, INRAE, Université Bourgogne Franche-Comté, Dijon, France
| | - Charles Guenancia
- Cardiology Department, University Hospital, Dijon, France.,PEC 2, University Hospital, Dijon, France
| | - Abderrahmane Bourredjem
- INSERM, CIC1432, Clinical Epidemiology Unit, Dijon, France; Dijon University Hospital, Clinical Investigation Center, Clinical Epidemiology/Clinical Trials Unit, Dijon, France
| | - Christine Binquet
- INSERM, CIC1432, Clinical Epidemiology Unit, Dijon, France; Dijon University Hospital, Clinical Investigation Center, Clinical Epidemiology/Clinical Trials Unit, Dijon, France
| | - Pierre-Henry Gabrielle
- Ophthalmology Department, University Hospital, Dijon, France.,Centre des Sciences du Gout et de l'Alimentation, AgroSup Dijon, CNRS, INRAE, Université Bourgogne Franche-Comté, Dijon, France
| | - Pétra Eid
- Ophthalmology Department, University Hospital, Dijon, France
| | - Florian Baudin
- Ophthalmology Department, University Hospital, Dijon, France
| | - Ryo Kawasaki
- Department of Vision Informatics, Osaka University Graduate School of Medicine, Suita, Japan
| | - Yves Cottin
- Cardiology Department, University Hospital, Dijon, France.,PEC 2, University Hospital, Dijon, France
| | - Catherine Creuzot-Garcher
- Ophthalmology Department, University Hospital, Dijon, France.,Centre des Sciences du Gout et de l'Alimentation, AgroSup Dijon, CNRS, INRAE, Université Bourgogne Franche-Comté, Dijon, France
| | - Sabir Jacquir
- Université Paris-Saclay, CNRS, Institut des Neurosciences Paris-Saclay, Gif-sur-Yvette, France
| |
Collapse
|
28
|
Hemelings R, Elen B, Barbosa-Breda J, Blaschko MB, De Boever P, Stalmans I. Deep learning on fundus images detects glaucoma beyond the optic disc. Sci Rep 2021; 11:20313. [PMID: 34645908 PMCID: PMC8514536 DOI: 10.1038/s41598-021-99605-1] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Accepted: 09/21/2021] [Indexed: 02/07/2023] Open
Abstract
Although unprecedented sensitivity and specificity values are reported, recent deep learning models for glaucoma detection lack decision transparency. Here, we propose a methodology that advances explainable deep learning in the field of glaucoma detection and estimation of the vertical cup-disc ratio (VCDR), an important risk factor. We trained and evaluated deep learning models using fundus images that underwent a defined cropping policy. We defined the crop radius as a percentage of image size, centered on the optic nerve head (ONH), with an equidistantly spaced range from 10% to 60% (ONH crop policy). The inverse of the cropping mask was also applied (periphery crop policy). Models trained on original images achieved an area under the curve (AUC) of 0.94 [95% CI 0.92-0.96] for glaucoma detection, and a coefficient of determination (R2) of 77% [95% CI 0.77-0.79] for VCDR estimation. Models trained on images without the ONH still achieved substantial performance (AUC of 0.88 [95% CI 0.85-0.90] for glaucoma detection and an R2 score of 37% [95% CI 0.35-0.40] for VCDR estimation in the most extreme setup of 60% ONH crop). Our findings provide the first irrefutable evidence that deep learning can detect glaucoma from fundus image regions outside the ONH.
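The ONH and periphery crop policies can be sketched as a circular mask whose radius is a percentage of image size, centred on the optic nerve head. The exact radius convention and the ONH coordinates in the example below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def crop_policy(image: np.ndarray, onh_xy: tuple, radius_pct: float,
                keep: str = "onh") -> np.ndarray:
    """Zero out pixels outside ('onh') or inside ('periphery') a circle centred
    on the optic nerve head, with radius given as a fraction of image size."""
    h, w = image.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    r = radius_pct * max(h, w)                   # radius convention assumed here
    inside = (xx - onh_xy[0]) ** 2 + (yy - onh_xy[1]) ** 2 <= r ** 2
    mask = inside if keep == "onh" else ~inside
    return image * mask[..., None]               # broadcast mask over colour channels

# Example on a random "fundus" image with an assumed ONH location (x, y).
img = np.random.rand(512, 512, 3)
onh_view = crop_policy(img, onh_xy=(360, 256), radius_pct=0.30, keep="onh")
periphery_view = crop_policy(img, onh_xy=(360, 256), radius_pct=0.30, keep="periphery")
print(onh_view.shape, periphery_view.shape)
```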
Collapse
Affiliation(s)
- Ruben Hemelings
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium.
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium.
| | - Bart Elen
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
| | - João Barbosa-Breda
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Cardiovascular R&D Center, Faculty of Medicine of the University of Porto, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
- Department of Ophthalmology, Centro Hospitalar E Universitário São João, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
| | | | - Patrick De Boever
- Hasselt University, Agoralaan building D, 3590, Diepenbeek, Belgium
- Department of Biology, University of Antwerp, 2610, Wilrijk, Belgium
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
| | - Ingeborg Stalmans
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Ophthalmology Department, UZ Leuven, Herestraat 49, 3000, Leuven, Belgium
| |
Collapse
|
29
|
Abstract
PURPOSE OF REVIEW Systemic retinal biomarkers are biomarkers identified in the retina and related to evaluation and management of systemic disease. This review summarizes the background, categories and key findings from this body of research as well as potential applications to clinical care. RECENT FINDINGS Potential systemic retinal biomarkers for cardiovascular disease, kidney disease and neurodegenerative disease were identified using regression analysis as well as more sophisticated image processing techniques. Deep learning techniques were used in a number of studies predicting diseases including anaemia and chronic kidney disease. A virtual coronary artery calcium score performed well against other competing traditional models of event prediction. SUMMARY Systemic retinal biomarker research has progressed rapidly using regression studies with clearly identified biomarkers such as retinal microvascular patterns, as well as using deep learning models. Future systemic retinal biomarker research may be able to boost performance using larger data sets, the addition of meta-data and higher resolution image inputs.
Collapse
|
30
|
Betzler BK, Yang HHS, Thakur S, Yu M, Quek TC, Soh ZD, Lee G, Tham YC, Wong TY, Rim TH, Cheng CY. Gender Prediction for a Multiethnic Population via Deep Learning Across Different Retinal Fundus Photograph Fields: Retrospective Cross-sectional Study. JMIR Med Inform 2021; 9:e25165. [PMID: 34402800 PMCID: PMC8408758 DOI: 10.2196/25165] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2020] [Revised: 04/06/2021] [Accepted: 06/22/2021] [Indexed: 11/26/2022] Open
Abstract
Background Deep learning algorithms have been built for the detection of systemic and eye diseases based on fundus photographs. The retina possesses features that can be affected by gender differences, and the extent to which these features are captured via photography differs depending on the retinal image field. Objective We aimed to compare deep learning algorithms’ performance in predicting gender based on different fields of fundus photographs (optic disc–centered, macula-centered, and peripheral fields). Methods This retrospective cross-sectional study included 172,170 fundus photographs of 9956 adults aged ≥40 years from the Singapore Epidemiology of Eye Diseases Study. Optic disc–centered, macula-centered, and peripheral field fundus images were included in this study as input data for a deep learning model for gender prediction. Performance was estimated at the individual level and image level. Receiver operating characteristic curves for binary classification were calculated. Results The deep learning algorithms predicted gender with an area under the receiver operating characteristic curve (AUC) of 0.94 at the individual level and an AUC of 0.87 at the image level. Across the three image field types, the best performance was seen when using optic disc–centered field images (younger subgroups: AUC=0.91; older subgroups: AUC=0.86), and algorithms that used peripheral field images had the lowest performance (younger subgroups: AUC=0.85; older subgroups: AUC=0.76). Across the three ethnic subgroups, algorithm performance was lowest in the Indian subgroup (AUC=0.88) compared to that in the Malay (AUC=0.91) and Chinese (AUC=0.91) subgroups when the algorithms were tested on optic disc–centered images. Algorithms’ performance in gender prediction at the image level was better in younger subgroups (aged <65 years; AUC=0.89) than in older subgroups (aged ≥65 years; AUC=0.82). Conclusions We confirmed that gender among the Asian population can be predicted with fundus photographs by using deep learning, and our algorithms’ performance in terms of gender prediction differed according to the field of fundus photographs, age subgroups, and ethnic groups. Our work provides a further understanding of using deep learning models for the prediction of gender-related diseases. Further validation of our findings is still needed.
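The distinction drawn here between image-level and individual-level performance can be illustrated by aggregating per-image probabilities per participant before computing the AUC. The mean-probability aggregation rule and the simulated predictions below are assumptions for illustration, not the study's procedure.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_people, imgs_per_person = 1000, 4
sex = rng.integers(0, 2, n_people)                          # 1 = male (simulated ground truth)

df = pd.DataFrame({
    "person_id": np.repeat(np.arange(n_people), imgs_per_person),
    "label": np.repeat(sex, imgs_per_person),
    # Simulated per-image model probabilities, noisier than the person-level signal.
    "p_male": np.clip(np.repeat(sex, imgs_per_person) * 0.4 + 0.3
                      + rng.normal(0, 0.2, n_people * imgs_per_person), 0, 1),
})

image_auc = roc_auc_score(df["label"], df["p_male"])
per_person = df.groupby("person_id").agg(label=("label", "first"),
                                         p_male=("p_male", "mean"))
individual_auc = roc_auc_score(per_person["label"], per_person["p_male"])
print(f"image-level AUC: {image_auc:.3f}  individual-level AUC: {individual_auc:.3f}")
```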
Collapse
Affiliation(s)
- Bjorn Kaijun Betzler
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Henrik Hee Seung Yang
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
| | - Sahil Thakur
- Singapore Eye Research Institute, Singapore, Singapore
| | - Marco Yu
- Singapore Eye Research Institute, Singapore, Singapore
| | | | - Zhi Da Soh
- Singapore Eye Research Institute, Singapore, Singapore
| | | | - Yih-Chung Tham
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore.,Singapore Eye Research Institute, Singapore, Singapore
| | - Tien Yin Wong
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore.,Singapore Eye Research Institute, Singapore, Singapore
| | - Tyler Hyungtaek Rim
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore.,Singapore Eye Research Institute, Singapore, Singapore
| | - Ching-Yu Cheng
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore.,Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore.,Singapore Eye Research Institute, Singapore, Singapore
| |
Collapse
|
31
|
Assessment of patient specific information in the wild on fundus photography and optical coherence tomography. Sci Rep 2021; 11:8621. [PMID: 33883573 PMCID: PMC8060417 DOI: 10.1038/s41598-021-86577-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Accepted: 03/09/2021] [Indexed: 12/25/2022] Open
Abstract
In this paper we analyse the performance of machine learning methods in predicting patient information such as age or sex solely from retinal imaging modalities in a heterogeneous clinical population. Our dataset consists of N = 135,667 fundus images and N = 85,536 volumetric OCT scans. Deep learning models were trained to predict the patient’s age and sex from fundus images, OCT cross sections and OCT volumes. For sex prediction, a ROC AUC of 0.80 was achieved for fundus images, 0.84 for OCT cross sections and 0.90 for OCT volumes. Mean absolute errors for age prediction of 6.328 years for fundus images, 5.625 years for OCT cross sections and 4.541 years for OCT volumes were observed. We assess the performance on OCT scans containing different biomarkers and note a peak performance of AUC = 0.88 for OCT cross sections and 0.95 for OCT volumes when there is no pathology on the scans. Performance drops when drusen, fibrovascular pigment epithelium detachment or geographic atrophy are present. We conclude that deep learning based methods are capable of classifying the patient’s sex and age from color fundus photography and OCT for a broad spectrum of patients, irrespective of underlying disease or image quality. Non-random sex prediction using fundus images seems possible only if the fovea and optic disc are visible.
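The two evaluation metrics reported here (mean absolute error for age and ROC AUC for sex) can be computed with scikit-learn as in the short sketch below; the simulated predictions are for illustration only and are not calibrated to the study's models.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, roc_auc_score

rng = np.random.default_rng(5)
n = 10_000
age = rng.uniform(20, 90, n)                   # simulated true ages
sex = rng.integers(0, 2, n)                    # simulated true sex labels

# Simulated model outputs for illustration.
age_pred = age + rng.normal(0, 7, n)
sex_prob = np.clip(0.5 + 0.35 * (sex - 0.5) + rng.normal(0, 0.25, n), 0, 1)

print("age MAE (years):", round(mean_absolute_error(age, age_pred), 2))
print("sex ROC AUC    :", round(roc_auc_score(sex, sex_prob), 3))
```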
Collapse
|