1
Shi D, Zhou Y, He S, Wagner SK, Huang Y, Keane PA, Ting DS, Zhang L, Zheng Y, He M. Cross-modality Labeling Enables Noninvasive Capillary Quantification as a Sensitive Biomarker for Assessing Cardiovascular Risk. Ophthalmol Sci 2024;4:100441. [PMID: 38420613] [PMCID: PMC10899028] [DOI: 10.1016/j.xops.2023.100441] [Received: 07/04/2023] [Revised: 11/26/2023] [Accepted: 11/27/2023] [Indexed: 03/02/2024]
Abstract
Purpose: To use fundus fluorescein angiography (FFA) to label the capillaries on color fundus (CF) photographs, train a deep learning model to quantify retinal capillaries noninvasively from CF, and apply it to cardiovascular disease (CVD) risk assessment.
Design: Cross-sectional and longitudinal study.
Participants: A total of 90,732 pairs of CF-FFA images from 3,893 participants for segmentation model development, and 49,229 participants in the UK Biobank for association analysis.
Methods: We matched the vessels extracted from FFA and CF and used the FFA vessels as labels to train a deep learning model (RMHAS-FA) to segment retinal capillaries from CF. We tested the model's accuracy on a manually labeled internal test set (FundusCapi). For external validation, we tested the segmentation model on 7 vessel segmentation datasets and investigated the clinical value of the segmented vessels in predicting CVD events in the UK Biobank.
Main Outcome Measures: Area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity for segmentation; hazard ratio (HR; 95% confidence interval [CI]) for Cox regression analysis.
Results: On the FundusCapi dataset, the segmentation performance was AUC = 0.95, accuracy = 0.94, sensitivity = 0.90, and specificity = 0.93. Small-vessel skeleton density had a stronger correlation with CVD risk factors and incidence (P < 0.01). Reduced density of small vessel skeletons was strongly associated with increased risk of CVD incidence and mortality in women (HR [95% CI] = 0.91 [0.84-0.98] and 0.68 [0.54-0.86], respectively).
Conclusions: Using paired CF-FFA images, we automated the laborious manual labeling process and enabled noninvasive capillary quantification from CF, supporting its potential as a sensitive screening method for identifying individuals at high risk of future CVD events.
Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
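The segmentation metrics reported above (accuracy, sensitivity, specificity) are standard pixel-level statistics derived from the confusion matrix between a predicted capillary mask and the manually labeled ground truth. A minimal sketch, using hypothetical toy binary masks rather than the authors' RMHAS-FA pipeline:

```python
def segmentation_metrics(pred, truth):
    """Pixel-level accuracy, sensitivity, specificity for binary masks.

    pred, truth: flat sequences of 0/1 values (1 = capillary pixel).
    """
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    accuracy = (tp + tn) / len(truth)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # true-positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0  # true-negative rate
    return accuracy, sensitivity, specificity

# toy example: two 8-pixel masks
acc, sens, spec = segmentation_metrics([1, 1, 0, 0, 1, 0, 0, 0],
                                       [1, 1, 0, 0, 0, 1, 0, 0])
print(acc, sens, spec)
```

The reported AUC additionally requires the model's per-pixel probability scores, sweeping the decision threshold rather than fixing it.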
Affiliation(s)
- Danli Shi
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yukun Zhou
- Centre for Medical Image Computing, University College London, London, UK
- Shuang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Siegfried K. Wagner
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Yu Huang
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, China
- Pearse A. Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Daniel S.W. Ting
- Singapore National Eye Center, Singapore Eye Research Institute, and Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Lei Zhang
- Faculty of Medicine, Central Clinical School, Monash University, Melbourne, Victoria, Australia
- Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Mingguang He
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, China
2
Chen R, Zhang W, Song F, Yu H, Cao D, Zheng Y, He M, Shi D. Translating color fundus photography to indocyanine green angiography using deep-learning for age-related macular degeneration screening. NPJ Digit Med 2024;7:34. [PMID: 38347098] [PMCID: PMC10861476] [DOI: 10.1038/s41746-024-01018-7] [Received: 07/22/2023] [Accepted: 01/18/2024] [Indexed: 02/15/2024]
Abstract
Age-related macular degeneration (AMD) is the leading cause of central vision impairment among the elderly, and effective, accurate AMD screening tools are urgently needed. Indocyanine green angiography (ICGA) is a well-established technique for detecting chorioretinal diseases, but its invasive nature and potential risks impede its routine clinical application. Here, we developed a deep-learning model capable of generating realistic ICGA images from color fundus photography (CF) using generative adversarial networks (GANs) and evaluated its performance in AMD classification. The model was developed with 99,002 CF-ICGA pairs from a tertiary center. The quality of the generated ICGA images was evaluated objectively using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity measures (SSIM), and related metrics, and subjectively by two experienced ophthalmologists. The model generated realistic early-, mid-, and late-phase ICGA images, with SSIM ranging from 0.57 to 0.65. Subjective quality scores ranged from 1.46 to 2.74 on a five-point scale (where 1 denotes real-ICGA image quality; Kappa 0.79-0.84). Moreover, we assessed the application of the translated ICGA images in AMD screening on an external dataset (n = 13,887) by calculating the area under the ROC curve (AUC) for AMD classification. Combining generated ICGA with real CF images improved AMD classification, with the AUC increasing from 0.93 to 0.97 (P < 0.001). These results suggest that CF-to-ICGA translation can serve as a cross-modal data augmentation method to address the data hunger often encountered in deep-learning research, and as a promising add-on for population-based AMD screening. Real-world validation is warranted before clinical use.
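Of the objective metrics used above, MAE and PSNR can be computed directly from pixel values; SSIM additionally compares local luminance, contrast, and structure and is usually taken from an image-processing library. A minimal sketch with hypothetical pixel sequences, not the study's evaluation code:

```python
import math

def mae(a, b):
    """Mean absolute error between two equal-length pixel sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer images."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

real = [120, 130, 125, 140]   # hypothetical real ICGA pixel values
fake = [118, 133, 125, 138]   # hypothetical generated pixel values
print(mae(real, fake), psnr(real, fake))
```

In practice both metrics are computed over full 2-D images (often per channel and averaged); the 1-D form here keeps the arithmetic visible.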
Affiliation(s)
- Ruoyu Chen
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Weiyi Zhang
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Fan Song
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Honghua Yu
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, China
- Dan Cao
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, China
- Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Mingguang He
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Hong Kong SAR, China
- Danli Shi
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
3
He S, Joseph S, Bulloch G, Jiang F, Kasturibai H, Kim R, Ravilla TD, Wang Y, Shi D, He M. Bridging the Camera Domain Gap With Image-to-Image Translation Improves Glaucoma Diagnosis. Transl Vis Sci Technol 2023;12:20. [PMID: 38133514] [PMCID: PMC10746931] [DOI: 10.1167/tvst.12.12.20] [Received: 04/02/2023] [Accepted: 09/15/2023] [Indexed: 12/23/2023]
Abstract
Purpose: To improve the automated diagnosis of glaucomatous optic neuropathy (GON), we propose a generative adversarial network (GAN) model that translates Optain images to Topcon images.
Methods: We trained the GAN model on 725 paired images from Topcon and Optain cameras and externally validated it on an additional 843 paired images collected from the Aravind Eye Hospital in India. An optic disc segmentation model was used to assess disparities in disc parameters across cameras. The translated images were evaluated using root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), 95% limits of agreement (LOA), Pearson's correlation, and Cohen's Kappa coefficient. We compared the performance of the GON model on Optain photographs and on GAN-translated photographs against Topcon photographs as the reference.
Results: The GAN model significantly reduced Optain false-positive results for GON diagnosis. The RMSE, PSNR, and SSIM of GAN-translated images were 0.067, 14.31, and 0.64, respectively. The mean difference in vertical cup-to-disc ratio (VCDR) and cup-to-disc area ratio between Topcon and GAN images was 0.03, with 95% LOA ranging from -0.09 to 0.15 and from -0.05 to 0.10, respectively. Pearson correlation coefficients increased from 0.61 to 0.85 for VCDR and from 0.70 to 0.89 for cup-to-disc area ratio, and Cohen's Kappa improved from 0.32 to 0.60 after GAN translation.
Conclusions: Image-to-image translation across cameras can be achieved with a GAN, solving the problem of disc overexposure in Optain cameras.
Translational Relevance: Our approach enhances the generalizability of deep learning diagnostic models, ensuring their performance on cameras outside the original training data set.
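Cohen's Kappa, used above to measure diagnostic agreement before and after translation, is observed agreement corrected for the agreement expected by chance from each rater's label frequencies. A minimal sketch for binary labels, with hypothetical GON labels rather than the study's data:

```python
def cohens_kappa(a, b):
    """Cohen's Kappa for two raters' binary (0/1) labels."""
    n = len(a)
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    # marginal probability of label 1 for each rater
    pa1 = sum(a) / n
    pb1 = sum(b) / n
    # chance agreement: both say 1, or both say 0, independently
    expected = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (observed - expected) / (1 - expected)

# hypothetical GON calls: reference camera vs. translated photographs
ref = [1, 0, 0, 1, 0, 1, 0, 0]
trans = [1, 0, 0, 1, 0, 0, 1, 0]
print(cohens_kappa(ref, trans))
```

A Kappa of 0 means chance-level agreement and 1 means perfect agreement, which is why the rise from 0.32 to 0.60 indicates substantially better cross-camera consistency.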
Affiliation(s)
- Shuang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Sanil Joseph
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Lions Aravind Institute of Community Ophthalmology, Aravind Eye Care System, Madurai, India
- Gabriella Bulloch
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Feng Jiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Ramasamy Kim
- Aravind Eye Hospital and Post Graduate Institute, Madurai, India
- Thulasiraj D. Ravilla
- Lions Aravind Institute of Community Ophthalmology, Aravind Eye Care System, Madurai, India
- Yueye Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Danli Shi
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Aravind Eye Hospital and Post Graduate Institute, Madurai, India
4
Song F, Zhang W, Zheng Y, Shi D, He M. A deep learning model for generating fundus autofluorescence images from color fundus photography. Adv Ophthalmol Pract Res 2023;3:192-198. [PMID: 38059165] [PMCID: PMC10696390] [DOI: 10.1016/j.aopr.2023.11.001] [Received: 08/29/2023] [Revised: 11/04/2023] [Accepted: 11/05/2023] [Indexed: 12/08/2023]
Abstract
Background: Fundus autofluorescence (FAF) is a valuable imaging technique used to assess metabolic alterations in the retinal pigment epithelium (RPE) associated with various age-related and disease-related changes, and its practical uses are ever-growing. This study aimed to evaluate the effectiveness of a generative deep learning (DL) model in translating color fundus (CF) images into synthetic FAF images and to explore its potential for enhancing screening of age-related macular degeneration (AMD).
Methods: A generative adversarial network (GAN) model was trained on pairs of CF and FAF images to generate synthetic FAF images. The quality of the synthesized FAF images was assessed objectively with common generation metrics. The clinical effectiveness of the generated FAF images in AMD classification was evaluated by measuring the area under the curve (AUC) on the LabelMe dataset.
Results: A total of 8,410 FAF images from 2,586 patients were analyzed. The synthesized FAF images achieved good objective quality, with a multi-scale structural similarity index (MS-SSIM) of 0.67. On the LabelMe dataset, combining generated FAF images with CF images improved AMD classification, with the AUC increasing from 0.931 to 0.968.
Conclusions: This study presents the first attempt to use a generative deep learning model to create realistic, high-quality FAF images from CF images. Incorporating the translated FAF images alongside CF images improved the accuracy of AMD classification, offering a promising approach to enhance large-scale AMD screening.
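The AUC gains reported in these studies are rank statistics: AUC is the probability that a randomly chosen AMD case receives a higher classifier score than a randomly chosen non-case. A minimal sketch of that Mann-Whitney formulation, with hypothetical scores rather than the study's model outputs:

```python
def roc_auc(labels, scores):
    """AUC via the rank (Mann-Whitney) formulation: the probability that
    a random positive outscores a random negative; ties count half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical AMD probabilities: CF-only model vs. CF + generated-FAF model
labels = [1, 1, 1, 0, 0, 0]
cf_only = [0.9, 0.6, 0.4, 0.5, 0.3, 0.2]
combined = [0.9, 0.7, 0.6, 0.5, 0.3, 0.2]
print(roc_auc(labels, cf_only), roc_auc(labels, combined))
```

In the toy data, the combined model ranks every case above every non-case, so its AUC reaches 1.0 while the CF-only model misranks one pair.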
Affiliation(s)
- Fan Song
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Hong Kong, China
- Weiyi Zhang
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Hong Kong, China
- Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Danli Shi
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Hong Kong, China
- Mingguang He
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Hong Kong, China