1. Elsawy A, Keenan TDL, Chen Q, Thavikulwat AT, Bhandari S, Quek TC, Goh JHL, Tham YC, Cheng CY, Chew EY, Lu Z. A deep network DeepOpacityNet for detection of cataracts from color fundus photographs. Commun Med (Lond) 2023; 3:184. [PMID: 38104223; PMCID: PMC10725427; DOI: 10.1038/s43856-023-00410-w]
Abstract
BACKGROUND: Cataract diagnosis typically requires in-person evaluation by an ophthalmologist. However, color fundus photography (CFP) is widely performed outside ophthalmology clinics and could be exploited to increase the accessibility of cataract screening through automated detection.

METHODS: DeepOpacityNet was developed to detect cataracts from CFPs and to highlight the CFP features most relevant to cataracts. We used 17,514 CFPs from 2573 participants in the Age-Related Eye Disease Study 2 (AREDS2) dataset, of which 8681 CFPs were labeled with cataracts. Ground-truth labels were transferred from slit-lamp examinations (nuclear cataracts) and reading-center grading of anterior segment photographs (cortical and posterior subcapsular cataracts). DeepOpacityNet was internally validated on an independent test set (20%), compared to three ophthalmologists on a subset of the test set (100 CFPs), externally validated on three datasets from the Singapore Epidemiology of Eye Diseases (SEED) study, and visualized to highlight important features.

RESULTS: Internally, DeepOpacityNet achieved an accuracy of 0.66 (95% confidence interval (CI): 0.64-0.68) and an area under the curve (AUC) of 0.72 (95% CI: 0.70-0.74), superior to other state-of-the-art methods. Against human graders, it achieved an accuracy of 0.75, compared to 0.67 for the best-performing ophthalmologist. Externally, DeepOpacityNet achieved AUC scores of 0.86, 0.88, and 0.89 on the SEED datasets, demonstrating the generalizability of the method. Visualizations show that the visibility of blood vessels could be characteristic of cataract absence, while blurred regions could be characteristic of cataract presence.

CONCLUSIONS: DeepOpacityNet detected cataracts from CFPs in AREDS2 with performance superior to that of ophthalmologists and generated interpretable results.
The code and models are available at https://github.com/ncbi/DeepOpacityNet (https://doi.org/10.5281/zenodo.10127002).
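The abstract reports point estimates with 95% confidence intervals for accuracy and AUC. The authors' own evaluation code is in the linked repository; as a purely illustrative sketch of how such intervals are commonly obtained, the hypothetical helpers below compute AUC via the rank (Mann-Whitney) formulation and a percentile-bootstrap CI:

```python
import random

def auc(labels, scores):
    """AUC via the Mann-Whitney formulation: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative case."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, metric, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI (95% by default) for any metric(labels, scores)."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        ss = [scores[i] for i in idx]
        if len(set(ys)) < 2:  # AUC needs both classes in the resample
            continue
        stats.append(metric(ys, ss))
    stats.sort()
    return stats[int(alpha / 2 * len(stats))], stats[int((1 - alpha / 2) * len(stats)) - 1]
```

For example, `bootstrap_ci(labels, scores, auc)` returns the 2.5th and 97.5th percentiles of the bootstrapped AUC distribution; whether the paper used bootstrapping or an analytic interval is not stated in the abstract.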
Affiliation(s)
- Amr Elsawy
  - National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, 20894, USA
- Tiarnan D L Keenan
  - National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, 20894, USA
- Qingyu Chen
  - National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, 20894, USA
- Alisa T Thavikulwat
  - Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Sanjeeb Bhandari
  - Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Ten Cheer Quek
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Jocelyn Hui Lin Goh
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Yih-Chung Tham
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  - Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
  - Centre for Innovation and Precision Eye Health & Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Ching-Yu Cheng
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  - Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
  - Centre for Innovation and Precision Eye Health & Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Emily Y Chew
  - Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Zhiyong Lu
  - National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, 20894, USA
2. Pushpanathan K, Lim ZW, Er Yew SM, Chen DZ, Hui'En Lin HA, Lin Goh JH, Wong WM, Wang X, Jin Tan MC, Chang Koh VT, Tham YC. Popular large language model chatbots' accuracy, comprehensiveness, and self-awareness in answering ocular symptom queries. iScience 2023; 26:108163. [PMID: 37915603; PMCID: PMC10616302; DOI: 10.1016/j.isci.2023.108163]
Abstract
In light of growing interest in using emerging large language models (LLMs) for self-diagnosis, we systematically assessed the performance of ChatGPT-3.5, ChatGPT-4.0, and Google Bard in delivering proficient responses to 37 common inquiries regarding ocular symptoms. Responses were masked, randomly shuffled, and then graded by three consultant-level ophthalmologists for accuracy (poor, borderline, good) and comprehensiveness. Additionally, we evaluated the self-awareness capabilities (the ability to self-check and self-correct) of the LLM-chatbots. Of ChatGPT-4.0's responses, 89.2% were rated 'good', significantly outperforming ChatGPT-3.5 (59.5%) and Google Bard (40.5%) (all p < 0.001). All three LLM-chatbots also showed high mean comprehensiveness scores (ranging from 4.6 to 4.7 out of 5). However, they exhibited subpar to moderate self-awareness capabilities. Our study underscores the potential of ChatGPT-4.0 in delivering accurate and comprehensive responses to ocular symptom inquiries. Rigorous future validation of their performance is crucial to ensure their reliability and appropriateness for actual clinical use.
Affiliation(s)
- Krithi Pushpanathan
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  - Centre for Innovation and Precision Eye Health & Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Zhi Wei Lim
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Samantha Min Er Yew
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  - Centre for Innovation and Precision Eye Health & Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- David Ziyou Chen
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  - Centre for Innovation and Precision Eye Health & Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  - Department of Ophthalmology, National University Hospital, Singapore, Singapore
- Hazel Anne Hui'En Lin
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  - Centre for Innovation and Precision Eye Health & Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  - Department of Ophthalmology, National University Hospital, Singapore, Singapore
- Jocelyn Hui Lin Goh
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Wendy Meihua Wong
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  - Centre for Innovation and Precision Eye Health & Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  - Department of Ophthalmology, National University Hospital, Singapore, Singapore
- Xiaofei Wang
  - Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing, China
  - Advanced Innovation Centre for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Marcus Chun Jin Tan
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  - Centre for Innovation and Precision Eye Health & Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  - Department of Ophthalmology, National University Hospital, Singapore, Singapore
- Victor Teck Chang Koh
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  - Centre for Innovation and Precision Eye Health & Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  - Department of Ophthalmology, National University Hospital, Singapore, Singapore
- Yih-Chung Tham
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  - Centre for Innovation and Precision Eye Health & Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Programme (Eye ACP), Duke NUS Medical School, Singapore, Singapore
3. Lim ZW, Pushpanathan K, Yew SME, Lai Y, Sun CH, Lam JSH, Chen DZ, Goh JHL, Tan MCJ, Sheng B, Cheng CY, Koh VTC, Tham YC. Benchmarking large language models' performances for myopia care: a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, and Google Bard. EBioMedicine 2023; 95:104770. [PMID: 37625267; PMCID: PMC10470220; DOI: 10.1016/j.ebiom.2023.104770]
Abstract
BACKGROUND: Large language models (LLMs) are garnering wide interest due to their human-like and contextually relevant responses. However, LLMs' accuracy across specific medical domains has not yet been thoroughly evaluated. Myopia is a frequent topic on which patients and parents commonly seek information online. Our study evaluated the performance of three LLMs, namely ChatGPT-3.5, ChatGPT-4.0, and Google Bard, in delivering accurate responses to common myopia-related queries.

METHODS: We curated thirty-one commonly asked myopia care-related questions, categorised into six domains: pathogenesis, risk factors, clinical presentation, diagnosis, treatment and prevention, and prognosis. Each question was posed to the LLMs, and their responses were independently graded by three consultant-level paediatric ophthalmologists on a three-point accuracy scale (poor, borderline, good). A majority-consensus approach was used to determine the final rating for each response. 'Good'-rated responses were further evaluated for comprehensiveness on a five-point scale; conversely, 'poor'-rated responses were prompted for self-correction and then re-evaluated for accuracy.

FINDINGS: ChatGPT-4.0 demonstrated superior accuracy, with 80.6% of responses rated 'good', compared with 61.3% for ChatGPT-3.5 and 54.8% for Google Bard (Pearson's chi-squared test, all p ≤ 0.009). All three LLM-chatbots showed high mean comprehensiveness scores (Google Bard: 4.35; ChatGPT-4.0: 4.23; ChatGPT-3.5: 4.11, out of a maximum of 5). All LLM-chatbots also demonstrated substantial self-correction capabilities: 66.7% (2 of 3) of ChatGPT-4.0's, 40% (2 of 5) of ChatGPT-3.5's, and 60% (3 of 5) of Google Bard's responses improved after self-correction. The LLM-chatbots performed consistently across domains, except for 'treatment and prevention'. Even in this domain, however, ChatGPT-4.0 performed best, receiving 70% 'good' ratings, compared with 40% for ChatGPT-3.5 and 45% for Google Bard (Pearson's chi-squared test, all p ≤ 0.001).

INTERPRETATION: Our findings underscore the potential of LLMs, particularly ChatGPT-4.0, for delivering accurate and comprehensive responses to myopia-related queries. Continuous strategies and evaluations to improve LLMs' accuracy remain crucial.

FUNDING: Dr Yih-Chung Tham was supported by the National Medical Research Council of Singapore (NMRC/MOH/HCSAINV21nov-0001).
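The majority-consensus grading across three independent graders described in the methods can be expressed compactly. A minimal sketch (the tie-breaking fallback is an assumption for illustration; the paper states only that a majority-consensus approach was used):

```python
from collections import Counter

def consensus_grade(grades, fallback="borderline"):
    """Majority vote across graders; when no grade wins a strict majority,
    fall back to a middle grade (this tie rule is an assumption, not from the paper)."""
    top, n = Counter(grades).most_common(1)[0]
    return top if n > len(grades) // 2 else fallback
```

With three graders on a three-point scale, `consensus_grade(["good", "good", "poor"])` yields `"good"`, while a fully split panel such as `["good", "borderline", "poor"]` falls back to the middle grade.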
Affiliation(s)
- Zhi Wei Lim
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Krithi Pushpanathan
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore
- Samantha Min Er Yew
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore
- Yien Lai
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore; Department of Ophthalmology, National University Hospital, Singapore
- Chen-Hsin Sun
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore; Department of Ophthalmology, National University Hospital, Singapore
- Janice Sing Harn Lam
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore; Department of Ophthalmology, National University Hospital, Singapore
- David Ziyou Chen
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore; Department of Ophthalmology, National University Hospital, Singapore
- Marcus Chun Jin Tan
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore; Department of Ophthalmology, National University Hospital, Singapore
- Bin Sheng
  - Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; Department of Endocrinology and Metabolism, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China; MoE Key Lab of Artificial Intelligence, Artificial Intelligence Institute, Shanghai Jiao Tong University, Shanghai, China
- Ching-Yu Cheng
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Eye Academic Clinical Program (Eye ACP), Duke NUS Medical School, Singapore
- Victor Teck Chang Koh
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore; Department of Ophthalmology, National University Hospital, Singapore
- Yih-Chung Tham
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Eye Academic Clinical Program (Eye ACP), Duke NUS Medical School, Singapore
4. Fang X, Deshmukh M, Chee ML, Soh ZD, Teo ZL, Thakur S, Goh JHL, Liu YC, Husain R, Mehta JS, Wong TY, Cheng CY, Rim TH, Tham YC. Deep learning algorithms for automatic detection of pterygium using anterior segment photographs from slit-lamp and hand-held cameras. Br J Ophthalmol 2022; 106:1642-1647. [PMID: 34244208; PMCID: PMC9685734; DOI: 10.1136/bjophthalmol-2021-318866]
Abstract
BACKGROUND/AIMS: To evaluate the performance of deep learning (DL) algorithms for detecting the presence and extent of pterygium, based on colour anterior segment photographs (ASPs) taken with slit-lamp and hand-held cameras.

METHODS: Referable pterygium was defined as extension towards the cornea from the limbus of >2.50 mm or a base width at the limbus of >5.00 mm. 2503 images from the Singapore Epidemiology of Eye Diseases (SEED) study were used as the development set. Algorithms were validated on an internal set from the SEED cohort (629 images; 55.3% pterygium, 8.4% referable pterygium) and tested on two external clinic-based sets (set 1: 2610 slit-lamp ASPs; 2.8% pterygium, 0.7% referable pterygium; set 2: 3701 hand-held ASPs; 2.5% pterygium, 0.9% referable pterygium).

RESULTS: The algorithm's area under the receiver operating characteristic curve (AUROC) for detection of any pterygium was 99.5% (sensitivity 98.6%; specificity 99.0%) in the internal test set, 99.1% (sensitivity 95.9%; specificity 98.5%) in external test set 1, and 99.7% (sensitivity 100.0%; specificity 88.3%) in external test set 2. For referable pterygium, the AUROC was 98.5% (sensitivity 94.0%; specificity 95.3%) in the internal test set, 99.7% (sensitivity 87.2%; specificity 99.4%) in external set 1, and 99.0% (sensitivity 94.3%; specificity 98.0%) in external set 2.

CONCLUSION: DL algorithms based on ASPs can detect the presence of pterygium and referable-level pterygium with high sensitivity and specificity. These algorithms, particularly if used with a hand-held camera, may potentially serve as a simple screening tool for detecting referable pterygium. Further validation in community settings is warranted.

SYNOPSIS/PRECIS: DL algorithms based on ASPs detect the presence of pterygium and referable-level pterygium well, and may be used as a simple screening tool for detecting referable pterygium in community screenings.
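The sensitivity/specificity pairs quoted throughout these results follow directly from confusion-matrix counts. A minimal, hypothetical helper for binary labels and predictions (illustrative only, not the authors' code):

```python
def sensitivity_specificity(labels, preds):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), for 0/1 labels."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

Note that with very low prevalence, as in the external sets here (under 3% pterygium), sensitivity and specificity remain meaningful while raw accuracy would be dominated by the negative class.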
Affiliation(s)
- Xiaoling Fang
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Department of Ophthalmology, Shanghai Eye Diseases Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China
- Mihir Deshmukh
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Miao Li Chee
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Zhi-Da Soh
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Zhen Ling Teo
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Sahil Thakur
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yu-Chi Liu
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Rahat Husain
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Jodhbir S Mehta
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Tien Yin Wong
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
  - Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore
- Ching-Yu Cheng
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
  - Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore
- Tyler Hyungtaek Rim
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Yih-Chung Tham
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
5. Tham YC, Goh JHL, Anees A, Lei X, Rim TH, Chee ML, Wang YX, Jonas JB, Thakur S, Teo ZL, Cheung N, Hamzah H, Tan GSW, Husain R, Sabanayagam C, Wang JJ, Chen Q, Lu Z, Keenan TD, Chew EY, Tan AG, Mitchell P, Goh RSM, Xu X, Liu Y, Wong TY, Cheng CY. Author Correction: Detecting visually significant cataract using retinal photograph-based deep learning. Nat Aging 2022; 2:562. [PMID: 37118457; PMCID: PMC10154230; DOI: 10.1038/s43587-022-00245-5]
Affiliation(s)
- Yih-Chung Tham
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  - Duke-NUS Medical School, Singapore, Singapore
  - Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Jocelyn Hui Lin Goh
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ayesha Anees
  - Institute of High Performance Computing, A*STAR, Singapore, Singapore
- Xiaofeng Lei
  - Institute of High Performance Computing, A*STAR, Singapore, Singapore
- Tyler Hyungtaek Rim
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  - Duke-NUS Medical School, Singapore, Singapore
- Miao-Li Chee
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ya Xing Wang
  - Beijing Institute of Ophthalmology, Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
- Jost B Jonas
  - Department of Ophthalmology, Medical Faculty Mannheim of the Ruprecht-Karls-University Heidelberg, Mannheim, Germany
- Sahil Thakur
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Zhen Ling Teo
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ning Cheung
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  - Duke-NUS Medical School, Singapore, Singapore
- Haslina Hamzah
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Gavin S W Tan
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  - Duke-NUS Medical School, Singapore, Singapore
- Rahat Husain
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  - Duke-NUS Medical School, Singapore, Singapore
- Charumathi Sabanayagam
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  - Duke-NUS Medical School, Singapore, Singapore
- Qingyu Chen
  - National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Zhiyong Lu
  - National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Tiarnan D Keenan
  - National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Emily Y Chew
  - National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Ava Grace Tan
  - Centre for Vision Research, Department of Ophthalmology, The Westmead Institute for Medical Research, University of Sydney, Westmead Hospital, Westmead, New South Wales, Australia
  - National Health Medical Research Council Clinical Trials Centre, University of Sydney, Sydney, New South Wales, Australia
- Paul Mitchell
  - Centre for Vision Research, Department of Ophthalmology, The Westmead Institute for Medical Research, University of Sydney, Westmead Hospital, Westmead, New South Wales, Australia
- Rick S M Goh
  - Institute of High Performance Computing, A*STAR, Singapore, Singapore
- Xinxing Xu
  - Institute of High Performance Computing, A*STAR, Singapore, Singapore
- Yong Liu
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  - Duke-NUS Medical School, Singapore, Singapore
  - Institute of High Performance Computing, A*STAR, Singapore, Singapore
- Tien Yin Wong
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  - Duke-NUS Medical School, Singapore, Singapore
  - Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Ching-Yu Cheng
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
  - Duke-NUS Medical School, Singapore, Singapore
  - Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
6. Tham YC, Goh JHL, Anees A, Lei X, Rim TH, Chee ML, Wang YX, Jonas JB, Thakur S, Teo ZL, Cheung N, Hamzah H, Tan GSW, Husain R, Sabanayagam C, Wang JJ, Chen Q, Lu Z, Keenan TD, Chew EY, Tan AG, Mitchell P, Goh RSM, Xu X, Liu Y, Wong TY, Cheng CY. Detecting visually significant cataract using retinal photograph-based deep learning. Nat Aging 2022; 2:264-271. [PMID: 37118370; PMCID: PMC10154193; DOI: 10.1038/s43587-022-00171-6]
Abstract
Age-related cataracts are the leading cause of visual impairment among older adults. Many significant cases remain undiagnosed or neglected in communities, owing to the limited availability or accessibility of cataract screening. In the present study, we report the development and validation of a retinal photograph-based deep-learning algorithm for automated detection of visually significant cataracts, using more than 25,000 images from population-based studies. In the internal test set, the area under the receiver operating characteristic curve (AUROC) was 96.6%. External testing across three studies showed AUROCs of 91.6-96.5%. In a separate test set of 186 eyes, we further compared the algorithm's performance with the evaluations of four ophthalmologists. The algorithm performed comparably, if not slightly better (sensitivity of 93.3% versus 51.7-96.6% by the ophthalmologists, and specificity of 99.0% versus 90.7-97.9%). Our findings show the potential of a retinal photograph-based screening tool for visually significant cataracts among older adults, enabling more appropriate referrals to tertiary eye centers.
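Quoting a single sensitivity/specificity pair for comparison with human graders requires fixing an operating threshold on the algorithm's ROC curve. One common choice (an assumption for illustration; the abstract does not state how the threshold was chosen) is the cutoff maximizing Youden's J statistic:

```python
def youden_threshold(labels, scores):
    """Scan candidate cutoffs and return (threshold, J) maximizing
    J = sensitivity + specificity - 1. Assumes both classes are present."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        preds = [int(s >= t) for s in scores]
        tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
        fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
        tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
        fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

The threshold is chosen on validation data and then frozen before test-set evaluation; otherwise the reported sensitivity/specificity would be optimistically biased.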
Affiliation(s)
- Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Jocelyn Hui Lin Goh
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Ayesha Anees
- Institute of High Performance Computing, A*STAR, Singapore, Singapore
| | - Xiaofeng Lei
- Institute of High Performance Computing, A*STAR, Singapore, Singapore
| | - Tyler Hyungtaek Rim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
| | - Miao-Li Chee
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
- Jost B Jonas
- Department of Ophthalmology, Medical Faculty Mannheim of the Ruprecht-Karls-University Heidelberg, Mannheim, Germany
- Sahil Thakur
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Zhen Ling Teo
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ning Cheung
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Haslina Hamzah
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Gavin S W Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Rahat Husain
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Charumathi Sabanayagam
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Qingyu Chen
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Zhiyong Lu
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Tiarnan D Keenan
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Emily Y Chew
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Ava Grace Tan
- Centre for Vision Research, Department of Ophthalmology, The Westmead Institute for Medical Research, University of Sydney, Westmead Hospital, Westmead, New South Wales, Australia
- National Health Medical Research Council Clinical Trials Centre, University of Sydney, Sydney, New South Wales, Australia
- Paul Mitchell
- Centre for Vision Research, Department of Ophthalmology, The Westmead Institute for Medical Research, University of Sydney, Westmead Hospital, Westmead, New South Wales, Australia
- Rick S M Goh
- Institute of High Performance Computing, A*STAR, Singapore, Singapore
- Xinxing Xu
- Institute of High Performance Computing, A*STAR, Singapore, Singapore
- Yong Liu
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Institute of High Performance Computing, A*STAR, Singapore, Singapore
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
|
7
|
Keenan TDL, Chen Q, Agrón E, Tham YC, Lin Goh JH, Lei X, Ng YP, Liu Y, Xu X, Cheng CY, Bikbov MM, Jonas JB, Bhandari S, Broadhead GK, Colyer MH, Corsini J, Cousineau-Krieger C, Gensheimer W, Grasic D, Lamba T, Magone MT, Maiberger M, Oshinsky A, Purt B, Shin SY, Thavikulwat AT, Lu Z, Chew EY. Deep Learning Automated Diagnosis and Quantitative Classification of Cataract Type and Severity. Ophthalmology 2022; 129:571-584. [PMID: 34990643 PMCID: PMC9038670 DOI: 10.1016/j.ophtha.2021.12.017] [Received: 08/09/2021] [Revised: 12/10/2021] [Accepted: 12/27/2021] [Indexed: 12/14/2022]
Abstract
PURPOSE To develop and evaluate deep learning models to perform automated diagnosis and quantitative classification of age-related cataract, including all three anatomical types, from anterior segment photographs. DESIGN Application of deep learning models to Age-Related Eye Disease Study (AREDS) dataset. PARTICIPANTS 18,999 photographs (6,333 triplets) from longitudinal follow-up of 1,137 eyes (576 AREDS participants). METHODS Deep learning models were trained to detect and quantify nuclear cataract (NS; scale 0.9-7.1) from 45-degree slit-lamp photographs and cortical (CLO; scale 0-100%) and posterior subcapsular (PSC; scale 0-100%) cataract from retroillumination photographs. Model performance was compared with that of 14 ophthalmologists and 24 medical students. The ground truth labels were from reading center grading. MAIN OUTCOME MEASURES Mean squared error (MSE). RESULTS On the full test set, mean MSE values for the deep learning models were: 0.23 (SD 0.01) for NS, 13.1 (SD 1.6) for CLO, and 16.6 (SD 2.4) for PSC. On a subset of the test set (substantially enriched for positive cases of CLO and PSC), for NS, mean MSE for the models was 0.23 (SD 0.02), compared to 0.98 (SD 0.23; p=0.000001) for the ophthalmologists, and 1.24 (SD 0.33; p=0.000005) for the medical students. For CLO, mean MSE values were 53.5 (SD 14.8), compared to 134.9 (SD 89.9; p=0.003) and 422.0 (SD 944.4; p=0.0007), respectively. For PSC, mean MSE values were 171.9 (SD 38.9), compared to 176.8 (SD 98.0; p=0.67) and 395.2 (SD 632.5; p=0.18), respectively. In external validation on the Singapore Malay Eye Study (sampled to reflect the distribution of cataract severity in AREDS), MSE was 1.27 for NS and 25.5 for PSC. CONCLUSIONS A deep learning framework was able to perform automated and quantitative classification of cataract severity for all three types of age-related cataract. 
For the two most common types (NS and CLO), the accuracy was significantly superior to that of ophthalmologists; for the least common type (PSC), the accuracy was similar. The framework may have wide potential applications in both clinical and research domains. In the future, such approaches may increase the accessibility of cataract assessment globally. The code and models are publicly available at https://XXX.
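The abstract above rests on mean squared error (MSE) comparisons between model grades and reading center ground truth. As a minimal illustration of how that metric behaves (all grade values below are invented for illustration, not taken from the study):

```python
# Illustrative computation of the mean squared error (MSE) metric used
# to compare predicted cataract severity grades against ground truth.
# Lower MSE means predictions sit closer to the reference grades.

def mse(predictions, ground_truth):
    """Mean squared error between predicted and true severity grades."""
    assert len(predictions) == len(ground_truth)
    n = len(ground_truth)
    return sum((p - t) ** 2 for p, t in zip(predictions, ground_truth)) / n

# Hypothetical nuclear sclerosis (NS) grades on the 0.9-7.1 scale:
truth  = [2.0, 3.5, 4.1, 1.2, 5.0]
model  = [2.1, 3.4, 4.4, 1.0, 5.2]  # small deviations from truth
grader = [2.5, 4.0, 3.0, 2.0, 4.5]  # larger deviations from truth

print(round(mse(model, truth), 3))   # small errors -> low MSE
print(round(mse(grader, truth), 3))  # larger errors -> higher MSE
```

Because the per-eye errors are squared before averaging, a few large grading disagreements dominate the score, which is why the human graders' occasional large deviations produce the much higher MSE values reported above.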
Affiliation(s)
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Qingyu Chen
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Elvira Agrón
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
- Xiaofeng Lei
- Institute of High Performance Computing, A*STAR, Singapore
- Yi Pin Ng
- Institute of High Performance Computing, A*STAR, Singapore
- Yong Liu
- Duke-NUS Medical School, Singapore; Institute of High Performance Computing, A*STAR, Singapore
- Xinxing Xu
- Duke-NUS Medical School, Singapore; Institute of High Performance Computing, A*STAR, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore; Institute of High Performance Computing, A*STAR, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Jost B Jonas
- Department of Ophthalmology, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; Institute of Molecular and Clinical Ophthalmology Basel, Switzerland; Privatpraxis Prof Jonas und Dr Panda-Jonas, Heidelberg, Germany
- Sanjeeb Bhandari
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Geoffrey K Broadhead
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Marcus H Colyer
- Department of Ophthalmology, Madigan Army Medical Center, Tacoma, WA, USA; Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Jonathan Corsini
- Warfighter Eye Center, Malcolm Grow Medical Clinics and Surgery Center, Joint Base Andrews, MD, USA
- Chantal Cousineau-Krieger
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- William Gensheimer
- White River Junction Veterans Affairs Medical Center, White River Junction, VT, USA; Geisel School of Medicine, Dartmouth, NH, USA
- David Grasic
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Tania Lamba
- Washington DC Veterans Affairs Medical Center, Washington DC, USA
- M Teresa Magone
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Arnold Oshinsky
- Washington DC Veterans Affairs Medical Center, Washington DC, USA
- Boonkit Purt
- Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD, USA; Department of Ophthalmology, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Soo Y Shin
- Washington DC Veterans Affairs Medical Center, Washington DC, USA
- Alisa T Thavikulwat
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Zhiyong Lu
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
|
8
|
Tham YC, Anees A, Zhang L, Goh JHL, Rim TH, Nusinovici S, Hamzah H, Chee ML, Tjio G, Li S, Xu X, Goh R, Tang F, Cheung CYL, Wang YX, Nangia V, Jonas JB, Gopinath B, Mitchell P, Husain R, Lamoureux E, Sabanayagam C, Wang JJ, Aung T, Liu Y, Wong TY, Cheng CY. Referral for disease-related visual impairment using retinal photograph-based deep learning: a proof-of-concept, model development study. The Lancet Digital Health 2021; 3:e29-e40. [DOI: 10.1016/s2589-7500(20)30271-5] [Received: 06/24/2020] [Revised: 10/14/2020] [Accepted: 10/24/2020] [Indexed: 11/26/2022]
|
9
|
Abstract
The rising popularity of artificial intelligence (AI) in ophthalmology is fuelled by the ever-increasing volume of clinical "big data" that can be used for algorithm development. Cataract is one of the leading causes of visual impairment worldwide. However, compared with other major age-related eye diseases, such as diabetic retinopathy, age-related macular degeneration, and glaucoma, AI development in the domain of cataract remains relatively underexplored. In this regard, several previous studies have explored algorithms for automated cataract assessment using either slit-lamp or color fundus photographs, while other study groups have proposed or derived new AI-based calculations of intraocular lens power before cataract surgery. Along with advancements in the digitization of clinical data, data curation for future cataract-related AI development is bound to improve significantly in the foreseeable future. Even though most of these previous studies reported promising early performance, limitations such as the lack of robust, high-quality training data and the lack of external validation remain. In the next phase of work, apart from algorithm performance, it will also be pertinent to evaluate the deployment angles, feasibility, efficiency, and cost-effectiveness of these new cataract-related AI systems.
Affiliation(s)
- Jocelyn Hui Lin Goh
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- School of Chemical and Biomedical Engineering, Division of Bioengineering, Nanyang Technological University, Singapore
- Zhi Wei Lim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Faculty of Medicine, University of New South Wales, Sydney, Australia
- Xiaoling Fang
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Department of Ophthalmology, Shanghai Eye Disease Prevention & Treatment Center, Shanghai Eye Hospital, Shanghai, China
- Ayesha Anees
- Institute of High Performance Computing, A*STAR, Singapore
- Simon Nusinovici
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Tyler Hyungtaek Rim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Duke-NUS Medical School, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Duke-NUS Medical School, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore
- Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Duke-NUS Medical School, Singapore
|