26. Gunasekeran DV, Zheng F, Lim GYS, Chong CCY, Zhang S, Ng WY, Keel S, Xiang Y, Park KH, Park SJ, Chandra A, Wu L, Campbell JP, Lee AY, Keane PA, Denniston A, Lam DSC, Fung AT, Chan PRV, Sadda SR, Loewenstein A, Grzybowski A, Fong KCS, Wu WC, Bachmann LM, Zhang X, Yam JC, Cheung CY, Pongsachareonnont P, Ruamviboonsuk P, Raman R, Sakamoto T, Habash R, Girard M, Milea D, Ang M, Tan GSW, Schmetterer L, Cheng CY, Lamoureux E, Lin H, van Wijngaarden P, Wong TY, Ting DSW. Acceptance and Perception of Artificial Intelligence Usability in Eye Care (APPRAISE) for Ophthalmologists: A Multinational Perspective. Front Med (Lausanne) 2022; 9:875242. PMID: 36314006; PMCID: PMC9612721; DOI: 10.3389/fmed.2022.875242.
Abstract
Background Many artificial intelligence (AI) studies have focused on development of AI models, novel techniques, and reporting guidelines. However, little is understood about clinicians' perspectives on AI applications in medical fields including ophthalmology, particularly in light of recent regulatory guidelines. The aim of this study was to evaluate the perspectives of ophthalmologists regarding AI in 4 major eye conditions: diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD) and cataract. Methods This was a multinational survey of ophthalmologists conducted between March 1st, 2020 and February 29th, 2021, disseminated via the major global ophthalmology societies. The survey was designed based on microsystem, mesosystem and macrosystem questions, and the software as a medical device (SaMD) regulatory framework chaired by the Food and Drug Administration (FDA). Factors associated with AI adoption for ophthalmology were analyzed with multivariable logistic regression and random forest machine learning. Results One thousand one hundred seventy-six ophthalmologists from 70 countries participated, with a response rate ranging from 78.8 to 85.8% per question. Ophthalmologists were more willing to use AI as clinical assistive tools (88.1%, n = 890/1,010), especially those with over 20 years' experience (OR 3.70, 95% CI: 1.10-12.5, p = 0.035), than as clinical decision support tools (78.8%, n = 796/1,010) or diagnostic tools (64.5%, n = 651). A majority of ophthalmologists felt that AI is most relevant to DR (78.2%), followed by glaucoma (70.7%), AMD (66.8%), and cataract (51.4%) detection. Many participants were confident that their roles will not be replaced (68.2%, n = 632/927), and felt that COVID-19 catalyzed willingness to adopt AI (80.9%, n = 750/927). Common barriers to implementation include medical liability from errors (72.5%, n = 672/927), whereas enablers include improving access (94.5%, n = 876/927). Machine learning modeling predicted acceptance from participant demographics with moderate to high accuracy, with areas under the receiver operating characteristic curve of 0.63-0.83. Conclusion Ophthalmologists are receptive to adopting AI as assistive tools for DR, glaucoma, and AMD. Furthermore, machine learning is a useful method for evaluating predictive factors in clinical qualitative questionnaires. This study outlines actionable insights for future research and facilitation interventions to drive adoption and operationalization of AI tools in ophthalmology.
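The modeling step this abstract describes, predicting acceptance from participant demographics and scoring it by AUROC, can be sketched as below. The features, effect sizes, and data here are synthetic stand-ins for illustration only, not the APPRAISE survey variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic demographic features (illustrative, not the study's variables):
# years of experience, practice-setting code, prior AI exposure (0/1).
X = np.column_stack([
    rng.integers(0, 40, n),
    rng.integers(0, 3, n),
    rng.integers(0, 2, n),
])
# Acceptance loosely correlated with experience and exposure, plus noise.
logit = 0.05 * X[:, 0] + 0.8 * X[:, 2] - 1.5 + rng.normal(0, 1, n)
y = (logit > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUROC: {auc:.2f}")
```

With genuinely informative demographics, a model like this would land in the moderate-accuracy range the study reports; with uninformative features it falls back toward 0.5.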
27. Aung YYM, Wong DCS, Ting DSW. The promise of artificial intelligence: a review of the opportunities and challenges of artificial intelligence in healthcare. Br Med Bull 2021; 139:4-15. PMID: 34405854; DOI: 10.1093/bmb/ldab016.
Abstract
INTRODUCTION Artificial intelligence (AI) and machine learning (ML) are rapidly evolving fields in various sectors, including healthcare. This article reviews AI's present applications in healthcare, including its benefits, limitations and future scope. SOURCES OF DATA A review of the English literature was conducted with the search terms 'AI' or 'ML' or 'deep learning' and 'healthcare' or 'medicine' using PubMed and Google Scholar from 2000 to 2021. AREAS OF AGREEMENT AI could transform physician workflow and patient care through its applications, from assisting physicians and replacing administrative tasks to augmenting medical knowledge. AREAS OF CONTROVERSY From challenges in training ML systems to unclear accountability, AI's implementation is difficult and incremental at best. Physicians also lack understanding of what AI implementation could represent. GROWING POINTS AI can ultimately prove beneficial in healthcare, but requires meticulous governance similar to the governance of physician conduct. AREAS TIMELY FOR DEVELOPING RESEARCH Regulatory guidelines are needed on how to safely implement and assess AI technology, alongside further research into the specific capabilities and limitations of its medical use.
28. Liu TYA, Wei J, Zhu H, Subramanian PS, Myung D, Yi PH, Hui FK, Unberath M, Ting DSW, Miller NR. Detection of Optic Disc Abnormalities in Color Fundus Photographs Using Deep Learning. J Neuroophthalmol 2021; 41:368-374. PMID: 34415271; PMCID: PMC10637344; DOI: 10.1097/wno.0000000000001358.
Abstract
BACKGROUND To date, deep learning-based detection of optic disc abnormalities in color fundus photographs has mostly been limited to the field of glaucoma. However, many life-threatening systemic and neurological conditions can manifest as optic disc abnormalities. In this study, we aimed to extend the application of deep learning (DL) in optic disc analyses to detect a spectrum of nonglaucomatous optic neuropathies. METHODS Using transfer learning, we trained a ResNet-152 deep convolutional neural network (DCNN) to distinguish between normal and abnormal optic discs in color fundus photographs (CFPs). Our training data set included 944 deidentified CFPs (abnormal 364; normal 580). Our testing data set included 151 deidentified CFPs (abnormal 71; normal 80). Both the training and testing data sets contained a wide range of optic disc abnormalities, including but not limited to ischemic optic neuropathy, atrophy, compressive optic neuropathy, hereditary optic neuropathy, hypoplasia, papilledema, and toxic optic neuropathy. The standard measures of performance (sensitivity, specificity, and area under the receiver operating characteristic curve [AUC-ROC]) were used for evaluation. RESULTS During the 10-fold cross-validation test, our DCNN for distinguishing between normal and abnormal optic discs achieved the following mean performance: AUC-ROC 0.99 (95% CI: 0.98-0.99), sensitivity 94% (95% CI: 91%-97%), and specificity 96% (95% CI: 93%-99%). When evaluated against the external testing data set, our model achieved the following mean performance: AUC-ROC 0.87, sensitivity 90%, and specificity 69%. CONCLUSION In summary, we have developed a deep learning algorithm that is capable of detecting a spectrum of optic disc abnormalities in color fundus photographs, with a focus on neuro-ophthalmological etiologies. As the next step, we plan to validate our algorithm prospectively as a focused screening tool in the emergency department, which, if successful, could be beneficial because current practice patterns and training trends predict a shortage of neuro-ophthalmologists, and of ophthalmologists in general, in the near future.
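The standard performance measures this abstract names (sensitivity, specificity, AUC-ROC) can be computed from labels and classifier scores as in this minimal sketch; the labels and scores are toy values, not the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Toy ground truth (1 = abnormal disc) and classifier scores.
y_true  = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 0])
y_score = np.array([0.1, 0.3, 0.2, 0.6, 0.8, 0.7, 0.9, 0.4, 0.95, 0.15])
y_pred = (y_score >= 0.5).astype(int)  # threshold the scores at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
auc = roc_auc_score(y_true, y_score)  # threshold-free ranking metric
print(f"sens={sensitivity:.2f} spec={specificity:.2f} auc={auc:.2f}")
```

Note that sensitivity and specificity depend on the chosen threshold, while AUC-ROC summarizes performance across all thresholds, which is why papers report both.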
29. Tang F, Wang X, Ran AR, Chan CKM, Ho M, Yip W, Young AL, Lok J, Szeto S, Chan J, Yip F, Wong R, Tang Z, Yang D, Ng DS, Chen LJ, Brelén M, Chu V, Li K, Lai THT, Tan GS, Ting DSW, Huang H, Chen H, Ma JH, Tang S, Leng T, Kakavand S, Mannil SS, Chang RT, Liew G, Gopinath B, Lai TYY, Pang CP, Scanlon PH, Wong TY, Tham CC, Chen H, Heng PA, Cheung CY. A Multitask Deep-Learning System to Classify Diabetic Macular Edema for Different Optical Coherence Tomography Devices: A Multicenter Analysis. Diabetes Care 2021; 44:2078-2088. PMID: 34315698; PMCID: PMC8740924; DOI: 10.2337/dc20-3064.
Abstract
OBJECTIVE Diabetic macular edema (DME) is the primary cause of vision loss among individuals with diabetes mellitus (DM). We developed, validated, and tested a deep learning (DL) system for classifying DME using images from three common commercially available optical coherence tomography (OCT) devices. RESEARCH DESIGN AND METHODS We trained and validated two versions of a multitask convolution neural network (CNN) to classify DME (center-involved DME [CI-DME], non-CI-DME, or absence of DME) using three-dimensional (3D) volume scans and 2D B-scans, respectively. For both 3D and 2D CNNs, we used the residual network (ResNet) as the backbone. For the 3D CNN, we used a 3D version of ResNet-34 with the last fully connected layer removed as the feature extraction module. A total of 73,746 OCT images were used for training and primary validation. External testing was performed using 26,981 images across seven independent data sets from Singapore, Hong Kong, the U.S., China, and Australia. RESULTS In classifying the presence or absence of DME, the DL system achieved area under the receiver operating characteristic curves (AUROCs) of 0.937 (95% CI 0.920-0.954), 0.958 (0.930-0.977), and 0.965 (0.948-0.977) for the primary data set obtained from CIRRUS, SPECTRALIS, and Triton OCTs, respectively, in addition to AUROCs >0.906 for the external data sets. For further classification of the CI-DME and non-CI-DME subgroups, the AUROCs were 0.968 (0.940-0.995), 0.951 (0.898-0.982), and 0.975 (0.947-0.991) for the primary data set and >0.894 for the external data sets. CONCLUSIONS We demonstrated excellent performance with a DL system for the automated classification of DME, highlighting its potential as a promising second-line screening tool for patients with DM, which may potentially create a more effective triaging mechanism to eye clinics.
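For a three-class label set such as no DME / non-CI-DME / CI-DME, per-class discrimination is commonly summarized with one-vs-rest AUROCs; the paper itself reports AUROCs per comparison. A minimal sketch with illustrative class probabilities (not the study's model output):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy 3-class labels (0 = no DME, 1 = non-CI-DME, 2 = CI-DME)
# and illustrative predicted class probabilities (rows sum to 1).
y_true = np.array([0, 0, 1, 1, 2, 2, 0, 2, 1, 0])
y_prob = np.array([
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],
    [0.2, 0.6, 0.2],
    [0.3, 0.5, 0.2],
    [0.1, 0.2, 0.7],
    [0.1, 0.3, 0.6],
    [0.6, 0.3, 0.1],
    [0.2, 0.2, 0.6],
    [0.3, 0.4, 0.3],
    [0.9, 0.05, 0.05],
])

# Macro-averaged one-vs-rest AUROC: each class scored against the rest.
auc_ovr = roc_auc_score(y_true, y_prob, multi_class="ovr")
print(f"macro one-vs-rest AUROC: {auc_ovr:.3f}")
```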
30. Lin D, Xiong J, Liu C, Zhao L, Li Z, Yu S, Wu X, Ge Z, Hu X, Wang B, Fu M, Zhao X, Wang X, Zhu Y, Chen C, Li T, Li Y, Wei W, Zhao M, Li J, Xu F, Ding L, Tan G, Xiang Y, Hu Y, Zhang P, Han Y, Li JPO, Wei L, Zhu P, Liu Y, Chen W, Ting DSW, Wong TY, Chen Y, Lin H. Application of Comprehensive Artificial Intelligence Retinal Expert (CARE) system: a national real-world evidence study. Lancet Digit Health 2021; 3:e486-e495. PMID: 34325853; DOI: 10.1016/s2589-7500(21)00086-8.
Abstract
BACKGROUND Medical artificial intelligence (AI) has entered the clinical implementation phase, although real-world performance of deep-learning systems (DLSs) for screening fundus disease remains unsatisfactory. Our study aimed to train a clinically applicable DLS for fundus diseases using data derived from the real world, and externally test the model using fundus photographs collected prospectively from the settings in which the model would most likely be adopted. METHODS In this national real-world evidence study, we trained a DLS, the Comprehensive AI Retinal Expert (CARE) system, to identify the 14 most common retinal abnormalities using 207 228 colour fundus photographs derived from 16 clinical settings with different disease distributions. CARE was internally validated using 21 867 photographs and externally tested using 18 136 photographs prospectively collected from 35 real-world settings across China where CARE might be adopted, including eight tertiary hospitals, six community hospitals, and 21 physical examination centres. The performance of CARE was further compared with that of 16 ophthalmologists and tested using datasets with non-Chinese ethnicities and previously unused camera types. This study was registered with ClinicalTrials.gov, NCT04213430, and is currently closed. FINDINGS The area under the receiver operating characteristic curve (AUC) in the internal validation set was 0·955 (SD 0·046). AUC values in the external test set were 0·965 (0·035) in tertiary hospitals, 0·983 (0·031) in community hospitals, and 0·953 (0·042) in physical examination centres. The performance of CARE was similar to that of ophthalmologists. Large variations in sensitivity were observed among the ophthalmologists in different regions and with varying experience. The system retained strong identification performance when tested using the non-Chinese dataset (AUC 0·960, 95% CI 0·957-0·964 in referable diabetic retinopathy). 
INTERPRETATION Our DLS (CARE) showed satisfactory performance for screening multiple retinal abnormalities in real-world settings using prospectively collected fundus photographs, suggesting that the system could be implemented and adopted for clinical care. FUNDING This study was funded by the National Key R&D Programme of China, the Science and Technology Planning Projects of Guangdong Province, the National Natural Science Foundation of China, the Natural Science Foundation of Guangdong Province, and the Fundamental Research Funds for the Central Universities. TRANSLATION For the Chinese translation of the abstract see Supplementary Materials section.
31. Ting DSW, Wong TY, Park KH, Cheung CY, Tham CC, Lam DSC. Ocular Imaging Standardization for Artificial Intelligence Applications in Ophthalmology: the Joint Position Statement and Recommendations From the Asia-Pacific Academy of Ophthalmology and the Asia-Pacific Ocular Imaging Society. Asia Pac J Ophthalmol (Phila) 2021; 10:348-349. PMID: 34415245; DOI: 10.1097/apo.0000000000000421.
32. Rampat R, Deshmukh R, Chen X, Ting DSW, Said DG, Dua HS, Ting DSJ. Artificial Intelligence in Cornea, Refractive Surgery, and Cataract: Basic Principles, Clinical Applications, and Future Directions. Asia Pac J Ophthalmol (Phila) 2021; 10:268-281. PMID: 34224467; PMCID: PMC7611495; DOI: 10.1097/apo.0000000000000394.
Abstract
ABSTRACT Corneal diseases, uncorrected refractive errors, and cataract represent the major causes of blindness globally. The number of refractive surgeries, either cornea- or lens-based, is also on the rise as the demand for perfect vision continues to increase. With the recent advances and promise of artificial intelligence (AI) technologies demonstrated in the realm of ophthalmology, particularly in retinal diseases and glaucoma, AI researchers and clinicians are now channeling their focus toward the less explored ophthalmic areas related to the anterior segment of the eye. Conditions that rely on anterior segment imaging modalities, including slit-lamp photography, anterior segment optical coherence tomography, corneal tomography, in vivo confocal microscopy and/or optical biometers, are the most commonly explored areas. These include infectious keratitis, keratoconus, corneal grafts, ocular surface pathologies, preoperative screening before refractive surgery, intraocular lens calculation, and automated refraction, among others. In this review, we aimed to provide a comprehensive update on the utilization of AI in anterior segment diseases, with particular emphasis on advances in the past few years. In addition, we demystify some of the basic principles and terminologies related to AI, particularly machine learning and deep learning, to help improve the understanding, research and clinical implementation of these AI technologies among ophthalmologists and vision scientists. As we march toward the era of digital health, guidelines such as CONSORT-AI, SPIRIT-AI, and STARD-AI will play crucial roles in guiding and standardizing the conduct and reporting of AI-related trials, ultimately promoting their potential for clinical translation.
33. Ng WY, Tan TE, Xiao Z, Movva PVH, Foo FSS, Yun D, Chen W, Wong TY, Lin HT, Ting DSW. Blockchain Technology for Ophthalmology: Coming of Age? Asia Pac J Ophthalmol (Phila) 2021; 10:343-347. PMID: 34415244; DOI: 10.1097/apo.0000000000000399.
34. Valikodath NG, Cole E, Ting DSW, Campbell JP, Pasquale LR, Chiang MF, Chan RVP. Impact of Artificial Intelligence on Medical Education in Ophthalmology. Transl Vis Sci Technol 2021; 10:14. PMID: 34125146; PMCID: PMC8212436; DOI: 10.1167/tvst.10.7.14.
Abstract
Clinical care in ophthalmology is rapidly evolving as artificial intelligence (AI) algorithms are being developed. The medical community and national and federal regulatory bodies are recognizing the importance of adapting to AI. However, there is a gap in physicians' understanding of AI and its implications for clinical care, and there are limited resources and established programs focused on AI and medical education in ophthalmology. Physicians are essential in the application of AI in a clinical context. An AI curriculum in ophthalmology can help provide physicians with a fund of knowledge and skills to integrate AI into their practice. In this paper, we provide general recommendations for an AI curriculum for medical students, residents, and fellows in ophthalmology.
35. Tey KY, Wong QY, Dan YS, Tsai ASH, Ting DSW, Ang M, Cheung GCM, Lee SY, Wong TY, Hoang QV, Wong CW. Association of Aberrant Posterior Vitreous Detachment and Pathologic Tractional Forces With Myopic Macular Degeneration. Invest Ophthalmol Vis Sci 2021; 62:7. PMID: 34096974; PMCID: PMC8185394; DOI: 10.1167/iovs.62.7.7.
Abstract
Purpose The purpose of this study was to assess whether the tractional elements of pathologic myopia (PM; e.g. myopic traction maculopathy [MTM], posterior staphyloma [PS], and aberrant posterior vitreous detachment [PVD]) are associated with myopic macular degeneration (MMD) independent of age and axial length, among highly myopic (HM) eyes. Methods One hundred twenty-nine individuals with 239 HM eyes from the Myopic and Pathologic Eyes in Singapore (MyoPES) cohort underwent ocular biometry, fundus photography, swept-source optical coherence tomography, and ocular B-scan ultrasound. Images were analyzed for PVD grade, and presence of MTM, PS, and MMD. The χ² test was done to determine the difference in prevalence of MMD between eyes with and without PVD, PS, and MTM. Multivariate probit regression analyses were performed to ascertain the relationship between the potential predictors (PVD, PS, and MTM) and outcome variable (MMD), after accounting for possible confounders (e.g. age and axial length). Marginal effects were reported. Results Controlling for potential confounders, eyes with MTM have a 29.92 percentage point higher likelihood of having MMD (P = 0.003), and eyes with PS have a 25.72 percentage point higher likelihood of having MMD (P = 0.002). The likelihood of MMD increases by 10.61 percentage points per 1 mm increase in axial length (P < 0.001). Subanalysis revealed that eyes with incomplete PVD have a 22.54 percentage point higher likelihood of having MMD than eyes with early PVD (P = 0.04). Conclusions Our study demonstrated an association between tractional (MTM, PS, and persistently incomplete PVD) and degenerative elements of PM independent of age and axial length. These data provide further insights into the pathogenesis of MMD.
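The χ² comparison of MMD prevalence between groups described in the Methods can be sketched as follows; the 2×2 counts here are hypothetical, not the MyoPES cohort data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table:
# rows = MTM present / absent, columns = MMD present / absent.
table = [[45, 30],    # MTM present: 45 with MMD, 30 without
         [40, 124]]   # MTM absent:  40 with MMD, 124 without

# chi2_contingency applies Yates' continuity correction for 2x2 tables
# by default and returns the statistic, p-value, degrees of freedom,
# and the expected counts under independence.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4g}, dof={dof}")
```

A small p-value here indicates the prevalence of MMD differs between the groups; the study then follows up with probit regression to adjust for confounders such as age and axial length.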
36. Valikodath NG, Al-Khaled T, Cole E, Ting DSW, Tu EY, Campbell JP, Chiang MF, Hallak JA, Chan RVP. Evaluation of pediatric ophthalmologists' perspectives of artificial intelligence in ophthalmology. J AAPOS 2021; 25:164.e1-164.e5. PMID: 34087473; PMCID: PMC8328946; DOI: 10.1016/j.jaapos.2021.01.011.
Abstract
PURPOSE To survey pediatric ophthalmologists on their perspectives of artificial intelligence (AI) in ophthalmology. METHODS This is a subgroup analysis of a study previously reported. In March 2019, members of the American Association for Pediatric Ophthalmology and Strabismus (AAPOS) were recruited via the online AAPOS discussion board to voluntarily complete a Web-based survey consisting of 15 items. Survey items assessed the extent participants "agreed" or "disagreed" with statements on the perceived benefits and concerns of AI in ophthalmology. Responses were analyzed using descriptive statistics. RESULTS A total of 80 pediatric ophthalmologists who are members of AAPOS completed the survey. The mean number of years since graduating residency was 21 years (range, 0-46). Overall, 91% (73/80) reported understanding the concept of AI, 70% (56/80) believed AI will improve the practice of ophthalmology, 68% (54/80) reported willingness to incorporate AI into their clinical practice, 65% (52/80) did not believe AI will replace physicians, and 71% (57/80) believed AI should be incorporated into medical school and residency curricula. However, 15% (12/80) were concerned that AI will replace physicians, 26% (21/80) believed AI will harm the patient-physician relationship, and 46% (37/80) reported concern over the diagnostic accuracy of AI. CONCLUSIONS Most pediatric ophthalmologists in this survey viewed the role of AI in ophthalmology positively.
37. Li JPO, Liu H, Ting DSJ, Jeon S, Chan RVP, Kim JE, Sim DA, Thomas PBM, Lin H, Chen Y, Sakamoto T, Loewenstein A, Lam DSC, Pasquale LR, Wong TY, Lam LA, Ting DSW. Digital technology, tele-medicine and artificial intelligence in ophthalmology: A global perspective. Prog Retin Eye Res 2021; 82:100900. PMID: 32898686; PMCID: PMC7474840; DOI: 10.1016/j.preteyeres.2020.100900.
Abstract
The simultaneous maturation of multiple digital and telecommunications technologies in 2020 has created an unprecedented opportunity for ophthalmology to adapt to new models of care using telehealth supported by digital innovations. These digital innovations include artificial intelligence (AI), 5th generation (5G) telecommunication networks and the Internet of Things (IoT), creating an interdependent ecosystem offering opportunities to develop new models of eye care addressing the challenges of COVID-19 and beyond. Ophthalmology has thrived in some of these areas partly due to its many image-based investigations. Telehealth and AI provide synchronous solutions to challenges facing ophthalmologists and healthcare providers worldwide. This article reviews how countries across the world have utilised these digital innovations to tackle diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, glaucoma, refractive error correction, cataract and other anterior segment disorders. The review summarises the digital strategies that countries are developing and discusses technologies that may increasingly enter the clinical workflow and processes of ophthalmologists. Furthermore, as countries around the world have initiated a series of escalating containment and mitigation measures during the COVID-19 pandemic, the delivery of eye care services globally has been significantly impacted. As ophthalmic services adapt and form a "new normal", the rapid adoption of telehealth and digital innovations during the pandemic is also discussed. Finally, challenges for validation and clinical implementation are considered, as well as recommendations on future directions.
38. Rim TH, Lee CJ, Tham YC, Cheung N, Yu M, Lee G, Kim Y, Ting DSW, Chong CCY, Choi YS, Yoo TK, Ryu IH, Baik SJ, Kim YA, Kim SK, Lee SH, Lee BK, Kang SM, Wong EYM, Kim HC, Kim SS, Park S, Cheng CY, Wong TY. Deep-learning-based cardiovascular risk stratification using coronary artery calcium scores predicted from retinal photographs. Lancet Digit Health 2021; 3:e306-e316. PMID: 33890578; DOI: 10.1016/s2589-7500(21)00043-1.
Abstract
BACKGROUND Coronary artery calcium (CAC) score is a clinically validated marker of cardiovascular disease risk. We developed and validated a novel cardiovascular risk stratification system based on deep-learning-predicted CAC from retinal photographs. METHODS We used 216 152 retinal photographs from five datasets from South Korea, Singapore, and the UK to train and validate the algorithms. First, using one dataset from a South Korean health-screening centre, we trained a deep-learning algorithm to predict the probability of the presence of CAC (ie, deep-learning retinal CAC score, RetiCAC). We stratified RetiCAC scores into tertiles and used Cox proportional hazards models to evaluate the ability of RetiCAC to predict cardiovascular events based on external test sets from South Korea, Singapore, and the UK Biobank. We evaluated the incremental values of RetiCAC when added to the Pooled Cohort Equation (PCE) for participants in the UK Biobank. FINDINGS RetiCAC outperformed all single clinical parameter models in predicting the presence of CAC (area under the receiver operating characteristic curve of 0·742, 95% CI 0·732-0·753). Among the 527 participants in the South Korean clinical cohort, 33 (6·3%) had cardiovascular events during the 5-year follow-up. When compared with the current CAC risk stratification (0, >0-100, and >100), the three-strata RetiCAC showed comparable prognostic performance with a concordance index of 0·71. In the Singapore population-based cohort (n=8551), 310 (3·6%) participants had fatal cardiovascular events over 10 years, and the three-strata RetiCAC was significantly associated with increased risk of fatal cardiovascular events (hazard ratio [HR] trend 1·33, 95% CI 1·04-1·71). In the UK Biobank (n=47 679), 337 (0·7%) participants had fatal cardiovascular events over 10 years. 
When added to the PCE, the three-strata RetiCAC improved cardiovascular risk stratification in the intermediate-risk group (HR trend 1·28, 95% CI 1·07-1·54) and borderline-risk group (1·62, 1·04-2·54), and the continuous net reclassification index was 0·261 (95% CI 0·124-0·364). INTERPRETATION A deep learning and retinal photograph-derived CAC score is comparable to CT scan-measured CAC in predicting cardiovascular events, and improves on current risk stratification approaches for cardiovascular disease events. These data suggest retinal photograph-based deep learning has the potential to be used as an alternative measure of CAC, especially in low-resource settings. FUNDING Yonsei University College of Medicine; Ministry of Health and Welfare, Korea Institute for Advancement of Technology, South Korea; Agency for Science, Technology, and Research; and National Medical Research Council, Singapore.
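The continuous net reclassification index reported above can be computed as in this minimal numpy sketch of the standard Pencina-style definition: any upward risk movement counts in favor for events, any downward movement in favor for non-events. The risk scores below are synthetic, not the study's data.

```python
import numpy as np

def continuous_nri(old_risk, new_risk, event):
    """Continuous NRI: [P(up|event) - P(down|event)]
    + [P(down|non-event) - P(up|non-event)]."""
    up = new_risk > old_risk
    down = new_risk < old_risk
    ev = np.asarray(event, dtype=bool)
    nri_events = up[ev].mean() - down[ev].mean()
    nri_nonevents = down[~ev].mean() - up[~ev].mean()
    return nri_events + nri_nonevents

rng = np.random.default_rng(1)
n = 2000
event = rng.random(n) < 0.1  # ~10% event rate

# Synthetic baseline risk, slightly higher for events.
old_risk = np.clip(rng.normal(0.10 + 0.05 * event, 0.05), 0, 1)
# New score tends to move events up and non-events down.
shift = 0.02 * event - 0.01 * ~event
new_risk = np.clip(old_risk + rng.normal(shift, 0.02), 0, 1)

nri = continuous_nri(old_risk, new_risk, event)
print(f"continuous NRI: {nri:.3f}")
```

The continuous NRI is bounded by ±2 (each component by ±1); a positive value, as in the study's 0.261, indicates net improvement in risk reclassification.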
39. Aggarwal R, Sounderajah V, Martin G, Ting DSW, Karthikesalingam A, King D, Ashrafian H, Darzi A. Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digit Med 2021; 4:65. PMID: 33828217; PMCID: PMC8027892; DOI: 10.1038/s41746-021-00438-z.
Abstract
Deep learning (DL) has the potential to transform medical diagnostics. However, the diagnostic accuracy of DL is uncertain. Our aim was to evaluate the diagnostic accuracy of DL algorithms to identify pathology in medical imaging. Searches were conducted in Medline and EMBASE up to January 2020. We identified 11,921 studies, of which 503 were included in the systematic review. Eighty-two studies in ophthalmology, 82 in breast disease and 115 in respiratory disease were included for meta-analysis. Two hundred twenty-four studies in other specialities were included for qualitative review. Peer-reviewed studies that reported on the diagnostic accuracy of DL algorithms to identify pathology using medical imaging were included. Primary outcomes were measures of diagnostic accuracy, study design and reporting standards in the literature. Estimates were pooled using random-effects meta-analysis. In ophthalmology, AUCs ranged between 0.933 and 1 for diagnosing diabetic retinopathy, age-related macular degeneration and glaucoma on retinal fundus photographs and optical coherence tomography. In respiratory imaging, AUCs ranged between 0.864 and 0.937 for diagnosing lung nodules or lung cancer on chest X-ray or CT scan. For breast imaging, AUCs ranged between 0.868 and 0.909 for diagnosing breast cancer on mammogram, ultrasound, MRI and digital breast tomosynthesis. Heterogeneity was high between studies and extensive variation in methodology, terminology and outcome measures was noted. This can lead to an overestimation of the diagnostic accuracy of DL algorithms on medical imaging. There is an immediate need for the development of artificial intelligence-specific EQUATOR guidelines, particularly STARD, in order to provide guidance around key issues in this field.
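Pooling per-study estimates with a random-effects model, as in this meta-analysis, is commonly done with the DerSimonian-Laird estimator of between-study variance. The sketch below pools hypothetical AUC values directly for simplicity; real diagnostic-accuracy meta-analyses often instead pool sensitivity/specificity with hierarchical models.

```python
import numpy as np

def random_effects_pool(estimates, variances):
    """DerSimonian-Laird random-effects pooling of per-study estimates."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)

    # Fixed-effect weights and pooled estimate.
    w = 1.0 / var
    pooled_fixed = np.sum(w * est) / np.sum(w)

    # Cochran's Q and the between-study variance tau^2.
    q = np.sum(w * (est - pooled_fixed) ** 2)
    df = len(est) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights incorporate tau^2.
    w_re = 1.0 / (var + tau2)
    pooled = np.sum(w_re * est) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study AUCs and within-study variances.
aucs = [0.94, 0.96, 0.90, 0.97, 0.93]
variances = [0.0004, 0.0002, 0.0009, 0.0001, 0.0006]
pooled, ci = random_effects_pool(aucs, variances)
print(f"pooled AUC {pooled:.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f})")
```

When heterogeneity is high, tau² grows, the weights flatten, and the confidence interval widens relative to a fixed-effect pool, which is the behavior the abstract's heterogeneity caveat refers to.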
40. Gunasekeran DV, Tham YC, Ting DSW, Tan GSW, Wong TY. Digital health during COVID-19: lessons from operationalising new models of care in ophthalmology. Lancet Digit Health 2021; 3:e124-e134. PMID: 33509383; DOI: 10.1016/s2589-7500(20)30287-9.
Abstract
The COVID-19 pandemic has resulted in massive disruptions within health care, both directly as a result of the infectious disease outbreak, and indirectly because of public health measures to mitigate transmission. This disruption has caused rapid dynamic fluctuations in demand, capacity, and even contextual aspects of health care. Therefore, the traditional face-to-face patient-physician care model has had to be re-examined in many countries, with digital technology and new models of care being rapidly deployed to meet the various challenges of the pandemic. This Viewpoint highlights new models in ophthalmology that have adapted to incorporate digital health solutions such as telehealth, artificial intelligence decision support for triaging and clinical care, and home monitoring. These models can be operationalised for different clinical applications based on the technology, clinical need, demand from patients, and manpower availability, ranging from out-of-hospital models, including the hub-and-spoke pre-hospital model, to front-line models such as the inflow funnel model, and monitoring models such as the so-called lighthouse model for provider-led monitoring. Lessons learnt from operationalising these models for ophthalmology in the context of COVID-19 are discussed, along with their relevance for other specialty domains.
41.
Campbell JP, Lee AY, Abràmoff M, Keane PA, Ting DSW, Lum F, Chiang MF. Reporting Guidelines for Artificial Intelligence in Medical Research. Ophthalmology 2020; 127:1596-1599. PMID: 32920029; PMCID: PMC7875521; DOI: 10.1016/j.ophtha.2020.09.009.
42.
Fenwick EK, Man REK, Gan ATL, Aravindhan A, Tey CS, Soon HJT, Ting DSW, Yeo SIY, Lee SY, Tan G, Wong TY, Lamoureux EL. Validation of a New Diabetic Retinopathy Knowledge and Attitudes Questionnaire in People with Diabetic Retinopathy and Diabetic Macular Edema. Transl Vis Sci Technol 2020; 9:32. PMID: 33062395; PMCID: PMC7533728; DOI: 10.1167/tvst.9.10.32.
Abstract
Purpose A validated questionnaire assessing diabetic retinopathy (DR)- and diabetic macular edema (DME)-related knowledge (K) and attitudes (A) is lacking. We developed and validated the Diabetic Retinopathy Knowledge and Attitudes (DRKA) questionnaire and explored the association between K and A and the self-reported difficulty accessing DR-related information (hereafter referred to as Access). Methods In this mixed-methods study, eight focus groups with 36 people with DR or DME (mean age, 60.1 ± 8.0 years; 53% male) were conducted to develop content (phase 1). In phase 2, we conducted 10 cognitive interviews to refine item phrasing. In phase 3, we administered 28-item K and nine-item A pilot questionnaires to 200 purposively recruited DR/DME patients (mean age, 59.0 ± 10.6 years; 59% male). The psychometric properties of DRKA were assessed using Rasch and classical methods. The association between K and A and DR-related Access was assessed using univariable linear regression of mean K/A scores against Access. Results Following Rasch-guided amendments, the final 22-item K and nine-item A scales demonstrated adequate psychometric properties, although precision remained borderline. The scales displayed excellent discriminant validity, with K/A scores increasing as education level increased. Compared to those with low scores, those with high K/A scores were more likely to report better access to DR-related information, with K scores of 0.99 ± 0.86 for no difficulty; 0.79 ± 1.05 for a little difficulty; and 0.24 ± 0.85 for moderate or worse difficulty (P < 0.001). Conclusions The psychometrically robust 31-item DRKA questionnaire can measure DR- and DME-related knowledge and attitudes. Translational Relevance The DRKA questionnaire may be useful for interventions to improve DR-related knowledge and attitudes and, in turn, optimize health behaviors and health literacy.
43.
Li F, Song D, Chen H, Xiong J, Li X, Zhong H, Tang G, Fan S, Lam DSC, Pan W, Zheng Y, Li Y, Qu G, He J, Wang Z, Jin L, Zhou R, Song Y, Sun Y, Cheng W, Yang C, Fan Y, Li Y, Zhang H, Yuan Y, Xu Y, Xiong Y, Jin L, Lv A, Niu L, Liu Y, Li S, Zhang J, Zangwill LM, Frangi AF, Aung T, Cheng CY, Qiao Y, Zhang X, Ting DSW. Development and clinical deployment of a smartphone-based visual field deep learning system for glaucoma detection. NPJ Digit Med 2020; 3:123. PMID: 33043147; PMCID: PMC7508974; DOI: 10.1038/s41746-020-00329-9.
Abstract
By 2040, ~100 million people will have glaucoma. To date, there is a lack of high-efficiency glaucoma diagnostic tools based on visual fields (VFs). Herein, we develop and evaluate the performance of 'iGlaucoma', a smartphone application-based deep learning system (DLS), in detecting glaucomatous VF changes. A total of 1,614,808 data points from 10,784 VFs (5,542 patients) from seven centers in China were included in this study, divided over two phases. In Phase I, 1,581,060 data points from 10,135 VFs of 5,105 patients were used to train (8,424 VFs), validate (598 VFs) and test (three independent test sets of 200, 406, and 507 samples) the diagnostic performance of the DLS. In Phase II, using the same DLS, the iGlaucoma cloud-based application was further tested on 33,748 data points from 649 VFs of 437 patients from three glaucoma clinics. With reference to three experienced glaucoma specialists, the diagnostic performance (area under the curve [AUC], sensitivity and specificity) of the DLS and six ophthalmologists was evaluated in detecting glaucoma. In Phase I, the DLS outperformed all six ophthalmologists in the three test sets (AUCs of 0.834-0.877, with sensitivity of 0.831-0.922 and specificity of 0.676-0.709). In Phase II, iGlaucoma had 0.99 accuracy in recognizing different patterns in the pattern deviation probability plot region, with corresponding AUC, sensitivity and specificity of 0.966 (0.953-0.979), 0.954 (0.930-0.977), and 0.873 (0.838-0.908), respectively. iGlaucoma is a clinically effective diagnostic tool for detecting glaucoma from Humphrey VFs, although the target population will need to be carefully identified with glaucoma expertise input.
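The diagnostic metrics this abstract reports (AUC, sensitivity, specificity) can be computed directly from labels and model scores. The sketch below uses a rank-based AUC and an assumed decision threshold of 0.5, not the study's operating point:

```python
def diagnostic_metrics(labels, scores, threshold=0.5):
    """Compute AUC, sensitivity and specificity from binary labels
    (1 = disease, e.g. glaucomatous VF; 0 = normal) and model scores.
    Assumes both classes are present."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Rank-based AUC: P(score_pos > score_neg), counting ties as 1/2
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    # Sensitivity/specificity depend on the chosen operating threshold
    sensitivity = sum(s >= threshold for s in pos) / len(pos)
    specificity = sum(s < threshold for s in neg) / len(neg)
    return auc, sensitivity, specificity
```

Note that AUC is threshold-free, whereas the sensitivity/specificity pairs quoted in the abstract correspond to one chosen operating point on the ROC curve.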
44.
FRCOphth AK, Shantha JG, Olivia Li JP, Faia LJ, Hartley C, Kuthyar S, Albini TA, Wu H, Chodosh J, Ting DSW, Yeh S. SARS-CoV-2 and the Eye: Implications for the Retina Specialist from Human Coronavirus Outbreaks and Animal Models. J Vitreoretin Dis 2020; 4:411-419. PMID: 33665540; PMCID: PMC7928265; DOI: 10.1177/2474126420939723.
Abstract
Purpose The current SARS-CoV-2 pandemic has escalated rapidly since December 2019. Understanding the ophthalmic manifestations of the novel coronavirus in patients and animal models may have implications for disease surveillance. Recognition of the potential for viral transmission through the tear film has ramifications for the protection of patients, physicians, and the public. Methods Information from relevant published journal articles was surveyed using a computerized PubMed search and public health websites. We summarize current knowledge of the ophthalmic manifestations of SARS-CoV-2 infection in patients and animal models, risk mitigation measures for patients and their providers, and implications for retina specialists. Results SARS-CoV-2 is efficiently transmitted among humans, and while the clinical course is mild in the majority of infected patients, severe complications, including pneumonia, acute respiratory distress syndrome, and death, can ensue, most often in elderly patients and individuals with comorbidities. Conjunctivitis occurs in a small minority of patients with COVID-19, and SARS-CoV-2 RNA has been identified primarily in association with conjunctivitis. Uveitis has been observed in animal models of coronavirus infection, and cotton-wool spots have recently been reported. Conclusion SARS-CoV-2 and other coronaviruses have rarely been associated with conjunctivitis. The identification of SARS-CoV and SARS-CoV-2 RNA in the tear film of patients, together with the virus's highly efficient transmission via respiratory aerosols, supports eye protection, masks and gloves as part of infection prevention and control recommendations for retina providers. Disease surveillance during the COVID-19 pandemic may also include ongoing evaluation for uveitis and retinal disease, given prior findings in animal models and a recent report of retinal manifestations.
45.
Campbell CG, Ting DSW, Keane PA, Foster PJ. The potential application of artificial intelligence for diagnosis and management of glaucoma in adults. Br Med Bull 2020; 134:21-33. PMID: 32518944; DOI: 10.1093/bmb/ldaa012.
Abstract
BACKGROUND Glaucoma is the most frequent cause of irreversible blindness worldwide. There is no cure, but early detection and treatment can slow progression and prevent loss of vision. It has been suggested that artificial intelligence (AI) has potential applications in the detection and management of glaucoma. SOURCES OF DATA This literature review is based on articles published in peer-reviewed journals. AREAS OF AGREEMENT There have been significant advances in both AI and the imaging techniques able to identify early signs of glaucomatous damage. Machine and deep learning algorithms show capabilities equivalent, if not superior, to those of human experts. AREAS OF CONTROVERSY There are concerns that increased reliance on AI may lead to deskilling of clinicians. GROWING POINTS AI has the potential to be used in virtual review clinics, telemedicine and as a training tool for junior doctors. Unsupervised AI techniques offer the possibility of uncovering currently unrecognized patterns of disease. If this promise is fulfilled, AI may then be of use in challenging cases or where a second opinion is desirable. AREAS TIMELY FOR DEVELOPING RESEARCH There is a need to determine the external validity of deep learning algorithms and to better understand how 'black box' algorithms reach their results.
46.
Chua J, Sim R, Tan B, Wong D, Yao X, Liu X, Ting DSW, Schmidl D, Ang M, Garhöfer G, Schmetterer L. Optical Coherence Tomography Angiography in Diabetes and Diabetic Retinopathy. J Clin Med 2020; 9:E1723. PMID: 32503234; PMCID: PMC7357089; DOI: 10.3390/jcm9061723.
Abstract
Diabetic retinopathy (DR) is a common complication of diabetes mellitus that disrupts the retinal microvasculature and is a leading cause of vision loss globally. Recently, optical coherence tomography angiography (OCTA) has been developed to image the retinal microvasculature by generating 3-dimensional images based on the motion contrast of circulating blood cells. OCTA offers numerous benefits over traditional fluorescein angiography in visualizing the retinal vasculature: it is non-invasive and safer, and its depth-resolved ability makes it possible to visualize the finer capillaries of the retinal capillary plexuses and the choriocapillaris. High-quality OCTA images have also enabled the visualization of features associated with DR, including microaneurysms and neovascularization, and the quantification of alterations in the retinal capillaries and choriocapillaris, suggesting a promising role for OCTA as an objective technology for accurate DR classification. Of interest is the potential of OCTA to examine the effect of DR on individual retinal layers and to detect DR even before it is clinically apparent on fundus examination. We focus this review on the clinical applicability of OCTA-derived quantitative metrics that appear relevant to the diagnosis, classification, and management of patients with diabetes or DR. Future studies with longitudinal designs in multiethnic, multicenter populations, together with pertinent systemic information that may affect vascular changes, will improve our understanding of the benefit of OCTA biomarkers in detecting DR and its progression.
47.
Sabanayagam C, Xu D, Ting DSW, Nusinovici S, Banu R, Hamzah H, Lim C, Tham YC, Cheung CY, Tai ES, Wang YX, Jonas JB, Cheng CY, Lee ML, Hsu W, Wong TY. A deep learning algorithm to detect chronic kidney disease from retinal photographs in community-based populations. Lancet Digit Health 2020; 2:e295-e302. PMID: 33328123; DOI: 10.1016/s2589-7500(20)30063-7.
Abstract
BACKGROUND Screening for chronic kidney disease is a challenge in community and primary care settings, even in high-income countries. We developed an artificial intelligence deep learning algorithm (DLA) to detect chronic kidney disease from retinal images, which could add to existing chronic kidney disease screening strategies. METHODS We used data from three population-based, multiethnic, cross-sectional studies in Singapore and China. The Singapore Epidemiology of Eye Diseases study (SEED, patients aged ≥40 years) was used to develop (5188 patients) and validate (1297 patients) the DLA. External testing was done on two independent datasets: the Singapore Prospective Study Program (SP2, 3735 patients aged ≥25 years) and the Beijing Eye Study (BES, 1538 patients aged ≥40 years). Chronic kidney disease was defined as estimated glomerular filtration rate less than 60 mL/min per 1·73 m². Three models were trained: 1) image DLA; 2) risk factors (RF) including age, sex, ethnicity, diabetes, and hypertension; and 3) hybrid DLA combining image and RF. Model performances were evaluated using the area under the receiver operating characteristic curve (AUC). FINDINGS In the SEED validation dataset, the AUC was 0·911 for image DLA (95% CI 0·886-0·936), 0·916 for RF (0·891-0·941), and 0·938 for hybrid DLA (0·917-0·959). Corresponding estimates in the SP2 testing dataset were 0·733 for image DLA (95% CI 0·696-0·770), 0·829 for RF (0·797-0·861), and 0·810 for hybrid DLA (0·776-0·844); and in the BES testing dataset estimates were 0·835 for image DLA (0·767-0·903), 0·887 for RF (0·828-0·946), and 0·858 for hybrid DLA (0·794-0·922). AUC estimates were similar in subgroups of people with diabetes (image DLA 0·889 [95% CI 0·850-0·928], RF 0·899 [0·862-0·936], hybrid 0·925 [0·893-0·957]) and hypertension (image DLA 0·889 [95% CI 0·860-0·918], RF 0·889 [0·860-0·918], hybrid 0·918 [0·893-0·943]).
INTERPRETATION A retinal image DLA shows good performance for estimating chronic kidney disease, underlining the feasibility of using retinal photography as an adjunctive or opportunistic screening tool for chronic kidney disease in community populations. FUNDING National Medical Research Council, Singapore.
48.
Olivia Li JP, Shantha J, Wong TY, Wong EY, Mehta J, Lin H, Lin X, Strouthidis NG, Park KH, Fung AT, McLeod SD, Busin M, Parke DW, Holland GN, Chodosh J, Yeh S, Ting DSW. Preparedness among Ophthalmologists: During and Beyond the COVID-19 Pandemic. Ophthalmology 2020; 127:569-572. PMID: 32327128; PMCID: PMC7167498; DOI: 10.1016/j.ophtha.2020.03.037.
49.
Xie Y, Nguyen QD, Hamzah H, Lim G, Bellemo V, Gunasekeran DV, Yip MYT, Qi Lee X, Hsu W, Li Lee M, Tan CS, Tym Wong H, Lamoureux EL, Tan GSW, Wong TY, Finkelstein EA, Ting DSW. Artificial intelligence for teleophthalmology-based diabetic retinopathy screening in a national programme: an economic analysis modelling study. Lancet Digit Health 2020; 2:e240-e249. PMID: 33328056; DOI: 10.1016/s2589-7500(20)30060-1.
Abstract
BACKGROUND Deep learning is a novel machine learning technique that has been shown to be as effective as human graders in detecting diabetic retinopathy from fundus photographs. We used a cost-minimisation analysis to evaluate the potential savings of two deep learning approaches as compared with the current human assessment: a semi-automated deep learning model as a triage filter before secondary human assessment; and a fully automated deep learning model without human assessment. METHODS In this economic analysis modelling study, using 39 006 consecutive patients with diabetes in a national diabetic retinopathy screening programme in Singapore in 2015, we used a decision tree model and TreeAge Pro to compare the actual cost of screening this cohort with human graders against the simulated cost for semi-automated and fully automated screening models. Model parameters included diabetic retinopathy prevalence rates, diabetic retinopathy screening costs under each screening model, cost of medical consultation, and diagnostic performance (ie, sensitivity and specificity). The primary outcome was total cost for each screening model. Deterministic sensitivity analyses were done to gauge the sensitivity of the results to key model assumptions. FINDINGS From the health system perspective, the semi-automated screening model was the least expensive of the three models, at US$62 per patient per year. The fully automated model was $66 per patient per year, and the human assessment model was $77 per patient per year. The savings to the Singapore health system associated with switching to the semi-automated model are estimated to be $489 000, which is roughly 20% of the current annual screening cost. By 2050, Singapore is projected to have 1 million people with diabetes; at this time, the estimated annual savings would be $15 million. 
INTERPRETATION This study provides a strong economic rationale for using deep learning systems as an assistive tool to screen for diabetic retinopathy. FUNDING Ministry of Health, Singapore.
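The decision-tree logic of the semi-automated model described above — a cheap primary read for every patient, plus a human secondary read for every patient the primary read flags — reduces to simple expected-value arithmetic. The sketch below uses made-up prevalence and costs, not the study's TreeAge Pro inputs:

```python
def cost_per_patient(prevalence, sensitivity, specificity,
                     cost_primary_read, cost_secondary_read):
    """Expected screening cost per patient in a two-stage decision tree:
    every patient receives a primary read, and every patient the primary
    read flags as positive receives a secondary (human) read."""
    # A patient is flagged either as a true positive or a false positive
    p_flagged = (prevalence * sensitivity
                 + (1 - prevalence) * (1 - specificity))
    return cost_primary_read + p_flagged * cost_secondary_read

# Illustrative comparison (all numbers hypothetical):
human_only = cost_per_patient(0.2, 0.95, 0.90,
                              cost_primary_read=10.0, cost_secondary_read=0.0)
semi_auto = cost_per_patient(0.2, 0.95, 0.85,
                             cost_primary_read=2.0, cost_secondary_read=10.0)
```

With these hypothetical inputs the triage filter wins because most patients are screen-negative and never incur the human-grading cost, which is the mechanism behind the semi-automated model's savings in the study.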
50.
Lim G, Bellemo V, Xie Y, Lee XQ, Yip MYT, Ting DSW. Different fundus imaging modalities and technical factors in AI screening for diabetic retinopathy: a review. Eye Vis (Lond) 2020; 7:21. PMID: 32313813; PMCID: PMC7155252; DOI: 10.1186/s40662-020-00182-7.
Abstract
BACKGROUND Effective screening is a desirable method for the early detection and successful treatment of diabetic retinopathy, and fundus photography is currently the dominant medium for retinal imaging owing to its convenience and accessibility. Manual screening using fundus photographs has, however, involved considerable costs for patients, clinicians and national health systems, which has limited its application, particularly in less-developed countries. The advent of artificial intelligence, and in particular deep learning techniques, has raised the possibility of widespread automated screening. MAIN TEXT In this review, we first briefly survey major published advances in retinal analysis using artificial intelligence. We take care to separately describe standard multiple-field fundus photography and the newer modalities of ultra-wide-field photography and smartphone-based photography. Finally, we consider several machine learning concepts that have been particularly relevant to the domain and illustrate their usage with extant works. CONCLUSIONS In ophthalmology, deep learning tools for diabetic retinopathy have demonstrated clinically acceptable diagnostic performance using colour retinal fundus images. Artificial intelligence models are among the most promising solutions for tackling the burden of diabetic retinopathy management in a comprehensive manner. However, future research is crucial to assess potential clinical deployment, evaluate the cost-effectiveness of different deep learning systems in clinical practice and improve clinical acceptance.