101
Lim G, Bellemo V, Xie Y, Lee XQ, Yip MYT, Ting DSW. Different fundus imaging modalities and technical factors in AI screening for diabetic retinopathy: a review. Eye Vis (Lond) 2020;7:21. [PMID: 32313813] [PMCID: PMC7155252] [DOI: 10.1186/s40662-020-00182-7]
Abstract
BACKGROUND Effective screening is essential for the early detection and successful treatment of diabetic retinopathy, and fundus photography is currently the dominant medium for retinal imaging owing to its convenience and accessibility. Manual screening using fundus photographs, however, involves considerable costs for patients, clinicians and national health systems, which has limited its application, particularly in less-developed countries. The advent of artificial intelligence, and in particular deep learning techniques, has raised the possibility of widespread automated screening. MAIN TEXT In this review, we first briefly survey major published advances in retinal analysis using artificial intelligence. We separately describe standard multiple-field fundus photography and the newer modalities of ultra-wide field photography and smartphone-based photography. Finally, we consider several machine learning concepts that have been particularly relevant to the domain and illustrate their usage with extant works. CONCLUSIONS In ophthalmology, deep learning tools for diabetic retinopathy have demonstrated clinically acceptable diagnostic performance on colour retinal fundus images, and artificial intelligence models are among the most promising solutions for tackling the burden of diabetic retinopathy management in a comprehensive manner. However, future research is crucial to assess potential clinical deployment, evaluate the cost-effectiveness of different deep learning systems in clinical practice and improve clinical acceptance.
Affiliation(s)
- Gilbert Lim
- School of Computing, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Valentina Bellemo
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, 11 Third Hospital Avenue, Singapore 168751, Singapore
- Yuchen Xie
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Xin Q. Lee
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Michelle Y. T. Yip
- Duke-NUS Medical School, National University of Singapore, 11 Third Hospital Avenue, Singapore 168751, Singapore
- Daniel S. W. Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, 11 Third Hospital Avenue, Singapore 168751, Singapore
- Vitreo-Retinal Service, Singapore National Eye Centre, 11 Third Hospital Avenue, Singapore 168751, Singapore
- Artificial Intelligence in Ophthalmology, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore 168751, Singapore
102
Boucher MC, Nguyen MTD, Qian J. Assessment of Training Outcomes of Nurse Readers for Diabetic Retinopathy Telescreening: Validation Study. JMIR Diabetes 2020;5:e17309. [PMID: 32255431] [PMCID: PMC7175194] [DOI: 10.2196/17309]
Abstract
BACKGROUND With the high prevalence of diabetic retinopathy and its significant visual consequences if untreated, timely identification and management of diabetic retinopathy is essential. Teleophthalmology programs have assisted in screening large numbers of individuals at risk of vision loss from diabetic retinopathy. Training non-ophthalmological readers to assess remote fundus images for diabetic retinopathy may further improve the efficiency of such programs. OBJECTIVE This study aimed to evaluate the performance, safety implications, and progress of 2 ophthalmology nurses trained to read and assess diabetic retinopathy fundus images within a hospital diabetic retinopathy telescreening program. METHODS In this retrospective interobserver study, 2 ophthalmology nurses followed a specific training program within a hospital diabetic retinopathy telescreening program and were trained to assess diabetic retinopathy images at 2 levels of intervention: detection of diabetic retinopathy (level 1) and identification of referable disease (level 2). The reliability of the assessment by level 1-trained readers in 266 patients, and of the identification of patients at risk of vision loss from diabetic retinopathy by level 2-trained readers in 559 further patients, was measured. The learning curve, sensitivity, and specificity of the readings were evaluated against a group-consensus gold standard. RESULTS Almost perfect agreement was measured in identifying the presence of diabetic retinopathy by both level 1 readers (κ=0.86 and 0.80) and in identifying referable diabetic retinopathy by level 2 readers (κ=0.80 and 0.83). At least substantial agreement was measured for the level 2 readers on macular edema (κ=0.79 and 0.88) for all eyes. Good screening threshold sensitivities and specificities were obtained for readers at both levels, with sensitivities of 90.6% and 96.9% and specificities of 95.1% and 85.1% for level 1 readers (readers A and B), and sensitivities of 86.8% and 91.2% and specificities of 91.7% and 97.0% for level 2 readers (readers A and B). This performance was achieved immediately after training and remained stable throughout the study. CONCLUSIONS Notwithstanding the small number of trained readers, this study validates the screening performance of level 1 and level 2 diabetic retinopathy readers within this training program, which emphasizes practical experience, and supports the establishment of an ongoing assessment clinic. It highlights the importance of supervised, hands-on experience and may help set parameters to further calibrate the training of diabetic retinopathy readers for safe screening programs.
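The κ values above are Cohen's kappa, the chance-corrected agreement between two readers. A minimal sketch of the computation for binary gradings (the labels below are invented for illustration, not study data):

```python
def cohens_kappa(reader_a, reader_b):
    """Chance-corrected agreement between two raters' binary labels (0/1)."""
    n = len(reader_a)
    # Observed proportion of gradings on which the two readers agree
    observed = sum(a == b for a, b in zip(reader_a, reader_b)) / n
    # Agreement expected by chance, from each reader's marginal positive rate
    pa = sum(reader_a) / n
    pb = sum(reader_b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)
    return (observed - expected) / (1 - expected)

# Toy gradings: 1 = diabetic retinopathy present, 0 = absent
a = [1, 1, 1, 0, 0, 0, 1, 0]
b = [1, 1, 0, 0, 0, 0, 1, 0]
print(cohens_kappa(a, b))  # 7/8 observed vs 1/2 chance agreement -> 0.75
```

κ of 1.0 means perfect agreement and 0 means chance-level agreement; values of 0.8 and above are conventionally read as "almost perfect".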
Affiliation(s)
- Marie Carole Boucher
- Maisonneuve-Rosemont Ophthalmology University Center, Department of Ophthalmology, Université de Montréal, Montreal, QC, Canada
- Jenny Qian
- Department of Ophthalmology & Vision Sciences, University of Toronto, Toronto, ON, Canada
- Hamilton Regional Eye Institute, St Joseph's Healthcare Hamilton, Hamilton, ON, Canada
- Division of Ophthalmology, Department of Surgery, McMaster University, Hamilton, ON, Canada
103
Rumbold JMM, O'Kane M, Philip N, Pierscionek BK. Big Data and diabetes: the applications of Big Data for diabetes care now and in the future. Diabet Med 2020;37:187-193. [PMID: 31148227] [DOI: 10.1111/dme.14044]
Abstract
We review current applications of Big Data in diabetes care and consider the future potential by carrying out a scoping study of the academic literature on Big Data and diabetes care. Healthcare data are being produced at ever-increasing rates, and this information has the potential to transform the provision of diabetes care. Big Data is beginning to have an impact on diabetes care through data research. The use of Big Data for routine clinical care is still a future application. Vast amounts of healthcare data are already being produced, and the key is harnessing these to produce actionable insights. Considerable development work is required to achieve these goals.
Affiliation(s)
- J M M Rumbold
- School of Science and Technology, Nottingham Trent University, Nottingham, UK
- M O'Kane
- Western Health & Social Care Trust, Altnagelvin Area Hospital, Londonderry, UK
- N Philip
- School of Computer Science and Mathematics, Kingston University London, Kingston upon Thames, UK
- B K Pierscionek
- School of Science and Technology, Nottingham Trent University, Nottingham, UK
104
Li Z, Guo C, Nie D, Lin D, Zhu Y, Chen C, Xiang Y, Xu F, Jin C, Zhang X, Yang Y, Zhang K, Zhao L, Zhang P, Han Y, Yun D, Wu X, Yan P, Lin H. Development and Evaluation of a Deep Learning System for Screening Retinal Hemorrhage Based on Ultra-Widefield Fundus Images. Transl Vis Sci Technol 2020;9:3. [PMID: 32518708] [PMCID: PMC7255628] [DOI: 10.1167/tvst.9.2.3]
Abstract
Purpose To develop and evaluate a deep learning (DL) system for retinal hemorrhage (RH) screening using ultra-widefield fundus (UWF) images. Methods A total of 16,827 UWF images from 11,339 individuals were used to develop the DL system. Three experienced retina specialists were recruited to grade UWF images independently. Three independent data sets from 3 different institutions were used to validate the effectiveness of the DL system. The data set from Zhongshan Ophthalmic Center (ZOC) was selected to compare the classification performance of the DL system and general ophthalmologists. A heatmap was generated to identify the most important area used by the DL model to classify RH and to discern whether the RH involved the anatomical macula. Results In the three independent data sets, the DL model for detecting RH achieved areas under the curve of 0.997, 0.998, and 0.999, with sensitivities of 97.6%, 96.7%, and 98.9% and specificities of 98.0%, 98.7%, and 99.4%. In the ZOC data set, the sensitivity of the DL model was better than that of the general ophthalmologists, although the general ophthalmologists had slightly higher specificities. The heatmaps highlighted RH regions in all true-positive images, and the RH within the anatomical macula was determined based on heatmaps. Conclusions Our DL system showed reliable performance for detecting RH and could be used to screen for RH-related diseases. Translational Relevance As a screening tool, this automated system may aid early diagnosis and management of RH-related retinal and systemic diseases by allowing timely referral.
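The areas under the curve quoted above come from ROC analysis; equivalently, AUC is the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A minimal sketch of that rank formulation (the scores are toy values, not the study's data):

```python
def auc(pos_scores, neg_scores):
    """AUC as P(score_pos > score_neg) over all positive/negative pairs,
    counting ties as half a win (the Mann-Whitney formulation)."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy model scores for eyes with and without retinal hemorrhage
print(auc([0.9, 0.8, 0.7], [0.1, 0.2, 0.8]))
```

An AUC near 0.999, as reported here, means the model's scores almost perfectly rank diseased eyes above healthy ones, independent of any single operating threshold.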
Affiliation(s)
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Danyao Nie
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, Shenzhen, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yi Zhu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL, USA
- Chuan Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL, USA
- Yifan Xiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chenjin Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiayin Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yahan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Kai Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- School of Computer Science and Technology, Xidian University, Xi'an, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Ping Zhang
- Xudong Ophthalmic Hospital, Inner Mongolia, China
- Yu Han
- EYE & ENT Hospital of Fudan University, Shanghai, China
- Dongyuan Yun
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Pisong Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
105
Li Z, Guo C, Nie D, Lin D, Zhu Y, Chen C, Wu X, Xu F, Jin C, Zhang X, Xiao H, Zhang K, Zhao L, Yan P, Lai W, Li J, Feng W, Li Y, Ting DSW, Lin H. Deep learning for detecting retinal detachment and discerning macular status using ultra-widefield fundus images. Commun Biol 2020;3:15. [PMID: 31925315] [PMCID: PMC6949241] [DOI: 10.1038/s42003-019-0730-x]
Abstract
Retinal detachment can lead to severe visual loss if not treated promptly. Early diagnosis of retinal detachment improves the rate of successful reattachment and the visual results, especially before macular involvement. Manual retinal detachment screening is time-consuming and labour-intensive, making it difficult to apply in large-scale clinical settings. In this study, we developed a cascaded deep learning system based on ultra-widefield fundus images for automated retinal detachment detection and macula-on/off retinal detachment discernment. The performance of this system is reliable and comparable to that of an experienced ophthalmologist. In addition, this system can automatically provide guidance to patients regarding appropriate preoperative posturing to reduce retinal detachment progression and on the urgency of retinal detachment repair. Implemented on a global scale, this system may drastically reduce the extent of vision impairment resulting from retinal detachment by providing timely identification and referral.
Affiliation(s)
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Danyao Nie
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, Shenzhen, 518001, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Yi Zhu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, Florida, 33136, USA
- Chuan Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, Florida, 33136, USA
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Chenjin Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Xiayin Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Hui Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Kai Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- School of Computer Science and Technology, Xidian University, Xi'an, 710071, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Pisong Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Weiyi Lai
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Jianyin Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Weibo Feng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Yonghao Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Daniel Shu Wei Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, 168751, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, 119077, Singapore
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510060, China
- Centre for Precision Medicine, Sun Yat-sen University, Guangzhou, 510060, China
106
Kalavar M, Al-Khersan H, Sridhar J, Gorniak RJ, Lakhani PC, Flanders AE, Kuriyan AE. Applications of Artificial Intelligence for the Detection, Management, and Treatment of Diabetic Retinopathy. Int Ophthalmol Clin 2020;60:127-145. [PMID: 33093322] [PMCID: PMC8514105] [DOI: 10.1097/iio.0000000000000333]
Abstract
Rates of diabetic retinopathy (DR) and diabetic macular edema (DME), a common ocular complication of diabetes mellitus, are increasing worldwide. There is a substantial burden in detecting and managing this condition, particularly in low-resource settings, due to limitations such as the time, cost, and labor associated with current screening and treatment methods. Artificial intelligence (AI), a form of automated pattern recognition, has the potential to address these limitations in a reliable and cost-effective way. This review explores the applications of AI to the screening, management, and treatment of DR and DME. AI applications for detecting referable DR and DME are the most thoroughly researched for this condition. While some studies have used AI to stratify DR patients by risk of progression, predict outcomes of anti-VEGF therapy, and support clinical trials developing new treatments for DR, further validation studies on larger datasets are warranted.
Affiliation(s)
- Meghana Kalavar
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL
- Hasenin Al-Khersan
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL
- Jayanth Sridhar
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL
- Paras C. Lakhani
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA
- Adam E. Flanders
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA
- Ajay E. Kuriyan
- Mid Atlantic Retina, Philadelphia, PA
- The Retina Service, Wills Eye Hospital, Philadelphia, PA
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA
107
Sosale B, Aravind SR, Murthy H, Narayana S, Sharma U, Gowda SGV, Naveenam M. Simple, Mobile-based Artificial Intelligence Algorithm in the detection of Diabetic Retinopathy (SMART) study. BMJ Open Diabetes Res Care 2020;8:e000892. [PMID: 32049632] [PMCID: PMC7039584] [DOI: 10.1136/bmjdrc-2019-000892]
Abstract
INTRODUCTION The aim of this study was to evaluate the performance of the offline smartphone-based Medios artificial intelligence (AI) algorithm in the diagnosis of diabetic retinopathy (DR) using non-mydriatic (NM) retinal images. METHODS This cross-sectional study prospectively enrolled 922 individuals with diabetes mellitus. NM retinal images (disc- and macula-centered) from each eye were captured using the Remidio NM fundus-on-phone (FOP) camera. The AI analysis was run offline and its diagnosis recorded (DR present or absent), then compared with the image diagnosis of five retina specialists (majority diagnosis considered the ground truth). RESULTS The analysis included images from 900 individuals (252 had DR). For any DR, the sensitivity and specificity of the AI algorithm were 83.3% (95% CI 80.9% to 85.7%) and 95.5% (95% CI 94.1% to 96.8%). The sensitivity and specificity of the AI algorithm in detecting referable DR (RDR) were 93% (95% CI 91.3% to 94.7%) and 92.5% (95% CI 90.8% to 94.2%). CONCLUSION The Medios AI has high sensitivity and specificity in the detection of RDR using NM retinal images.
Affiliation(s)
- Hemanth Murthy
- Ophthalmology, Retina Institute of Karnataka, Bangalore, India
- Usha Sharma
- Ophthalmology, Diacon Hospital, Bangalore, India
108
Armstrong GW, Lorch AC. A(eye): A Review of Current Applications of Artificial Intelligence and Machine Learning in Ophthalmology. Int Ophthalmol Clin 2020;60:57-71. [PMID: 31855896] [DOI: 10.1097/iio.0000000000000298]
109
Son J, Shin JY, Kim HD, Jung KH, Park KH, Park SJ. Development and Validation of Deep Learning Models for Screening Multiple Abnormal Findings in Retinal Fundus Images. Ophthalmology 2020;127:85-94. [DOI: 10.1016/j.ophtha.2019.05.029]
110
Bhaskaranand M, Ramachandra C, Bhat S, Cuadros J, Nittala MG, Sadda SR, Solanki K. The Value of Automated Diabetic Retinopathy Screening with the EyeArt System: A Study of More Than 100,000 Consecutive Encounters from People with Diabetes. Diabetes Technol Ther 2019;21:635-643. [PMID: 31335200] [PMCID: PMC6812728] [DOI: 10.1089/dia.2019.0164]
Abstract
Background: Current manual diabetic retinopathy (DR) screening by eye care experts cannot scale to the growing population of diabetes patients at risk of vision loss. The EyeArt system is an automated, cloud-based artificial intelligence (AI) eye screening technology designed to detect referral-warranted DR immediately through automated analysis of a patient's retinal images. Methods: This retrospective study assessed the diagnostic efficacy of the EyeArt system v2.0 on 850,908 fundus images from 101,710 consecutive patient visits, collected from 404 primary care clinics. The presence or absence of referral-warranted DR (more than mild nonproliferative DR [NPDR]) was automatically detected by the EyeArt system for each patient encounter, and its performance was compared against a clinical reference standard of quality-assured grading by rigorously trained certified ophthalmologists and optometrists. Results: Of the 101,710 visits, 75.7% were nonreferable, 19.3% were referable to an eye care specialist, and in 5.0% the DR level was unknown per the clinical reference standard. EyeArt screening had 91.3% (95% confidence interval [CI]: 90.9-91.7) sensitivity and 91.1% (95% CI: 90.9-91.3) specificity. For the 5446 encounters with potentially treatable DR (more than moderate NPDR and/or diabetic macular edema), the system gave a positive "refer" output in 5363, a sensitivity of 98.5%. Conclusions: This study captures variations in real-world clinical practice and shows that an AI DR screening system can be safe and effective in the real world. It demonstrates the value of this easy-to-use, automated tool for endocrinologists, diabetologists, and general practitioners in addressing the growing need for DR screening and monitoring.
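The sensitivity and specificity reported above reduce to confusion-matrix counts of the screening output against the reference standard; a minimal sketch with invented counts (not the study's numbers):

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity and specificity from confusion-matrix counts.

    tp: referable cases flagged 'refer'; fn: referable cases missed;
    tn: nonreferable cases correctly passed; fp: nonreferable cases flagged.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Invented example: 1,000 screened encounters, 200 of them truly referable
sens, spec = screening_metrics(tp=183, fp=71, tn=729, fn=17)
print(f"sensitivity={sens:.1%} specificity={spec:.1%}")
```

For screening, sensitivity (few missed referable cases) is usually the safety-critical figure, while specificity controls how many unnecessary referrals the system generates.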
Affiliation(s)
- Malavika Bhaskaranand
- Eyenuk, Inc., Los Angeles, California
- Address correspondence to: Malavika Bhaskaranand, PhD, Eyenuk, Inc., 5850 Canoga Avenue, Suite 250, Los Angeles, CA 91367
111
Sears CM, Nittala MG, Jayadev C, Verhoek M, Fleming A, van Hemert J, Tsui I, Sadda SR. Comparison of Subjective Assessment and Precise Quantitative Assessment of Lesion Distribution in Diabetic Retinopathy. JAMA Ophthalmol 2018;136:365-371. [PMID: 29470566] [DOI: 10.1001/jamaophthalmol.2018.0070]
Abstract
Importance Predominantly peripheral disease in eyes with nonproliferative diabetic retinopathy (DR) is suggested as a potential strong risk factor for progression to proliferative disease. However, the reliability and optimal method for the assessment of lesion distribution are still uncertain. Objective To compare agreement between subjective assessment and precise quantification of lesion burden in ultrawidefield (UWF) images of eyes with DR. Design, Setting, and Participants This multisite cross-sectional study examines UWF pseudocolor images acquired from DR screening clinic patients from December 20, 2014, through August 1, 2014. Of 104 cases, 161 eyes with DR were included. Data analysis was conducted from June 1, 2016, through December 1, 2016 at the Doheny Image Reading Center. Main Outcomes and Measures Distribution of DR lesions in eyes was assessed subjectively and quantitatively, and eyes were classified as having predominantly central lesions (PCLs) or predominantly peripheral lesions (PPLs). The frequency and surface area (SA) of each lesion type were quantified. Intergrader and subjective vs quantitative classification were compared for level of agreement. Several methods of determining PPL distribution were also compared. Results On subjective frequency-based evaluation by graders, 133 eyes were classified as having PCL, and 28 eyes as having PPL. On exact quantification of lesion SA, 121 eyes were classified as PCL, and 40 eyes as having PPL. On SA-based quantification, 134 eyes were classified as having PCL, and 27 eyes as having PPL. There was a significant difference between qualitative and quantitative classification of DR lesion distribution for both frequency-based (mean difference [SD]: PCL, 6 [2]; PPL, 13 [6]; P < .001) and SA-based (mean difference [SD]: PCL, 6 [1]; PPL, 20 [7]; P < .001) methods. Both intergrader reproducibility and subjective vs quantitative agreement were higher with frequency-based classification. 
Conclusions and Relevance Subjective assessment of PPL DR lesions on UWF images differed in some cases from precise quantitative assessments, particularly when considering the area of lesions. These findings highlight the benefit of objective quantitative approaches to DR assessment, which may facilitate the development of a more precise DR scoring system.
Affiliation(s)
- Connie Martin Sears
- Harvard Medical School, Boston, Massachusetts
- Doheny Image Reading Center, Doheny Eye Institute, Los Angeles, California
- Irena Tsui
- Doheny Image Reading Center, Doheny Eye Institute, Los Angeles, California
- Jules Stein Eye Institute, University of California, Los Angeles
- Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles
- SriniVas R Sadda
- Doheny Image Reading Center, Doheny Eye Institute, Los Angeles, California
- Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles
112
Abstract
PURPOSE OF REVIEW Diabetic retinopathy (DR) is the leading cause of acquired vision loss in adults across the globe. Early identification and treatment of patients with DR is paramount for vision preservation. The aim of this review is to outline current and new imaging techniques and biomarkers that are valuable for the clinical diagnosis and management of DR. RECENT FINDINGS Ultrawide field imaging and automated deep learning algorithms are recent advancements over traditional fundus photography and fluorescein angiography. Optical coherence tomography (OCT) and OCT angiography image retinal anatomy and vasculature, and OCT is routinely used to monitor response to treatment. Many circulating, vitreous, and genetic biomarkers have been studied to facilitate disease detection and the development of new treatments. Recent advancements in retinal imaging and the identification of promising new biomarkers for DR have the potential to improve detection, risk stratification, and treatment for patients with DR.
Affiliation(s)
- Changyow C Kwan
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, 645 N. Michigan Avenue, Suite 440, Chicago, IL, 60611, USA
- Amani A Fawzi
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, 645 N. Michigan Avenue, Suite 440, Chicago, IL, 60611, USA.
113
Xu Y, Wang Y, Liu B, Tang L, Lv L, Ke X, Ling S, Lu L, Zou H. The diagnostic accuracy of an intelligent and automated fundus disease image assessment system with lesion quantitative function (SmartEye) in diabetic patients. BMC Ophthalmol 2019;19:184. [PMID: 31412800] [PMCID: PMC6694694] [DOI: 10.1186/s12886-019-1196-9]
Abstract
BACKGROUND With the prevalence of diabetes mellitus (DM) increasing annually, the human grading of retinal images to evaluate DR has posed a substantial burden worldwide. SmartEye is a recently developed fundus image processing and analysis system with a lesion quantification function for DR screening. It is sensitive to the lesion area and can automatically identify lesion position and size. We report the diabetic retinopathy (DR) grading results of SmartEye versus ophthalmologists in analyzing images captured with non-mydriatic fundus cameras in community healthcare centers, as well as DR lesion quantitative analysis results at different disease stages. METHODS This is a cross-sectional study. All fundus images were collected from the Shanghai Diabetic Eye Study in Diabetics (SDES) program from Apr 2016 to Aug 2017. In total, 19,904 fundus images were acquired from 6013 diabetic patients. The grading results of the ophthalmologists and SmartEye were compared. Lesion quantification of several images at different DR stages is also presented. RESULTS The sensitivities for diagnosing no DR, mild NPDR (non-proliferative diabetic retinopathy), moderate NPDR, severe NPDR, and PDR (proliferative diabetic retinopathy) were 86.19%, 83.18%, 88.64%, 89.59%, and 85.02%, respectively. The specificities were 63.07%, 70.96%, 64.16%, 70.38%, and 74.79%, respectively. The AUCs were: PDR, 0.80 (0.79, 0.81); severe NPDR, 0.80 (0.79, 0.80); moderate NPDR, 0.77 (0.76, 0.77); and mild NPDR, 0.78 (0.77, 0.79). Lesion quantification showed that the total hemorrhage area, maximum hemorrhage area, total exudation area, and maximum exudation area all increased with DR severity. CONCLUSIONS SmartEye has high diagnostic accuracy in a DR screening program using non-mydriatic fundus cameras. SmartEye quantitative analysis may be an innovative and promising method for DR diagnosis and grading.
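As an aside to the grading results above, per-grade sensitivity and specificity of this kind can be reproduced from paired reference and automated labels with a one-vs-rest tally. The sketch below is illustrative only: the grade labels follow the abstract, but the sample data and the helper name `per_grade_metrics` are invented, not taken from the SmartEye study.

```python
# One-vs-rest sensitivity/specificity per DR grade from paired labels.
# All data below are invented toy values for illustration.

GRADES = ["no DR", "mild NPDR", "moderate NPDR", "severe NPDR", "PDR"]

def per_grade_metrics(y_true, y_pred):
    """Return {grade: (sensitivity, specificity)}, treating each grade one-vs-rest."""
    metrics = {}
    for g in GRADES:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == g and p == g)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == g and p != g)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != g and p == g)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t != g and p != g)
        sens = tp / (tp + fn) if tp + fn else float("nan")
        spec = tn / (tn + fp) if tn + fp else float("nan")
        metrics[g] = (sens, spec)
    return metrics

# Toy example: reference grades (e.g. from ophthalmologists) vs. automated grades.
truth = ["no DR", "no DR", "mild NPDR", "moderate NPDR", "PDR", "PDR"]
auto = ["no DR", "mild NPDR", "mild NPDR", "moderate NPDR", "PDR", "no DR"]
print(per_grade_metrics(truth, auto))
```

With the toy labels above, "PDR" has one true positive and one miss, giving a sensitivity of 0.5 with no false positives.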
Affiliation(s)
- Yi Xu
- Shanghai Eye Disease Prevention & Treatment Center / Shanghai Eye Hospital, Shanghai Key Laboratory of Ocular Fundus Diseases; Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, 380 Kangding Road, Shanghai, 200040 China
- Yongyi Wang
- Shenzhen Nanshan Center for Chronic Disease Control, No. 7, Huaming Road, Nanshan District, Shenzhen, 518064 China
- Bin Liu
- Shanghai Radio Equipment Research Institute, No. 203, Liping Road, Shanghai, 200090 China
- Lin Tang
- Shanghai Radio Equipment Research Institute, No. 203, Liping Road, Shanghai, 200090 China
- Liangqing Lv
- Shanghai Radio Equipment Research Institute, No. 203, Liping Road, Shanghai, 200090 China
- Xin Ke
- EVision technology (Beijing) Co. LTD., No.26, Shangdixinxi Road, Haidian District, Beijing, 100085 China
- Saiguang Ling
- EVision technology (Beijing) Co. LTD., No.26, Shangdixinxi Road, Haidian District, Beijing, 100085 China
- Lina Lu
- Shanghai Eye Disease Prevention & Treatment Center / Shanghai Eye Hospital, Shanghai Key Laboratory of Ocular Fundus Diseases; Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, 380 Kangding Road, Shanghai, 200040 China
- Haidong Zou
- Shanghai Eye Disease Prevention & Treatment Center / Shanghai Eye Hospital, Shanghai Key Laboratory of Ocular Fundus Diseases; Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, 380 Kangding Road, Shanghai, 200040 China

114
Bellemo V, Lim G, Rim TH, Tan GSW, Cheung CY, Sadda S, He MG, Tufail A, Lee ML, Hsu W, Ting DSW. Artificial Intelligence Screening for Diabetic Retinopathy: the Real-World Emerging Application. Curr Diab Rep 2019; 19:72. [PMID: 31367962 DOI: 10.1007/s11892-019-1189-3] [Citation(s) in RCA: 76] [Impact Index Per Article: 15.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
PURPOSE OF REVIEW This paper systematically reviews recent progress in diabetic retinopathy screening. It provides an integrated overview of the current state of knowledge of emerging techniques that integrate artificial intelligence into national screening programs around the world. Existing methodological approaches and research insights are evaluated, and existing gaps and future directions are identified. RECENT FINDINGS Over the past decades, artificial intelligence has emerged into the scientific consciousness with breakthroughs that are sparking increasing interest among the computer science and medical communities. Specifically, machine learning and deep learning (a subtype of machine learning) applications of artificial intelligence are spreading into areas previously thought to be solely the purview of humans, and a number of applications in the field of ophthalmology have been explored. Multiple studies around the world have demonstrated that such systems can perform on par with clinical experts, with robust diagnostic performance in diabetic retinopathy diagnosis. However, only a few tools have been evaluated in prospective clinical studies. Given the rapid and impressive progress of artificial intelligence technologies, the implementation of deep learning systems into routine diabetic retinopathy screening could represent a cost-effective alternative to help reduce the incidence of preventable blindness around the world.
Affiliation(s)
- Valentina Bellemo
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Gilbert Lim
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- School of Computing, National University of Singapore, Singapore, Singapore
- Tyler Hyungtaek Rim
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Gavin S W Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong
- SriniVas Sadda
- Doheny Eye Institute, University of California, Los Angeles, CA, USA
- Ming-Guang He
- Center of Eye Research Australia, Melbourne, Victoria, Australia
- Adnan Tufail
- Moorfields Eye Hospital & Institute of Ophthalmology, UCL, London, UK
- Mong Li Lee
- School of Computing, National University of Singapore, Singapore, Singapore
- Wynne Hsu
- School of Computing, National University of Singapore, Singapore, Singapore
- Daniel Shu Wei Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore.
- Duke-NUS Medical School, Singapore, Singapore.

115
Keel S, Li Z, Scheetz J, Robman L, Phung J, Makeyeva G, Aung K, Liu C, Yan X, Meng W, Guymer R, Chang R, He M. Development and validation of a deep-learning algorithm for the detection of neovascular age-related macular degeneration from colour fundus photographs. Clin Exp Ophthalmol 2019; 47:1009-1018. [PMID: 31215760 DOI: 10.1111/ceo.13575] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2019] [Revised: 05/24/2019] [Accepted: 06/13/2019] [Indexed: 12/17/2022]
Abstract
IMPORTANCE Detection of early-onset neovascular age-related macular degeneration (AMD) is critical to protecting vision. BACKGROUND To describe the development and validation of a deep-learning algorithm (DLA) for the detection of neovascular AMD. DESIGN Development and validation of a DLA using retrospective datasets. PARTICIPANTS We developed and trained the DLA using 56 113 retinal images and externally validated it using an additional 86 162 images from an independent dataset. All images were non-stereoscopic and retrospectively collected. METHODS The internal validation dataset was derived from real-world clinical settings in China. Gold-standard grading was assigned when consensus was reached by three individual ophthalmologists. The DLA classified 31 247 images as gradable and 24 866 as ungradable (poor quality or poor field definition). These ungradable images were used to create a classification model for image quality. Efficiency and diagnostic accuracy were tested using 86 162 images derived from the Melbourne Collaborative Cohort Study. Neovascular AMD and/or an ungradable outcome in one or both eyes was considered referable. MAIN OUTCOME MEASURES Area under the receiver operating characteristic curve (AUC), sensitivity and specificity. RESULTS In the internal validation dataset, the AUC, sensitivity and specificity of the DLA for neovascular AMD were 0.995, 96.7% and 96.4%, respectively. Testing against the independent external dataset achieved an AUC, sensitivity and specificity of 0.967, 100% and 93.4%, respectively. More than 60% of false-positive cases displayed other macular pathologies. Amongst the false-negative cases (internal validation dataset only), over half (57.2%) proved to be undetected detachment of the neurosensory retina or RPE layer. CONCLUSIONS AND RELEVANCE This DLA shows robust performance for the detection of neovascular AMD amongst retinal images from a multi-ethnic sample and under different imaging protocols. Further research is warranted to investigate where this technology could be best utilized within screening and research settings.
Affiliation(s)
- Stuart Keel
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
- Zhixi Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, China
- Jane Scheetz
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
- Liubov Robman
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
- Monash University, Melbourne, Victoria, Australia
- James Phung
- Monash University, Melbourne, Victoria, Australia
- Galina Makeyeva
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
- KhinZaw Aung
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
- Chi Liu
- Healgoo Interactive Medical Technology Co. Ltd., Guangzhou, China
- Xixi Yan
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
- Wei Meng
- Healgoo Interactive Medical Technology Co. Ltd., Guangzhou, China
- Robyn Guymer
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
- Robert Chang
- Department of Ophthalmology, Byers Eye Institute at Stanford University, Palo Alto, California
- Mingguang He
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, China

116

117
Lopes BT, Eliasy A, Ambrosio R. Artificial Intelligence in Corneal Diagnosis: Where Are We? Curr Ophthalmol Rep 2019. [DOI: 10.1007/s40135-019-00218-9] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
118
Sugimoto M, Ichio A, Mochida D, Tenma Y, Miyata R, Matsubara H, Kondo M. Multiple Effects of Intravitreal Aflibercept on Microvascular Regression in Eyes with Diabetic Macular Edema. Ophthalmol Retina 2019; 3:1067-1075. [PMID: 31446029 DOI: 10.1016/j.oret.2019.06.005] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2018] [Revised: 05/29/2019] [Accepted: 06/10/2019] [Indexed: 10/26/2022]
Abstract
PURPOSE To evaluate the effects of intravitreal aflibercept (IVA) on the number of microaneurysms and sizes of nonperfused areas (NPAs) in eyes with diabetic macular edema (DME). DESIGN Interventional, prospective study. PARTICIPANTS Twenty-five eyes of 25 DME patients (average age, 64.0±8.8 years) were treated with 3 consecutive monthly IVA injections. METHODS Fluorescein angiography (FA) and OCT were performed before the IVA injections (baseline) and at 1 week after the IVA treatment. The number of microaneurysms and the ischemic index (ISI), a measure of NPA, were determined. The correlations between central retinal thickness (CRT) and the number of microaneurysms and the ISI were also determined. MAIN OUTCOME MEASURES The mean number of microaneurysms and NPA evaluated as the ISI. RESULTS At baseline, the mean CRT was 485.7±90.6 μm. After treatment, the mean CRT was reduced significantly to 376.9±81.6 μm (P = 0.1 × 10⁻⁵, repeated analysis of variance). The mean number of microaneurysms decreased significantly from 49.6±33.2 at baseline to 24.8±18.1 at 3 months after the initial treatment, a 50.4±21.2% reduction (P = 0.3 × 10⁻⁵, paired t test). The mean ISI also decreased significantly from 55.5±20.4% at baseline to 28.8±16.8% after treatment (P = 0.3 × 10⁻⁵, paired t test), a reduction of 43.3±28.5%. A significant correlation was found between the CRT and the number of microaneurysms at both baseline (r = 0.56; P = 0.004) and after treatment (r = 0.53; P = 0.006). A significant correlation was found between CRT and ISI at baseline (r = -0.39; P = 0.03) but not after treatment (r = -0.06; P = 0.79). CONCLUSIONS The reduction in the number of microaneurysms was correlated with the reduction in CRT.
Affiliation(s)
- Masahiko Sugimoto
- Department of Ophthalmology, Mie University Graduate School of Medicine, Tsu, Japan.
- Atushi Ichio
- Department of Ophthalmology, Mie University Graduate School of Medicine, Tsu, Japan
- Daiki Mochida
- Faculty of Medicine, Mie University Graduate School of Medicine, Tsu, Japan
- Yumiho Tenma
- Department of Ophthalmology, Mie University Graduate School of Medicine, Tsu, Japan
- Ryohei Miyata
- Department of Ophthalmology, Mie University Graduate School of Medicine, Tsu, Japan
- Hisashi Matsubara
- Department of Ophthalmology, Mie University Graduate School of Medicine, Tsu, Japan
- Mineo Kondo
- Department of Ophthalmology, Mie University Graduate School of Medicine, Tsu, Japan

119
Lin SR, Ladas JG, Bahadur GG, Al-Hashimi S, Pineda R. A Review of Machine Learning Techniques for Keratoconus Detection and Refractive Surgery Screening. Semin Ophthalmol 2019; 34:317-326. [PMID: 31304857 DOI: 10.1080/08820538.2019.1620812] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
Various machine learning techniques have been developed for keratoconus detection and refractive surgery screening. These techniques utilize inputs from a range of corneal imaging devices and are built with automated decision trees, support vector machines, and various types of neural networks. In general, these techniques demonstrate very good differentiation of normal and keratoconic eyes, as well as good differentiation of normal and form fruste keratoconus. However, it is difficult to directly compare these studies, as keratoconus represents a wide spectrum of disease. More importantly, no public dataset exists for research purposes. Despite these challenges, machine learning in keratoconus detection and refractive surgery screening is a burgeoning field of study, with significant potential for continued advancement as imaging devices and techniques become more sophisticated.
Affiliation(s)
- Shawn R Lin
- Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- John G Ladas
- Wilmer Eye Institute, Johns Hopkins Medical Institutions, Baltimore, MD, USA
- Gavin G Bahadur
- Stein Eye Institute, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Saba Al-Hashimi
- Stein Eye Institute, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Roberto Pineda
- Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA

120
Ratanapakorn T, Daengphoonphol A, Eua-Anant N, Yospaiboon Y. Digital image processing software for diagnosing diabetic retinopathy from fundus photograph. Clin Ophthalmol 2019; 13:641-648. [PMID: 31118551 PMCID: PMC6475101 DOI: 10.2147/opth.s195617] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
Objective The aim of this study was to develop automated software for screening and diagnosing diabetic retinopathy (DR) from fundus photographs of patients with diabetes mellitus. Methods The extraction of clinically significant features to detect pathologies of DR and the severity classification were performed using MATLAB R2015a with the MATLAB Image Processing Toolbox. In addition, the graphical user interface was developed using the MATLAB GUI Toolbox. The accuracy of the software was measured by comparing its results to the diagnosis by the ophthalmologist. Results A set of 400 fundus images, containing 21 normal fundus images and 379 DR fundus images (162 non-proliferative DR and 217 proliferative DR), was interpreted by the ophthalmologist as a reference standard. The initial result showed that the sensitivity, specificity and accuracy of this software in the detection of DR were 98%, 67% and 96.25%, respectively. However, the accuracy of this software in classifying non-proliferative and proliferative diabetic retinopathy was 66.58%. The average processing time was 7 seconds per fundus image. Conclusion The automated DR screening software was developed using MATLAB programming and yielded 96.25% accuracy for the detection of DR when compared with the diagnosis by the ophthalmologist. It may be a helpful tool for DR screening in distant rural areas where an ophthalmologist is not available.
Affiliation(s)
- Tanapat Ratanapakorn
- KKU Eye Center, Department of Ophthalmology, Faculty of Medicine, Khon Kaen University, Khon Kaen, Thailand
- Athiwath Daengphoonphol
- Department of Computer Engineering, Faculty of Engineering, Khon Kaen University, Khon Kaen, Thailand
- Nawapak Eua-Anant
- Department of Computer Engineering, Faculty of Engineering, Khon Kaen University, Khon Kaen, Thailand
- Yosanan Yospaiboon
- KKU Eye Center, Department of Ophthalmology, Faculty of Medicine, Khon Kaen University, Khon Kaen, Thailand

121
Daien V, Muyl-Cipollina A. [Can Big Data change our practices?]. J Fr Ophtalmol 2019; 42:551-571. [PMID: 30979558 DOI: 10.1016/j.jfo.2018.11.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2018] [Accepted: 11/22/2018] [Indexed: 11/19/2022]
Abstract
The European Medicines Agency has defined Big Data by the "3 V's": Volume, Velocity and Variety. These large databases allow access to real-life data on patient care. They are particularly suited for studies of adverse events and pharmacoepidemiology. Deep learning is a family of machine learning algorithms that model high-level abstractions in data using architectures composed of multiple nonlinear transformations. This article shows how Big Data and Deep Learning can help in ophthalmology, pointing out their advantages and disadvantages. A literature review illustrating the uses of Deep Learning in ophthalmology is presented.
Affiliation(s)
- V Daien
- Service d'ophtalmologique, hôpital Gui De Chauliac, 80, avenue Augustin Fliche, 34295 Montpellier, France; Inserm, epidemiological and clinical research, université Montpellier, 34295 Montpellier, France; The Save Sight Institute, Sydney Medical School, The University of Sydney, Sydney, Australia
- A Muyl-Cipollina
- Service d'ophtalmologique, hôpital Gui De Chauliac, 80, avenue Augustin Fliche, 34295 Montpellier, France.

122

123
Felfeli T, Alon R, Merritt R, Brent MH. Toronto tele-retinal screening program for detection of diabetic retinopathy and macular edema. Can J Ophthalmol 2019; 54:203-211. [DOI: 10.1016/j.jcjo.2018.07.004] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2018] [Revised: 07/10/2018] [Accepted: 07/11/2018] [Indexed: 12/31/2022]
124
Lin H, Li R, Liu Z, Chen J, Yang Y, Chen H, Lin Z, Lai W, Long E, Wu X, Lin D, Zhu Y, Chen C, Wu D, Yu T, Cao Q, Li X, Li J, Li W, Wang J, Yang M, Hu H, Zhang L, Yu Y, Chen X, Hu J, Zhu K, Jiang S, Huang Y, Tan G, Huang J, Lin X, Zhang X, Luo L, Liu Y, Liu X, Cheng B, Zheng D, Wu M, Chen W, Liu Y. Diagnostic Efficacy and Therapeutic Decision-making Capacity of an Artificial Intelligence Platform for Childhood Cataracts in Eye Clinics: A Multicentre Randomized Controlled Trial. EClinicalMedicine 2019; 9:52-59. [PMID: 31143882 PMCID: PMC6510889 DOI: 10.1016/j.eclinm.2019.03.001] [Citation(s) in RCA: 97] [Impact Index Per Article: 19.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/13/2018] [Revised: 02/12/2019] [Accepted: 03/03/2019] [Indexed: 01/22/2023] Open
Abstract
BACKGROUND CC-Cruiser is an artificial intelligence (AI) platform developed for diagnosing childhood cataracts and providing risk stratification and treatment recommendations. The high accuracy of CC-Cruiser was previously validated using specific datasets. The objective of this study was to compare the diagnostic efficacy and treatment decision-making capacity between CC-Cruiser and ophthalmologists in real-world clinical settings. METHODS This multicentre randomized controlled trial was performed in five ophthalmic clinics in different areas across China. Pediatric patients (aged ≤ 14 years) without a definitive diagnosis of cataracts or a history of previous eye surgery were randomized (1:1) to receive a diagnosis and treatment recommendation from either CC-Cruiser or senior consultants (with over 5 years of clinical experience in pediatric ophthalmology). The experts who provided the gold-standard diagnosis and the investigators who performed slit-lamp photography and data analysis were blinded to the group assignments. The primary outcome was the diagnostic performance for childhood cataracts with reference to cataract experts' standards. The secondary outcomes included the evaluation of disease severity and treatment determination, the time required for the diagnosis, and patient satisfaction, which was determined by the mean rating. This trial is registered with ClinicalTrials.gov (NCT03240848). FINDINGS Between August 9, 2017 and May 25, 2018, 350 participants (700 eyes) were randomly assigned for diagnosis by CC-Cruiser (350 eyes) or senior consultants (350 eyes). The accuracies of cataract diagnosis and treatment determination were 87.4% and 70.8%, respectively, for CC-Cruiser, which were significantly lower than 99.1% and 96.7%, respectively, for senior consultants (p < 0.001, OR = 0.06 [95% CI 0.02 to 0.19]; and p < 0.001, OR = 0.08 [95% CI 0.03 to 0.25], respectively). The mean time to receive a diagnosis from CC-Cruiser was 2.79 min, significantly less than the 8.53 min for senior consultants (p < 0.001, mean difference 5.74 [95% CI 5.43 to 6.05]). Patients were satisfied with the overall medical service quality provided by CC-Cruiser, particularly its time-saving cataract diagnosis. INTERPRETATION CC-Cruiser was less accurate than senior consultants in diagnosing childhood cataracts and making treatment decisions. However, the medical service provided by CC-Cruiser was less time-consuming and achieved a high level of patient satisfaction. CC-Cruiser has the capacity to assist human doctors in clinical practice in its current state. FUNDING National Key R&D Program of China (2018YFC0116500) and the Key Research Plan for the National Natural Science Foundation of China in Cultivation Project (91846109).
Affiliation(s)
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Ruiyang Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Zhenzhen Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Jingjing Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Yahan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Hui Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Zhuoling Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Weiyi Lai
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Erping Long
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Yi Zhu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Chuan Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Dongxuan Wu
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Tongyong Yu
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Qianzhong Cao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Xiaoyan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Jing Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Wangting Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Jinghui Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Mingmin Yang
- Shenzhen Eye Hospital, Shenzhen Key Ophthalmic Laboratory, The Second Affiliated Hospital of Jinan University, Shenzhen, Guangdong 518040, China
- Huiling Hu
- Shenzhen Eye Hospital, Shenzhen Key Ophthalmic Laboratory, The Second Affiliated Hospital of Jinan University, Shenzhen, Guangdong 518040, China
- Li Zhang
- Department of Ophthalmology, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei 430014, China
- Yang Yu
- Department of Ophthalmology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, China
- Xuelan Chen
- Department of Ophthalmology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, China
- Jianmin Hu
- Department of Ophthalmology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, China
- Ke Zhu
- Kaifeng Eye Hospital, Kaifeng, Henan 475000, China
- Shuhong Jiang
- Inner Mongolia People's Hospital, Hohhot, Inner Mongolia 010017, China
- Yalin Huang
- Henan Eye Institute, Henan Eye Hospital, Henan Provincial People's Hospital and People's Hospital of Zhengzhou University, Zhengzhou, Henan 450003, China
- Gang Tan
- The First Affiliated Hospital of the University of South China, Hengyang, Hunan 421001, China
- Jialing Huang
- School of Public Health, Sun Yat-sen University, Guangzhou, Guangdong 510080, China
- Xiaoming Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Xinyu Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Lixia Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Yuhua Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Xialin Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Bing Cheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Danying Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Mingxing Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Weirong Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China
- Yizhi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong 510060, China

125
Raman R, Srinivasan S, Virmani S, Sivaprasad S, Rao C, Rajalakshmi R. Fundus photograph-based deep learning algorithms in detecting diabetic retinopathy. Eye (Lond) 2019; 33:97-109. [PMID: 30401899 PMCID: PMC6328553 DOI: 10.1038/s41433-018-0269-y] [Citation(s) in RCA: 70] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2018] [Accepted: 10/07/2018] [Indexed: 02/05/2023] Open
Abstract
Remarkable advances in biomedical research have led to the generation of large amounts of data. Using artificial intelligence, it has become possible to extract meaningful information from large volumes of data in a shorter time frame and with minimal human intervention. In particular, convolutional neural networks (a deep learning method) have been trained to recognize pathological lesions in images. Diabetes has high morbidity, and millions of people need to be screened for diabetic retinopathy (DR). Deep neural networks offer a great advantage in screening for DR from retinal images, improving the identification of DR lesions and disease risk factors with high accuracy and reliability. This review aims to compare the current evidence on various deep learning models for the diagnosis of DR.
Affiliation(s)
- Rajiv Raman
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, 600006, India
- Sunny Virmani
- Verily Life Sciences LLC, South San Francisco, California, USA
- Sobha Sivaprasad
- NIHR Moorfields Biomedical Research Centre, London, EC1V 2PD, UK
- Chetan Rao
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, 600006, India
- Ramachandran Rajalakshmi
- Dr. Mohan's Diabetes Specialities Centre and Madras Diabetes Research Foundation, Chennai, 600086, India
126
Nielsen KB, Lautrup ML, Andersen JKH, Savarimuthu TR, Grauslund J. Deep Learning-Based Algorithms in Screening of Diabetic Retinopathy: A Systematic Review of Diagnostic Performance. Ophthalmol Retina 2018;3:294-304. [PMID: 31014679] [DOI: 10.1016/j.oret.2018.10.014]
Abstract
TOPIC Diagnostic performance of deep learning-based algorithms in screening patients with diabetes for diabetic retinopathy (DR). The algorithms were compared with the current gold standard of classification by human specialists. CLINICAL RELEVANCE Because DR is a common cause of visual impairment, screening is indicated to avoid irreversible vision loss. Automated DR classification using deep learning may be a suitable new screening tool that could improve diagnostic performance and reduce manpower. METHODS For this systematic review, we aimed to identify studies that incorporated the use of deep learning in classifying full-scale DR in retinal fundus images of patients with diabetes. The studies had to provide a DR grading scale, a human grader as a reference standard, and a deep learning performance score. A systematic search on April 5, 2018, through MEDLINE and Embase yielded 304 publications. To identify potentially missed publications, the reference lists of the final included studies were manually screened, yielding no additional publications. The Quality Assessment of Diagnostic Accuracy Studies 2 tool was used for risk-of-bias and applicability assessment. RESULTS By using objective selection, we included 11 diagnostic accuracy studies that validated the performance of their deep learning method using a new group of patients or retrospective datasets. Eight studies reported sensitivities of 80.28% to 100.0% and specificities of 84.0% to 99.0%. Two studies reported accuracies of 78.7% and 81.0%, and one study reported an area under the receiver operating characteristic curve of 0.955. In addition to diagnostic performance, one study also reported on patient satisfaction, showing that 78% of patients preferred an automated deep learning model over manual human grading. CONCLUSIONS Advantages of implementing deep learning-based algorithms in DR screening include reductions in manpower, cost of screening, and issues relating to intragrader and intergrader variability. However, limitations that may hinder such an implementation revolve particularly around ethical concerns regarding lack of trust in the diagnostic accuracy of computers. Considering both strengths and limitations, as well as the high performance of deep learning-based algorithms, automated DR classification using deep learning could be feasible in a real-world screening scenario.
Affiliation(s)
- Katrine B Nielsen
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark; Research Unit of Ophthalmology, Department of Clinical Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
- Mie L Lautrup
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark; Research Unit of Ophthalmology, Department of Clinical Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
- Jakob K H Andersen
- Steno Diabetes Center Odense, Odense, Denmark; SDU Robotics, The Mærsk Mc-Kinney Møller Institute, University of Southern Denmark, Odense, Denmark
- Thiusius R Savarimuthu
- SDU Robotics, The Mærsk Mc-Kinney Møller Institute, University of Southern Denmark, Odense, Denmark
- Jakob Grauslund
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark; Research Unit of Ophthalmology, Department of Clinical Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark; Steno Diabetes Center Odense, Odense, Denmark
127
Li Z, Keel S, Liu C, He M. Can Artificial Intelligence Make Screening Faster, More Accurate, and More Accessible? Asia Pac J Ophthalmol (Phila) 2018;7:436-441. [PMID: 30556381] [DOI: 10.22608/apo.2018438]
Abstract
Diabetic retinopathy, glaucoma, and age-related macular degeneration are leading causes of vision loss and blindness worldwide. They tend to be asymptomatic in the early phase of disease and therefore require active screening programs to identify the patients requiring referral and treatment. Deep learning-based artificial intelligence technology has recently become a major topic in the field of ophthalmology. This paper aimed to provide a general view of the major findings on the application of deep learning for the classification of eye diseases from common imaging modalities. In the future, it is expected that these technologies will be applied in real-world screening programs to improve their efficiency and affordability.
Affiliation(s)
- Zhixi Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Stuart Keel
- Centre for Eye Research Australia, Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia
- Chi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Centre for Eye Research Australia, Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia
128
129
Schmidt-Erfurth U, Sadeghipour A, Gerendas BS, Waldstein SM, Bogunović H. Artificial intelligence in retina. Prog Retin Eye Res 2018;67:1-29. [PMID: 30076935] [DOI: 10.1016/j.preteyeres.2018.07.004]
Abstract
Major advances in diagnostic technologies are offering unprecedented insight into the condition of the retina and beyond ocular disease. Digital images providing millions of morphological data points can be analyzed rapidly and non-invasively in a comprehensive manner using artificial intelligence (AI). Methods based on machine learning (ML), and particularly deep learning (DL), are able to identify, localize, and quantify pathological features in almost every macular and retinal disease. Convolutional neural networks thereby mimic the human brain's approach to object recognition, either by learning pathological features from labeled training sets (supervised ML) or by extrapolating from patterns recognized independently (unsupervised ML). The methods of AI-based retinal analysis are diverse and differ widely in their applicability, interpretability, and reliability across datasets and diseases. Fully automated AI-based systems have recently been approved for screening of diabetic retinopathy (DR). The overall potential of ML/DL includes screening and diagnostic grading, as well as guidance of therapy, with automated detection of disease activity and recurrences, quantification of therapeutic effects, and identification of relevant targets for novel therapeutic approaches. Prediction and prognostic conclusions further expand the potential benefit of AI in the retina, which will enable personalized health care as well as large-scale management and will empower the ophthalmologist to provide high-quality diagnosis and therapy and to deal successfully with the complexity of 21st-century ophthalmology.
Affiliation(s)
- Ursula Schmidt-Erfurth
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Amir Sadeghipour
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Bianca S Gerendas
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Sebastian M Waldstein
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Hrvoje Bogunović
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
130
Iragorri N, Spackman E. Assessing the value of screening tools: reviewing the challenges and opportunities of cost-effectiveness analysis. Public Health Rev 2018;39:17. [PMID: 30009081] [PMCID: PMC6043991] [DOI: 10.1186/s40985-018-0093-8]
Abstract
Background Screening is an important part of preventive medicine. Ideally, screening tools identify patients early enough to provide treatment and avoid or reduce symptoms and other consequences, improving health outcomes of the population at a reasonable cost. Cost-effectiveness analyses combine the expected benefits and costs of interventions and can be used to assess the value of screening tools. Objective This review seeks to evaluate the latest cost-effectiveness analyses on screening tools to identify the current challenges encountered and potential methods to overcome them. Methods A systematic literature search of EMBASE and MEDLINE identified cost-effectiveness analyses of screening tools published in 2017. Data extracted included the population, disease, screening tools, comparators, perspective, time horizon, discounting, and outcomes. Challenges and methodological suggestions were narratively synthesized. Results Four key categories were identified: screening pathways, pre-symptomatic disease, treatment outcomes, and non-health benefits. Not all studies included treatment outcomes; 15 studies (22%) did not include treatment following diagnosis. Quality-adjusted life years were used by 35 (51.4%) as the main outcome. Studies that undertook a societal perspective did not report non-health benefits and costs consistently. Two important challenges identified were (i) estimating the sojourn time, i.e., the time between when a patient can be identified by screening tests and when they would have been identified due to symptoms, and (ii) estimating the treatment effect and progression rates of patients identified early. Conclusions To capture all important costs and outcomes of a screening tool, screening pathways should be modeled, including patient treatment. False-positive and false-negative patients are also likely to have important costs and consequences and should be included in the analysis. As these patients are difficult to identify in routine data sources, common treatment patterns should be used to determine how they are likely to be treated. It is important that assumptions are clearly indicated and that their consequences are tested in sensitivity analyses, particularly the assumed independence of consecutive tests, the level of patient and provider compliance with guidelines, and sojourn times. As data are rarely available on the progression of undiagnosed patients, extrapolation from diagnosed patients may be necessary.
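The decision rule at the heart of such analyses is the incremental cost-effectiveness ratio (ICER): the extra cost of a strategy divided by the extra health benefit it produces. A minimal sketch with hypothetical costs and QALYs (these figures are illustrative, not taken from the review):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    when moving from the old strategy to the new one."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical values for illustration: screening costs more per patient
# than no screening but yields additional quality-adjusted life years.
value = icer(cost_new=1200.0, qaly_new=8.15, cost_old=900.0, qaly_old=8.05)
print(value)  # roughly 3000 (cost units per QALY gained)
```

The resulting ratio is then compared against a willingness-to-pay threshold; the sensitivity analyses the review calls for amount to recomputing this ratio under varied assumptions.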
Affiliation(s)
- Nicolas Iragorri
- Department of Community Health Sciences and O'Brien Institute for Public Health, University of Calgary, Teaching, Research and Wellness Building, 3280 Hospital Drive NW, Calgary, AB T2N 4Z6, Canada; Health Technology Assessment Unit, University of Calgary, Teaching, Research and Wellness Building, 3280 Hospital Drive NW, Calgary, AB T2N 4Z6, Canada
- Eldon Spackman
- Department of Community Health Sciences and O'Brien Institute for Public Health, University of Calgary, Teaching, Research and Wellness Building, 3280 Hospital Drive NW, Calgary, AB T2N 4Z6, Canada; Health Technology Assessment Unit, University of Calgary, Teaching, Research and Wellness Building, 3280 Hospital Drive NW, Calgary, AB T2N 4Z6, Canada
131
132
Sharma S, Maheshwari S, Shukla A. An intelligible deep convolution neural network based approach for classification of diabetic retinopathy. Bio-Algorithms and Med-Systems 2018. [DOI: 10.1515/bams-2018-0011]
Abstract
Deep convolutional neural networks (CNNs) have demonstrated their capabilities in modern-day medical image classification and analysis. The vital edge of deep CNNs over other techniques is their ability to train without hand-crafted expert features. Timely detection is very beneficial for the early treatment of disease. In this paper, a deep CNN architecture is proposed to classify fundus eye images as diabetic retinopathy or non-diabetic retinopathy. The Kaggle 2015 diabetic retinopathy competition dataset and the Messidor dataset are used in this study. The proposed deep CNN algorithm produces significant results, with a 93% area under the curve (AUC) for the Kaggle dataset and a 91% AUC for the Messidor dataset. The sensitivity and specificity for the Kaggle dataset are 90.22% and 85.13%, respectively; the corresponding values for the Messidor dataset are 91.07% and 80.23%. These results outperform those of many existing studies. The proposed architecture is a promising tool for diabetic retinopathy image classification.
133
Machine Learning Has Arrived! Ophthalmology 2018;124:1726-1728. [PMID: 29157423] [DOI: 10.1016/j.ophtha.2017.08.046]
134
Abstract
PURPOSE OF REVIEW To describe the emerging applications of deep learning in ophthalmology. RECENT FINDINGS Recent studies have shown that various deep learning models are capable of detecting and diagnosing, with high accuracy, various diseases afflicting the posterior segment of the eye. Most of the initial studies have centered on detection of referable diabetic retinopathy, age-related macular degeneration, and glaucoma. SUMMARY Deep learning has shown promising results in automated image analysis of fundus photographs and optical coherence tomography images. Additional testing and research are required to clinically validate this technology.
135
Chee RI, Darwish D, Fernandez-Vega A, Patel S, Jonas K, Ostmo S, Campbell JP, Chiang MF, Chan RVP. Retinal Telemedicine. Curr Ophthalmol Rep 2018;6:36-45. [PMID: 30140593] [PMCID: PMC6101043] [DOI: 10.1007/s40135-018-0161-8]
Abstract
PURPOSE OF REVIEW To provide an update and overview of the literature on current telemedicine applications in retina. RECENT FINDINGS The application of telemedicine to ophthalmology and the retina has been growing alongside advancing ophthalmic imaging technologies. Retinal telemedicine has most commonly been applied to diabetic retinopathy in adults and to retinopathy of prematurity in pediatric patients. Telemedicine has the potential to alleviate the growing demand for clinical evaluation of retinal diseases. Subsequently, automated image analysis and deep learning systems may facilitate efficient processing of the large and increasing number of images generated in telemedicine systems. Telemedicine may additionally improve access to education and standardized training through tele-education systems. SUMMARY Telemedicine can serve as a useful adjunct to, but not a complete replacement for, in-person clinical examination. Retinal telemedicine programs should be carefully and appropriately integrated into current clinical systems.
Affiliation(s)
- Ru-ik Chee
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago
- Dana Darwish
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago
- Samir Patel
- Department of Ophthalmology, Wills Eye Hospital, Oregon Health & Science University, Portland, OR, United States
- Karyn Jonas
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute at Oregon Health & Science University, Portland, OR, United States
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute at Oregon Health & Science University, Portland, OR, United States
- Michael F. Chiang
- Department of Ophthalmology, Casey Eye Institute at Oregon Health & Science University, Portland, OR, United States
- RV Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago
136
Wang K, Jayadev C, Nittala MG, Velaga SB, Ramachandra CA, Bhaskaranand M, Bhat S, Solanki K, Sadda SR. Automated detection of diabetic retinopathy lesions on ultrawidefield pseudocolour images. Acta Ophthalmol 2018;96:e168-e173. [PMID: 28926199] [DOI: 10.1111/aos.13528]
Abstract
PURPOSE We examined the sensitivity and specificity of an automated algorithm for detecting referral-warranted diabetic retinopathy (DR) on Optos ultrawidefield (UWF) pseudocolour images. METHODS Patients with diabetes were recruited for UWF imaging. A total of 383 subjects (754 eyes) were enrolled. Nonproliferative DR graded to be moderate or higher on the 5-level International Clinical Diabetic Retinopathy (ICDR) severity scale was considered as grounds for referral. The software automatically detected DR lesions using the previously trained classifiers and classified each image in the test set as referral-warranted or not warranted. Sensitivity, specificity and the area under the receiver operating curve (AUROC) of the algorithm were computed. RESULTS The automated algorithm achieved a 91.7%/90.3% sensitivity (95% CI 90.1-93.9/80.4-89.4) with a 50.0%/53.6% specificity (95% CI 31.7-72.8/36.5-71.4) for detecting referral-warranted retinopathy at the patient/eye levels, respectively; the AUROC was 0.873/0.851 (95% CI 0.819-0.922/0.804-0.894). CONCLUSION Diabetic retinopathy (DR) lesions were detected from Optos pseudocolour UWF images using an automated algorithm. Images were classified as referral-warranted DR with a high degree of sensitivity and moderate specificity. Automated analysis of UWF images could be of value in DR screening programmes and could allow for more complete and accurate disease staging.
Affiliation(s)
- Kang Wang
- Doheny Image Reading Center, Doheny Eye Institute, Los Angeles, CA, USA
- Department of Ophthalmology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Swetha B. Velaga
- Doheny Image Reading Center, Doheny Eye Institute, Los Angeles, CA, USA
- SriniVas R. Sadda
- Doheny Image Reading Center, Doheny Eye Institute, Los Angeles, CA, USA
- Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
137
Bawankar P, Shanbhag N, K. SS, Dhawan B, Palsule A, Kumar D, Chandel S, Sood S. Sensitivity and specificity of automated analysis of single-field non-mydriatic fundus photographs by Bosch DR Algorithm: comparison with mydriatic fundus photography (ETDRS) for screening in undiagnosed diabetic retinopathy. PLoS One 2017;12:e0189854. [PMID: 29281690] [PMCID: PMC5744962] [DOI: 10.1371/journal.pone.0189854]
Abstract
Diabetic retinopathy (DR) is a leading cause of blindness among working-age adults. Early diagnosis through effective screening programs is likely to improve vision outcomes. The ETDRS seven-standard-field 35-mm stereoscopic color retinal imaging (ETDRS) protocol for the dilated eye is elaborate, requires mydriasis, and is unsuitable for screening. We evaluated an image analysis application for the automated diagnosis of DR from non-mydriatic single-field images. Patients who had had diabetes for at least 5 years were included if they were 18 years or older. Patients already diagnosed with DR were excluded. Physiologic mydriasis was achieved by placing the subjects in a dark room. Images were captured using a Bosch Mobile Eye Care fundus camera and analyzed by the Retinal Imaging Bosch DR Algorithm for the diagnosis of DR. All subjects also subsequently underwent pharmacological mydriasis and ETDRS imaging. Non-mydriatic and mydriatic images were read by ophthalmologists, and the ETDRS readings were used as the gold standard for calculating the sensitivity and specificity of the software. A total of 564 consecutive subjects (1128 eyes) were recruited from six centers in India, each evaluated at a single outpatient visit. Forty-four of 1128 images (3.9%) could not be read by the algorithm and were categorized as inconclusive. In four subjects, neither eye provided an acceptable image; these four subjects were excluded, leaving 560 subjects (1084 eyes) for analysis. The algorithm correctly diagnosed 531 of 560 cases. The sensitivity, specificity, and positive and negative predictive values were 91%, 97%, 94%, and 95%, respectively. The Bosch DR Algorithm shows favorable sensitivity and specificity in diagnosing DR from non-mydriatic images and could greatly simplify screening for DR. This also has major implications for the use of telemedicine in screening for retinopathy in patients with diabetes mellitus.
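The four statistics reported above all derive from a 2x2 confusion matrix tallied against the gold standard. A minimal sketch, using hypothetical cell counts chosen only so the output lands near the reported 91%/97%/94%/95% (these are not the study's actual counts):

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening statistics from 2x2 confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for illustration only (not the study's tabulation).
sens, spec, ppv, npv = screening_metrics(tp=182, fp=12, fn=18, tn=388)
print(f"sens={sens:.2%} spec={spec:.2%} ppv={ppv:.2%} npv={npv:.2%}")
```

Note that, unlike sensitivity and specificity, the predictive values depend on disease prevalence in the screened population.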
Affiliation(s)
- Nita Shanbhag
- Department of Ophthalmology, Dr. D.Y Patil Hospital & Research Centre, Mumbai, India
- S. Smitha K.
- KLES Dr. Prabhakar Kore Hospital & Research Centre, Belgavi, Karnataka, India
- Bodhraj Dhawan
- NKP Salve Institute of Medical Sciences and Research Center, Nagpur, Maharashtra, India
- Devesh Kumar
- Jeffrey Cheah School of Medicine and Health Sciences, Monash University Malaysia, Johor Bahru, Malaysia
- Suneet Sood
- Jeffrey Cheah School of Medicine and Health Sciences, Monash University Malaysia, Johor Bahru, Malaysia
138
Sears C, Tandias R, Arroyo J. Deep learning algorithm. Surv Ophthalmol 2017;63:448-449. [PMID: 29248535] [DOI: 10.1016/j.survophthal.2017.12.003]
139
Ting DSW, Cheung CYL, Lim G, Tan GSW, Quang ND, Gan A, Hamzah H, Garcia-Franco R, San Yeo IY, Lee SY, Wong EYM, Sabanayagam C, Baskaran M, Ibrahim F, Tan NC, Finkelstein EA, Lamoureux EL, Wong IY, Bressler NM, Sivaprasad S, Varma R, Jonas JB, He MG, Cheng CY, Cheung GCM, Aung T, Hsu W, Lee ML, Wong TY. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes. JAMA 2017;318:2211-2223. [PMID: 29234807] [PMCID: PMC5820739] [DOI: 10.1001/jama.2017.18152]
Abstract
IMPORTANCE A deep learning system (DLS) is a machine learning technology with potential for screening diabetic retinopathy and related eye diseases. OBJECTIVE To evaluate the performance of a DLS in detecting referable diabetic retinopathy, vision-threatening diabetic retinopathy, possible glaucoma, and age-related macular degeneration (AMD) in community and clinic-based multiethnic populations with diabetes. DESIGN, SETTING, AND PARTICIPANTS Diagnostic performance of a DLS for diabetic retinopathy and related eye diseases was evaluated using 494 661 retinal images. The DLS was trained for detecting diabetic retinopathy (76 370 images), possible glaucoma (125 189 images), and AMD (72 610 images), and its performance was evaluated for detecting diabetic retinopathy (112 648 images), possible glaucoma (71 896 images), and AMD (35 948 images). Training of the DLS was completed in May 2016, and validation was completed in May 2017 for detection of referable diabetic retinopathy (moderate nonproliferative diabetic retinopathy or worse) and vision-threatening diabetic retinopathy (severe nonproliferative diabetic retinopathy or worse) using a primary validation dataset from the Singapore National Diabetic Retinopathy Screening Program and 10 multiethnic cohorts with diabetes. EXPOSURES Use of a deep learning system. MAIN OUTCOMES AND MEASURES Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity of the DLS, with professional graders (retinal specialists, general ophthalmologists, trained graders, or optometrists) as the reference standard. RESULTS In the primary validation dataset (n = 14 880 patients; 71 896 images; mean [SD] age, 60.2 [2.2] years; 54.6% men), the prevalence of referable diabetic retinopathy was 3.0%; vision-threatening diabetic retinopathy, 0.6%; possible glaucoma, 0.1%; and AMD, 2.5%. The AUC of the DLS for referable diabetic retinopathy was 0.936 (95% CI, 0.925-0.943), sensitivity was 90.5% (95% CI, 87.3%-93.0%), and specificity was 91.6% (95% CI, 91.0%-92.2%). For vision-threatening diabetic retinopathy, AUC was 0.958 (95% CI, 0.956-0.961), sensitivity was 100% (95% CI, 94.1%-100.0%), and specificity was 91.1% (95% CI, 90.7%-91.4%). For possible glaucoma, AUC was 0.942 (95% CI, 0.929-0.954), sensitivity was 96.4% (95% CI, 81.7%-99.9%), and specificity was 87.2% (95% CI, 86.8%-87.5%). For AMD, AUC was 0.931 (95% CI, 0.928-0.935), sensitivity was 93.2% (95% CI, 91.1%-99.8%), and specificity was 88.7% (95% CI, 88.3%-89.0%). For referable diabetic retinopathy in the 10 additional datasets, the AUC range was 0.889 to 0.983 (n = 40 752 images). CONCLUSIONS AND RELEVANCE In this evaluation of retinal images from multiethnic cohorts of patients with diabetes, the DLS had high sensitivity and specificity for identifying diabetic retinopathy and related eye diseases. Further research is necessary to evaluate the applicability of the DLS in health care settings and its utility for improving vision outcomes.
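An AUC such as those reported above can be computed nonparametrically: it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (the Mann-Whitney U statistic normalized by the number of positive-negative pairs). A small illustrative sketch on toy scores, not the study's pipeline:

```python
def auc_mann_whitney(labels, scores):
    """Nonparametric AUC: fraction of positive/negative score pairs in
    which the positive case outranks the negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy labels and classifier scores: one positive is outranked by one
# negative, so 15 of 16 pairs are ordered correctly.
labels = [1, 1, 0, 0, 1, 0, 0, 1]
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2, 0.1, 0.95]
print(auc_mann_whitney(labels, scores))  # 0.9375
```

Perfect separation of the two classes yields an AUC of 1.0, and a scoreless (random) classifier yields 0.5 in expectation.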
Collapse
Affiliation(s)
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
| | - Carol Yim-Lui Cheung
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Department of Ophthalmology and Visual Sciences, Chinese University of Hong Kong, Hong Kong SAR, China
| | - Gilbert Lim
- School of Computing, National University of Singapore
| | - Gavin Siew Wei Tan
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Nguyen D. Quang
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Alfred Gan
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Haslina Hamzah
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ian Yew San Yeo
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Shu Yen Lee
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Edmund Yick Mun Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Charumathi Sabanayagam
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Mani Baskaran
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Farah Ibrahim
- Duke-NUS Medical School, National University of Singapore, Singapore
- Ngiap Chuan Tan
- Duke-NUS Medical School, National University of Singapore, Singapore
- SingHealth Polyclinic, Singapore Health Service, Singapore
- Eric A. Finkelstein
- Lien Center for Palliative Care, Health Services and Systems Research Program, Duke-NUS Graduate Medical School, Singapore
- Ecosse L. Lamoureux
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Ian Y. Wong
- Department of Ophthalmology, The University of Hong Kong, Hong Kong SAR, China
- Sobha Sivaprasad
- Moorfields Eye Hospital National Health Service Foundation Trust, London, United Kingdom
- Rohit Varma
- University of Southern California Gayle and Edward Roski Eye Institute, Los Angeles, California
- Jost B. Jonas
- Department of Ophthalmology, Ruprecht-Karls University of Heidelberg, Heidelberg, Germany
- Ming Guang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Gemmy Chui Ming Cheung
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Tin Aung
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Wynne Hsu
- School of Computing, National University of Singapore
- Mong Li Lee
- School of Computing, National University of Singapore
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
140
Combination of Gray-Level and Moment Invariant for Automatic Blood Vessel Detection on Retinal Image. JOURNAL OF BIOMIMETICS BIOMATERIALS AND BIOMEDICAL ENGINEERING 2017. [DOI: 10.4028/www.scientific.net/jbbbe.34.10] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
Segmentation of blood vessels in the retina is a crucial step in the diagnosis of eye diseases such as diabetic retinopathy and glaucoma. This paper presents a supervised method for automatic segmentation of blood vessels in retinal images, based on a hybrid combination of gray-level and moment-invariant techniques. The method comprises four steps: preprocessing, feature extraction, classification, and post-processing. Preprocessing consists of three stages: vessel central light reflex removal, background homogenization, and vessel enhancement. Feature extraction computes a 7-D vector of gray-level and moment-invariant-based features for each pixel. A decision tree then classifies each pixel as vessel or non-vessel. Finally, post-processing removes the small artifacts that appear after classification. The proposed method was compared with the Vascular Tree method and the Morphological method; in the objective evaluation it achieved a sensitivity of 98.589, specificity of 55.544 and accuracy of 96.197.
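The pixel-wise pipeline described in this abstract (hand-crafted gray-level and moment features fed to a decision tree) can be sketched roughly as follows. This is an illustrative assumption, not the paper's method: the toy image, window size, and specific seven features are placeholders for the authors' 7-D gray-level/moment-invariant vector.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def patch_features(img, r, c, w=4):
    """7-D feature vector for the pixel at (r, c): four gray-level
    features plus three low-order central moments of the local window.
    Hypothetical stand-ins for the paper's gray-level/moment features."""
    p = img[r - w:r + w + 1, c - w:c + w + 1].astype(float)
    centre = float(img[r, c])
    # gray-level features: centre intensity relative to local statistics
    g = [centre - p.min(), p.max() - centre, centre - p.mean(), p.std()]
    # intensity-weighted second-order central moments of the window
    y, x = np.mgrid[-w:w + 1, -w:w + 1]
    m00 = p.sum() + 1e-9
    mu20 = (x ** 2 * p).sum() / m00
    mu02 = (y ** 2 * p).sum() / m00
    mu11 = (x * y * p).sum() / m00
    return np.array(g + [mu20 + mu02, mu20 - mu02, mu11])

# toy image: a bright vertical "vessel" on a noisy dark background
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (32, 32))
img[:, 14:17] += 0.8
labels = np.zeros((32, 32), bool)
labels[:, 14:17] = True

# build per-pixel training data away from the image border
coords = [(r, c) for r in range(4, 28) for c in range(4, 28)]
X = np.array([patch_features(img, r, c) for r, c in coords])
y = np.array([labels[r, c] for r, c in coords])

# decision-tree classification of vessel vs non-vessel pixels
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
print(clf.score(X, y))
```

A post-processing step (e.g. removing small connected components from the predicted vessel mask) would follow in a full implementation, as the abstract describes.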
141
Abstract
PURPOSE OF REVIEW As the number of people with diabetic retinopathy (DR) in the USA is expected to increase threefold by 2050, the need to reduce health care costs associated with screening for this treatable disease is ever present. Crowdsourcing and automated retinal image analysis (ARIA) are two areas where new technology has been applied to reduce costs in screening for DR. This paper reviews the current literature surrounding these new technologies. RECENT FINDINGS Crowdsourcing has high sensitivity for normal vs abnormal images; however, when multiple categories for severity of DR are added, specificity is reduced. ARIAs have higher sensitivity and specificity, and some commercial ARIA programs are already in use. Deep learning enhanced ARIAs appear to offer even more improvement in ARIA grading accuracy. The utilization of crowdsourcing and ARIAs may be a key to reducing the time and cost burden of processing images from DR screening.
Affiliation(s)
- Lucy I Mudie
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, 600 N. Wolfe St. Maumenee 711, Baltimore, MD, 21281, USA
- Xueyang Wang
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, 600 N. Wolfe St. Maumenee 711, Baltimore, MD, 21281, USA
- David S Friedman
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, 600 N. Wolfe St. Maumenee 711, Baltimore, MD, 21281, USA
- Christopher J Brady
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, 600 N. Wolfe St. Maumenee 711, Baltimore, MD, 21281, USA