1. Nakayama LF, Matos J, Quion J, Novaes F, Mitchell WG, Mwavu R, Hung CJYJ, Santiago APD, Phanphruk W, Cardoso JS, Celi LA. Unmasking biases and navigating pitfalls in the ophthalmic artificial intelligence lifecycle: A narrative review. PLOS Digital Health 2024; 3:e0000618. PMID: 39378192; PMCID: PMC11460710; DOI: 10.1371/journal.pdig.0000618.
Abstract
Over the past 2 decades, exponential growth in data availability, computational power, and newly available modeling techniques has led to an expansion in interest, investment, and research in Artificial Intelligence (AI) applications. Ophthalmology is one of many fields that seek to benefit from AI given the advent of telemedicine screening programs and the use of ancillary imaging. However, before AI can be widely deployed, further work must be done to avoid the pitfalls within the AI lifecycle. This review article breaks the AI lifecycle down into seven steps (data collection; defining the model task; data preprocessing and labeling; model development; model evaluation and validation; deployment; and finally, post-deployment evaluation, monitoring, and system recalibration) and delves into the risks for harm at each step and strategies for mitigating them.
Affiliation(s)
- Luis Filipe Nakayama: Department of Ophthalmology, Sao Paulo Federal University, Sao Paulo, Sao Paulo, Brazil; Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- João Matos: Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America; Faculty of Engineering (FEUP), University of Porto, Porto, Portugal; Institute for Systems and Computer Engineering (INESC TEC), Technology and Science, Porto, Portugal
- Justin Quion: Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Frederico Novaes: Department of Ophthalmology, Sao Paulo Federal University, Sao Paulo, Sao Paulo, Brazil
- Rogers Mwavu: Department of Information Technology, Mbarara University of Science and Technology, Mbarara, Uganda
- Claudia Ju-Yi Ji Hung: Department of Ophthalmology, Byers Eye Institute at Stanford, California, United States of America; Department of Computer Science and Information Engineering, National Taiwan University, Taiwan
- Alvina Pauline Dy Santiago: University of the Philippines Manila College of Medicine, Manila, Philippines; Division of Pediatric Ophthalmology, Department of Ophthalmology & Visual Sciences, Philippine General Hospital, Manila, Philippines; Section of Pediatric Ophthalmology, Eye and Vision Institute, The Medical City, Pasig, Philippines; Section of Pediatric Ophthalmology, International Eye and Institute, St. Luke’s Medical Center, Quezon City, Philippines
- Warachaya Phanphruk: Department of Ophthalmology, Faculty of Medicine, Khon Kaen University, Khon Kaen, Thailand
- Jaime S. Cardoso: Faculty of Engineering (FEUP), University of Porto, Porto, Portugal; Institute for Systems and Computer Engineering (INESC TEC), Technology and Science, Porto, Portugal
- Leo Anthony Celi: Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America; Department of Biostatistics, Harvard TH Chan School of Public Health, Boston, Massachusetts, United States of America; Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
2. Chakravarty A, Emre T, Leingang O, Riedl S, Mai J, Scholl HP, Sivaprasad S, Rueckert D, Lotery A, Schmidt-Erfurth U, Bogunović H. Morph-SSL: Self-Supervision With Longitudinal Morphing for Forecasting AMD Progression From OCT Volumes. IEEE Transactions on Medical Imaging 2024; 43:3224-3239. PMID: 38635383; PMCID: PMC7616690; DOI: 10.1109/tmi.2024.3390940.
Abstract
The lack of reliable biomarkers makes predicting the conversion from intermediate to neovascular age-related macular degeneration (iAMD, nAMD) a challenging task. We develop a Deep Learning (DL) model to predict the future risk of conversion of an eye from iAMD to nAMD from its current OCT scan. Although eye clinics generate vast amounts of longitudinal OCT scans to monitor AMD progression, only a small subset can be manually labeled for supervised DL. To address this issue, we propose Morph-SSL, a novel Self-supervised Learning (SSL) method for longitudinal data. It uses pairs of unlabelled OCT scans from different visits and involves morphing the scan from the previous visit to the next. The Decoder predicts the transformation for morphing and ensures a smooth feature manifold that can generate intermediate scans between visits through linear interpolation. Next, the Morph-SSL trained features are input to a Classifier which is trained in a supervised manner to model the cumulative probability distribution of the time to conversion with a sigmoidal function. Morph-SSL was trained on unlabelled scans of 399 eyes (3570 visits). The Classifier was evaluated with a five-fold cross-validation on 2418 scans from 343 eyes with clinical labels of the conversion date. The Morph-SSL features achieved an AUC of 0.779 in predicting the conversion to nAMD within the next 6 months, outperforming the same network when trained end-to-end from scratch or pre-trained with popular SSL methods. Automated prediction of the future risk of nAMD onset can enable timely treatment and individualized AMD management.
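The abstract summarizes the conversion-risk classifier only at a high level. As a rough illustration, the sketch below shows one way a sigmoidal cumulative probability of time to conversion could be modeled on top of learned features (PyTorch); the feature dimension, the two-parameter sigmoid parameterization, and the training loss are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConversionHead(nn.Module):
    """Maps a feature vector to a sigmoidal CDF over time-to-conversion.

    Hypothetical sketch: the network predicts a location mu and a scale s,
    and P(conversion by month t) = sigmoid((t - mu) / s).
    """
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 2)  # outputs (mu, log_s)

    def forward(self, features: torch.Tensor, months: torch.Tensor) -> torch.Tensor:
        mu, log_s = self.fc(features).unbind(dim=-1)
        scale = torch.exp(log_s)                     # keep the scale positive
        return torch.sigmoid((months - mu) / scale)  # cumulative probability

# Toy usage: probability that each of 4 eyes converts within 6 months.
head = ConversionHead(feat_dim=128)
feats = torch.randn(4, 128)                  # stand-in for Morph-SSL features
horizon = torch.full((4,), 6.0)              # 6-month horizon
p_conv = head(feats, horizon)
labels = torch.tensor([1.0, 0.0, 0.0, 1.0])  # observed conversion within 6 months
loss = nn.functional.binary_cross_entropy(p_conv, labels)
print(p_conv.detach(), loss.item())
```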
Affiliation(s)
- Arunava Chakravarty: Department of Ophthalmology and Optometry, Medical University of Vienna, 1090 Vienna, Austria
- Taha Emre: Department of Ophthalmology and Optometry, Medical University of Vienna, 1090 Vienna, Austria
- Oliver Leingang: Department of Ophthalmology and Optometry, Medical University of Vienna, 1090 Vienna, Austria
- Sophie Riedl: Department of Ophthalmology and Optometry, Medical University of Vienna, 1090 Vienna, Austria
- Julia Mai: Department of Ophthalmology and Optometry, Medical University of Vienna, 1090 Vienna, Austria
- Hendrik P.N. Scholl: Institute of Molecular and Clinical Ophthalmology Basel, 4031 Basel, Switzerland; Department of Ophthalmology, University of Basel, 4001 Basel, Switzerland
- Sobha Sivaprasad: NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, EC1V 2PD London, U.K.
- Daniel Rueckert: BioMedIA, Imperial College London, SW7 2AZ London, U.K.; Institute for AI and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, 80333 Munich, Germany
- Andrew Lotery: Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, SO17 1BJ Southampton, U.K.
- Ursula Schmidt-Erfurth: Department of Ophthalmology and Optometry, Medical University of Vienna, 1090 Vienna, Austria
- Hrvoje Bogunović: Department of Ophthalmology and Optometry and the Christian Doppler Laboratory for Artificial Intelligence in Retina, Medical University of Vienna, 1090 Vienna, Austria
3. Talcott KE, Baxter SL, Chen DK, Korot E, Lee A, Kim JE, Modi Y, Moshfeghi DM, Singh RP. American Society of Retina Specialists Artificial Intelligence Task Force Report. Journal of Vitreoretinal Diseases 2024; 8:373-380. PMID: 39148579; PMCID: PMC11323512; DOI: 10.1177/24741264241247602.
Abstract
Since the Artificial Intelligence Committee of the American Society of Retina Specialists developed the initial task force report in 2020, the artificial intelligence (AI) field has seen further adoption of US Food and Drug Administration-approved AI platforms and significant development of AI for various retinal conditions. With expansion of this technology comes further areas of challenges, including the data sources used in AI, the democracy of AI, commercialization, bias, and the need for provider education on the technology of AI. The overall focus of this committee report is to explore these recent issues as they relate to the continued development of AI and its integration into ophthalmology and retinal practice.
Affiliation(s)
- Katherine E. Talcott: Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, Cleveland, OH, USA; Cleveland Clinic Lerner College of Medicine, Cleveland, OH, USA
- Sally L. Baxter: Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, USA
- Dinah K. Chen: Department of Ophthalmology, NYU Grossman School of Medicine, New York University, NY, USA; Genentech/Roche, South San Francisco, CA, USA
- Edward Korot: Retina Specialists of Michigan, Grand Rapids, MI, USA; Horngren Family Vitreoretinal Center, Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA, USA
- Aaron Lee: Roger and Angie Karalis Johnson Retina Center, Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
- Judy E. Kim: Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Yasha Modi: Department of Ophthalmology, NYU Grossman School of Medicine, New York University, NY, USA
- Darius M. Moshfeghi: Horngren Family Vitreoretinal Center, Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA, USA
- Rishi P. Singh: Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, Cleveland, OH, USA; Cleveland Clinic Lerner College of Medicine, Cleveland, OH, USA; Cleveland Clinic Martin Health, Stuart, FL, USA
4. Rojas-Carabali W, Cifuentes-González C, Gutierrez-Sinisterra L, Heng LY, Tsui E, Gangaputra S, Sadda S, Nguyen QD, Kempen JH, Pavesio CE, Gupta V, Raman R, Miao C, Lee B, de-la-Torre A, Agrawal R. Managing a patient with uveitis in the era of artificial intelligence: Current approaches, emerging trends, and future perspectives. Asia Pac J Ophthalmol (Phila) 2024; 13:100082. PMID: 39019261; DOI: 10.1016/j.apjo.2024.100082.
Abstract
The integration of artificial intelligence (AI) with healthcare has opened new avenues for diagnosing, treating, and managing medical conditions with remarkable precision. Uveitis, a diverse group of rare eye conditions characterized by inflammation of the uveal tract, exemplifies the complexities in ophthalmology due to its varied causes, clinical presentations, and responses to treatments. Uveitis, if not managed promptly and effectively, can lead to significant visual impairment. However, its management requires specialized knowledge, which is often lacking, particularly in regions with limited access to health services. AI's capabilities in pattern recognition, data analysis, and predictive modelling offer significant potential to revolutionize uveitis management. AI can classify disease etiologies, analyze multimodal imaging data, predict outcomes, and identify new therapeutic targets. However, transforming these AI models into clinical applications and meeting patient expectations involves overcoming challenges like acquiring extensive, annotated datasets, ensuring algorithmic transparency, and validating these models in real-world settings. This review delves into the complexities of uveitis and the current AI landscape, discussing the development, opportunities, and challenges of AI from theoretical models to bedside application. It also examines the epidemiology of uveitis, the global shortage of uveitis specialists, and the disease's socioeconomic impacts, underlining the critical need for AI-driven approaches. Furthermore, it explores the integration of AI in diagnostic imaging and future directions in ophthalmology, aiming to highlight emerging trends that could transform management of a patient with uveitis and suggesting collaborative efforts to enhance AI applications in clinical practice.
Affiliation(s)
- William Rojas-Carabali: Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; Department of Ophthalmology, Tan Tock Seng Hospital, National Healthcare Group Eye Institute, Singapore
- Carlos Cifuentes-González: Department of Ophthalmology, Tan Tock Seng Hospital, National Healthcare Group Eye Institute, Singapore
- Laura Gutierrez-Sinisterra: Department of Ophthalmology, Tan Tock Seng Hospital, National Healthcare Group Eye Institute, Singapore
- Lim Yuan Heng: Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Edmund Tsui: Stein Eye Institute, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Sapna Gangaputra: Vanderbilt Eye Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Srinivas Sadda: Doheny Eye Institute, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- John H Kempen: Department of Ophthalmology, Massachusetts Eye and Ear/Harvard Medical School and Schepens Eye Research Institute, Boston, MA, USA; Department of Ophthalmology, Myungsung Medical College/MCM Comprehensive Specialized Hospital, Addis Abeba, Ethiopia; Sight for Souls, Bellevue, WA, USA
- Vishali Gupta: Advanced Eye Centre, Postgraduate Institute of Medical Education and Research (PGIMER), Chandigarh, India
- Rajiv Raman: Department of Ophthalmology, Sankara Nethralaya, Chennai, India
- Chunyan Miao: School of Computer Science and Engineering at Nanyang Technological University, Singapore
- Bernett Lee: Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Alejandra de-la-Torre: Neuroscience Research Group (NEUROS), Neurovitae Center for Neuroscience, Institute of Translational Medicine (IMT), Escuela de Medicina y Ciencias de la Salud, Universidad del Rosario, Bogotá, Colombia
- Rupesh Agrawal: Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; Department of Ophthalmology, Tan Tock Seng Hospital, National Healthcare Group Eye Institute, Singapore; Singapore Eye Research Institute, Singapore; Duke-NUS Medical School, Singapore
5. Crincoli E, Sacconi R, Querques L, Querques G. Artificial intelligence in age-related macular degeneration: state of the art and recent updates. BMC Ophthalmol 2024; 24:121. PMID: 38491380; PMCID: PMC10943791; DOI: 10.1186/s12886-024-03381-1.
Abstract
Age-related macular degeneration (AMD) represents a leading cause of vision loss and it is expected to affect 288 million people by 2040. During the last decade, machine learning technologies have shown great potential to revolutionize clinical management of AMD and support research for a better understanding of the disease. The aim of this review is to provide a panoramic description of all the applications of AI to AMD management and screening that have been analyzed in the recent literature. Deep learning (DL) can be effectively used to diagnose AMD and to predict the short-term risk of exudation and the need for injections within the next 2 years. Moreover, DL technology has the potential to customize anti-VEGF treatment choice with a higher accuracy than human experts. In addition, accurate prediction of VA response to treatment can be provided to patients with the use of ML models, which could considerably increase patients' compliance with treatment in favorable cases. Lastly, AI, especially in the form of DL, can effectively predict conversion to GA within 12 months and also suggest new biomarkers of conversion with an innovative reverse-engineering approach.
Affiliation(s)
- Emanuele Crincoli: Ophthalmology Unit, "Fondazione Policlinico Universitario A. Gemelli IRCCS", Rome, Italy
- Riccardo Sacconi: Department of Ophthalmology, University Vita-Salute IRCCS San Raffaele Scientific Institute, Via Olgettina, 60, 20132, Milan, Italy
- Lea Querques: Department of Ophthalmology, University Vita-Salute IRCCS San Raffaele Scientific Institute, Via Olgettina, 60, 20132, Milan, Italy
- Giuseppe Querques: Department of Ophthalmology, University Vita-Salute IRCCS San Raffaele Scientific Institute, Via Olgettina, 60, 20132, Milan, Italy
6. Chen R, Zhang W, Song F, Yu H, Cao D, Zheng Y, He M, Shi D. Translating color fundus photography to indocyanine green angiography using deep-learning for age-related macular degeneration screening. NPJ Digit Med 2024; 7:34. PMID: 38347098; PMCID: PMC10861476; DOI: 10.1038/s41746-024-01018-7.
Abstract
Age-related macular degeneration (AMD) is the leading cause of central vision impairment among the elderly. Effective and accurate AMD screening tools are urgently needed. Indocyanine green angiography (ICGA) is a well-established technique for detecting chorioretinal diseases, but its invasive nature and potential risks impede its routine clinical application. Here, we innovatively developed a deep-learning model capable of generating realistic ICGA images from color fundus photography (CF) using generative adversarial networks (GANs) and evaluated its performance in AMD classification. The model was developed with 99,002 CF-ICGA pairs from a tertiary center. The quality of the generated ICGA images underwent objective evaluation using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity measures (SSIM), etc., and subjective evaluation by two experienced ophthalmologists. The model generated realistic early, mid, and late-phase ICGA images, with SSIM ranging from 0.57 to 0.65. The subjective quality scores ranged from 1.46 to 2.74 on the five-point scale (1 refers to the real ICGA image quality, Kappa 0.79-0.84). Moreover, we assessed the application of translated ICGA images in AMD screening on an external dataset (n = 13,887) by calculating the area under the ROC curve (AUC) in classifying AMD. Combining generated ICGA with real CF images improved the accuracy of AMD classification, with the AUC increasing from 0.93 to 0.97 (P < 0.001). These results suggested that CF-to-ICGA translation can serve as a cross-modal data augmentation method to address the data hunger often encountered in deep-learning research, and as a promising add-on for population-based AMD screening. Real-world validation is warranted before clinical usage.
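The objective image-quality figures quoted above (MAE, PSNR, SSIM) can be reproduced for any real/generated image pair with standard tooling; a minimal sketch follows using scikit-image and NumPy, assuming 8-bit grayscale frames of equal size (the toy data are illustrative only).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_metrics(real: np.ndarray, generated: np.ndarray) -> dict:
    """MAE, PSNR and SSIM between a real and a generated ICGA frame.

    Both inputs are assumed to be 2-D uint8 arrays of the same size.
    """
    real = real.astype(np.float64)
    generated = generated.astype(np.float64)
    mae = np.mean(np.abs(real - generated))
    psnr = peak_signal_noise_ratio(real, generated, data_range=255)
    ssim = structural_similarity(real, generated, data_range=255)
    return {"MAE": mae, "PSNR": psnr, "SSIM": ssim}

# Toy usage with random images standing in for a CF-to-ICGA pair.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
b = np.clip(a + rng.normal(0, 10, size=a.shape), 0, 255).astype(np.uint8)
print(quality_metrics(a, b))
```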
Affiliation(s)
- Ruoyu Chen: Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Weiyi Zhang: Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Fan Song: Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Honghua Yu: Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, China
- Dan Cao: Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, China
- Yingfeng Zheng: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Mingguang He: Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Hong Kong SAR, China
- Danli Shi: Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
7. Cheung R, Trinh M, Tee YG, Nivison-Smith L. RPE Curvature Can Screen for Early and Intermediate AMD. Invest Ophthalmol Vis Sci 2024; 65:2. PMID: 38300558; PMCID: PMC10846343; DOI: 10.1167/iovs.65.2.2.
Abstract
Purpose Diagnosing AMD early optimizes clinical management. However, current diagnostic accuracy is limited by the subjectivity of qualitative diagnostic measures used in clinical practice. This study tests if RPE curvature could be an accurate, quantitative measure for AMD diagnosis. Methods Consecutive patients without AMD or normal aging changes (n = 111), with normal aging changes (n = 107), early AMD (n = 102) and intermediate AMD (n = 114) were recruited. RPE curvature was calculated based on the sinuosity method of measuring river curvature in environmental science. RPE and Bruch's membrane were manually segmented from optical coherence tomography B-scans and then their lengths automatically extracted using customized MATLAB code. RPE sinuosity was calculated as a ratio of RPE to Bruch's membrane length. Diagnostic accuracy was determined from area under the receiver operator characteristic curve (aROC). Results RPE sinuosity of foveal B-scans could distinguish any eyes with AMD (early or intermediate) from those without AMD (non-AMD or eyes with normal aging changes) with acceptable diagnostic accuracy (aROC = 0.775). Similarly, RPE sinuosity could identify intermediate AMD from all other groups (aROC = 0.871) and distinguish between early and intermediate AMD (aROC = 0.737). RPE sinuosity was significantly associated with known AMD lesions: reticular pseudodrusen (P < 0.0001) and drusen volume (P < 0.0001), but not physiological variables such as age, sex, and ethnicity. Conclusions RPE sinuosity is a simple, robust, quantitative biomarker that is amenable to automation and could enhance screening of AMD.
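The sinuosity measure described in the Methods is a simple length ratio between two segmented boundaries; the sketch below illustrates the calculation and an aROC evaluation with NumPy and scikit-learn, assuming each boundary is given as a per-A-scan height in pixels (the toy curves and labels are illustrative only).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def polyline_length(y: np.ndarray, dx: float = 1.0) -> float:
    """Arc length of a boundary given its height (in pixels) at each A-scan."""
    dy = np.diff(y)
    return float(np.sum(np.sqrt(dx**2 + dy**2)))

def rpe_sinuosity(rpe_y: np.ndarray, bm_y: np.ndarray) -> float:
    """Ratio of RPE length to Bruch's membrane length on the same B-scan."""
    return polyline_length(rpe_y) / polyline_length(bm_y)

# Toy usage: a wavy RPE over a nearly flat Bruch's membrane, then an aROC.
x = np.linspace(0, 6 * np.pi, 512)
rpe = 100 + 3 * np.sin(x)        # drusen-like undulation
bm = np.full_like(x, 108.0)      # flat reference membrane
print("sinuosity:", rpe_sinuosity(rpe, bm))

labels = np.array([0, 0, 0, 1, 1, 1])                       # 1 = AMD
sinuosity = np.array([1.00, 1.01, 1.02, 1.05, 1.08, 1.12])  # one value per eye
print("aROC:", roc_auc_score(labels, sinuosity))
```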
Affiliation(s)
- Rene Cheung: School of Optometry and Vision Science, University of New South Wales, Sydney, Australia; Centre for Eye Health, University of New South Wales, Sydney, Australia
- Matt Trinh: School of Optometry and Vision Science, University of New South Wales, Sydney, Australia; Centre for Eye Health, University of New South Wales, Sydney, Australia
- Yoh Ghen Tee: School of Optometry and Vision Science, University of New South Wales, Sydney, Australia; Centre for Eye Health, University of New South Wales, Sydney, Australia
- Lisa Nivison-Smith: School of Optometry and Vision Science, University of New South Wales, Sydney, Australia; Centre for Eye Health, University of New South Wales, Sydney, Australia
8. Heger KA, Waldstein SM. Artificial intelligence in retinal imaging: current status and future prospects. Expert Rev Med Devices 2024; 21:73-89. PMID: 38088362; DOI: 10.1080/17434440.2023.2294364.
Abstract
INTRODUCTION The steadily growing and aging world population, in conjunction with continuously increasing prevalences of vision-threatening retinal diseases, is placing an increasing burden on the global healthcare system. The main challenges within retinology involve identifying the comparatively few patients requiring therapy within the large mass, the assurance of comprehensive screening for retinal disease and individualized therapy planning. In order to sustain high-quality ophthalmic care in the future, the incorporation of artificial intelligence (AI) technologies into our clinical practice represents a potential solution. AREAS COVERED This review sheds light onto already realized and promising future applications of AI techniques in retinal imaging. The main attention is directed at the application in diabetic retinopathy and age-related macular degeneration. The principles of use in disease screening, grading, therapeutic planning and prediction of future developments are explained based on the currently available literature. EXPERT OPINION The recent accomplishments of AI in retinal imaging indicate that its implementation into our daily practice is likely to fundamentally change the ophthalmic healthcare system and bring us one step closer to the goal of individualized treatment. However, it must be emphasized that the aim is to optimally support clinicians by gradually incorporating AI approaches, rather than replacing ophthalmologists.
Affiliation(s)
- Katharina A Heger: Department of Ophthalmology, Landesklinikum Mistelbach-Gaenserndorf, Mistelbach, Austria
- Sebastian M Waldstein: Department of Ophthalmology, Landesklinikum Mistelbach-Gaenserndorf, Mistelbach, Austria
9. Talcott KE, Valentim CCS, Perkins SW, Ren H, Manivannan N, Zhang Q, Bagherinia H, Lee G, Yu S, D'Souza N, Jarugula H, Patel K, Singh RP. Automated Detection of Abnormal Optical Coherence Tomography B-scans Using a Deep Learning Artificial Intelligence Neural Network Platform. Int Ophthalmol Clin 2024; 64:115-127. PMID: 38146885; DOI: 10.1097/iio.0000000000000519.
10. Dow ER, Jeong HK, Katz EA, Toth CA, Wang D, Lee T, Kuo D, Allingham MJ, Hadziahmetovic M, Mettu PS, Schuman S, Carin L, Keane PA, Henao R, Lad EM. A Deep-Learning Algorithm to Predict Short-Term Progression to Geographic Atrophy on Spectral-Domain Optical Coherence Tomography. JAMA Ophthalmol 2023; 141:1052-1061. PMID: 37856139; PMCID: PMC10587827; DOI: 10.1001/jamaophthalmol.2023.4659.
Abstract
Importance The identification of patients at risk of progressing from intermediate age-related macular degeneration (iAMD) to geographic atrophy (GA) is essential for clinical trials aimed at preventing disease progression. DeepGAze is a fully automated and accurate convolutional neural network-based deep learning algorithm for predicting progression from iAMD to GA within 1 year from spectral-domain optical coherence tomography (SD-OCT) scans. Objective To develop a deep-learning algorithm based on volumetric SD-OCT scans to predict the progression from iAMD to GA during the year following the scan. Design, Setting, and Participants This retrospective cohort study included participants with iAMD at baseline and who either progressed or did not progress to GA within the subsequent 13 months. Participants were included from centers in 4 US states. Data set 1 included patients from the Age-Related Eye Disease Study 2 AREDS2 (Ancillary Spectral-Domain Optical Coherence Tomography) A2A study (July 2008 to August 2015). Data sets 2 and 3 included patients with imaging taken in routine clinical care at a tertiary referral center and associated satellites between January 2013 and January 2023. The stored imaging data were retrieved for the purpose of this study from July 1, 2022, to February 1, 2023. Data were analyzed from May 2021 to July 2023. Exposure A position-aware convolutional neural network with proactive pseudointervention was trained and cross-validated on Bioptigen SD-OCT volumes (data set 1) and validated on 2 external data sets comprising Heidelberg Spectralis SD-OCT scans (data sets 2 and 3). Main Outcomes and Measures Prediction of progression to GA within 13 months was evaluated with area under the receiver-operator characteristic curves (AUROC) as well as area under the precision-recall curve (AUPRC), sensitivity, specificity, positive predictive value, negative predictive value, and accuracy. Results The study included a total of 417 patients: 316 in data set 1 (mean [SD] age, 74 [8]; 185 [59%] female), 53 in data set 2, (mean [SD] age, 83 [8]; 32 [60%] female), and 48 in data set 3 (mean [SD] age, 81 [8]; 32 [67%] female). The AUROC for prediction of progression from iAMD to GA within 1 year was 0.94 (95% CI, 0.92-0.95; AUPRC, 0.90 [95% CI, 0.85-0.95]; sensitivity, 0.88 [95% CI, 0.84-0.92]; specificity, 0.90 [95% CI, 0.87-0.92]) for data set 1. The addition of expert-annotated SD-OCT features to the model resulted in no improvement compared to the fully autonomous model (AUROC, 0.95; 95% CI, 0.92-0.95; P = .19). On an independent validation data set (data set 2), the model predicted progression to GA with an AUROC of 0.94 (95% CI, 0.91-0.96; AUPRC, 0.92 [0.89-0.94]; sensitivity, 0.91 [95% CI, 0.74-0.98]; specificity, 0.80 [95% CI, 0.63-0.91]). At a high-specificity operating point, simulated clinical trial recruitment was enriched for patients progressing to GA within 1 year by 8.3- to 20.7-fold (data sets 2 and 3). Conclusions and Relevance The fully automated, position-aware deep-learning algorithm assessed in this study successfully predicted progression from iAMD to GA over a clinically meaningful time frame. The ability to predict imminent GA progression could facilitate clinical trials aimed at preventing the condition and could guide clinical decision-making regarding screening frequency or treatment initiation.
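The reported performance measures and the trial-enrichment factor follow directly from predicted probabilities and observed progression labels; a hedged sketch of that bookkeeping is shown below with scikit-learn, where the operating threshold and the toy data are assumptions rather than study values.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, confusion_matrix

def progression_metrics(y_true: np.ndarray, y_prob: np.ndarray, thr: float = 0.5) -> dict:
    """AUROC, AUPRC, sensitivity/specificity, and cohort enrichment at a threshold."""
    y_pred = (y_prob >= thr).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    base_rate = y_true.mean()                    # prevalence of 1-year progression
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    return {
        "AUROC": roc_auc_score(y_true, y_prob),
        "AUPRC": average_precision_score(y_true, y_prob),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        # How much a model-selected cohort is enriched for true progressors.
        "enrichment": ppv / base_rate,
    }

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)                              # toy progression labels
p = np.clip(y * 0.6 + rng.normal(0.3, 0.2, size=200), 0, 1)   # toy predicted risks
print(progression_metrics(y, p, thr=0.7))
```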
Affiliation(s)
- Eliot R. Dow: Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Hyeon Ki Jeong: Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, North Carolina
- Ella Arnon Katz: Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Cynthia A. Toth: Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Dong Wang: Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- Terry Lee: Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- David Kuo: Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Michael J. Allingham: Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Majda Hadziahmetovic: Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Priyatham S. Mettu: Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Stefanie Schuman: Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Lawrence Carin: Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina; King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Pearse A. Keane: University College London Institute of Ophthalmology, National Institute for Health and Care Research, Biomedical Research Centre, Moorfields Eye Hospital National Health Services Foundation Trust, London, United Kingdom
- Ricardo Henao: Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, North Carolina; Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina; King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Eleonora M. Lad: Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
11. Daich Varela M, Sen S, De Guimaraes TAC, Kabiri N, Pontikos N, Balaskas K, Michaelides M. Artificial intelligence in retinal disease: clinical application, challenges, and future directions. Graefes Arch Clin Exp Ophthalmol 2023; 261:3283-3297. PMID: 37160501; PMCID: PMC10169139; DOI: 10.1007/s00417-023-06052-x.
Abstract
Retinal diseases are a leading cause of blindness in developed countries, accounting for the largest share of visually impaired children, working-age adults (inherited retinal disease), and elderly individuals (age-related macular degeneration). These conditions need specialised clinicians to interpret multimodal retinal imaging, with diagnosis and intervention potentially delayed. With an increasing and ageing population, this is becoming a global health priority. One solution is the development of artificial intelligence (AI) software to facilitate rapid data processing. Herein, we review research offering decision support for the diagnosis, classification, monitoring, and treatment of retinal disease using AI. We have prioritised diabetic retinopathy, age-related macular degeneration, inherited retinal disease, and retinopathy of prematurity. There is cautious optimism that these algorithms will be integrated into routine clinical practice to facilitate access to vision-saving treatments, improve efficiency of healthcare systems, and assist clinicians in processing the ever-increasing volume of multimodal data, thereby also liberating time for doctor-patient interaction and co-development of personalised management plans.
Affiliation(s)
- Malena Daich Varela: UCL Institute of Ophthalmology, London, UK; Moorfields Eye Hospital, London, UK
- Nikolas Pontikos: UCL Institute of Ophthalmology, London, UK; Moorfields Eye Hospital, London, UK
- Michel Michaelides: UCL Institute of Ophthalmology, London, UK; Moorfields Eye Hospital, London, UK
12. Paul W, Burlina P, Mocharla R, Joshi N, Li Z, Gu S, Nanegrungsunk O, Lin K, Bressler SB, Cai CX, Kong J, Liu TYA, Moini H, Du W, Amer F, Chu K, Vitti R, Sepehrband F, Bressler NM. Accuracy of Artificial Intelligence in Estimating Best-Corrected Visual Acuity From Fundus Photographs in Eyes With Diabetic Macular Edema. JAMA Ophthalmol 2023; 141:677-685. PMID: 37289463; PMCID: PMC10251243; DOI: 10.1001/jamaophthalmol.2023.2271.
Abstract
Importance Best-corrected visual acuity (BCVA) is a measure used to manage diabetic macular edema (DME), sometimes suggesting development of DME or consideration of initiating, repeating, withholding, or resuming treatment with anti-vascular endothelial growth factor. Using artificial intelligence (AI) to estimate BCVA from fundus images could help clinicians manage DME by reducing the personnel needed for refraction, the time presently required for assessing BCVA, or even the number of office visits if imaged remotely. Objective To evaluate the potential application of AI techniques for estimating BCVA from fundus photographs with and without ancillary information. Design, Setting, and Participants Deidentified color fundus images taken after dilation were used post hoc to train AI systems to perform regression from image to BCVA and to evaluate resultant estimation errors. Participants were patients enrolled in the VISTA randomized clinical trial through 148 weeks wherein the study eye was treated with aflibercept or laser. The data from study participants included macular images, clinical information, and BCVA scores by trained examiners following protocol refraction and VA measurement on Early Treatment Diabetic Retinopathy Study (ETDRS) charts. Main Outcomes Primary outcome was regression evaluated by mean absolute error (MAE); the secondary outcome included percentage of predictions within 10 letters, computed over the entire cohort as well as over subsets categorized by baseline BCVA, determined from baseline through the 148-week visit. Results Analysis included 7185 macular color fundus images of the study and fellow eyes from 459 participants. Overall, the mean (SD) age was 62.2 (9.8) years, and 250 (54.5%) were male. The baseline BCVA score for the study eyes ranged from 73 to 24 letters (approximate Snellen equivalent 20/40 to 20/320). Using ResNet50 architecture, the MAE for the testing set (n = 641 images) was 9.66 (95% CI, 9.05-10.28); 33% of the values (95% CI, 30%-37%) were within 0 to 5 letters and 28% (95% CI, 25%-32%) within 6 to 10 letters. For BCVA of 100 letters or less but more than 80 letters (20/10 to 20/25, n = 161) and 80 letters or less but more than 55 letters (20/32 to 20/80, n = 309), the MAE was 8.84 letters (95% CI, 7.88-9.81) and 7.91 letters (95% CI, 7.28-8.53), respectively. Conclusions and Relevance This investigation suggests AI can estimate BCVA directly from fundus photographs in patients with DME, without refraction or subjective visual acuity measurements, often within 1 to 2 lines on an ETDRS chart, supporting this AI concept if additional improvements in estimates can be achieved.
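The abstract describes a ResNet50 regressing a single BCVA letter score from a fundus photograph and reporting mean absolute error plus the share of estimates within 10 letters; the sketch below outlines that setup in PyTorch/torchvision, with the image size, untrained weights, and toy tensors as stand-ins rather than the VISTA data or the authors' training pipeline.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# ResNet50 backbone with a single-output regression head for BCVA letter score.
model = resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

def mae_and_within10(pred: torch.Tensor, target: torch.Tensor) -> tuple[float, float]:
    """Mean absolute error in letters and fraction of estimates within 10 letters."""
    err = (pred - target).abs()
    return err.mean().item(), (err <= 10).float().mean().item()

# Toy forward pass on random "fundus photographs" (batch of 4, 3x224x224).
images = torch.randn(4, 3, 224, 224)
true_bcva = torch.tensor([[70.0], [55.0], [40.0], [82.0]])
with torch.no_grad():
    est_bcva = model(images)
loss = nn.functional.l1_loss(est_bcva, true_bcva)   # L1 loss is the MAE objective
print(mae_and_within10(est_bcva, true_bcva), loss.item())
```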
Affiliation(s)
- William Paul: Applied Physics Laboratory, Johns Hopkins University, Laurel, Maryland
- Philippe Burlina: Applied Physics Laboratory, Johns Hopkins University, Laurel, Maryland; Department of Computer Science and Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Maryland; Zoox, Foster City, California
- Rohita Mocharla: Applied Physics Laboratory, Johns Hopkins University, Laurel, Maryland
- Neil Joshi: Applied Physics Laboratory, Johns Hopkins University, Laurel, Maryland
- Zhuolin Li: Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Sophie Gu: Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland; Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York–Presbyterian Hospital, New York, New York
- Onnisa Nanegrungsunk: Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland; Department of Ophthalmology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Kira Lin: Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland; Ruiz Department of Ophthalmology and Visual Science at McGovern Medical School at UTHealth Houston, Houston, Texas
- Susan B. Bressler: Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Cindy X. Cai: Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Jun Kong: Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- T. Y. Alvin Liu: Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Hadi Moini: Regeneron Pharmaceuticals Inc, Tarrytown, New York
- Weiming Du: Regeneron Pharmaceuticals Inc, Tarrytown, New York
- Fouad Amer: Regeneron Pharmaceuticals Inc, Tarrytown, New York
- Karen Chu: Regeneron Pharmaceuticals Inc, Tarrytown, New York
- Robert Vitti: Regeneron Pharmaceuticals Inc, Tarrytown, New York
- Neil M. Bressler: Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland; Editor, JAMA Ophthalmology
13. Ruamviboonsuk P, Lai TYY, Chen SJ, Yanagi Y, Wong TY, Chen Y, Gemmy Cheung CM, Teo KYC, Sadda S, Gomi F, Chaikitmongkol V, Chang A, Lee WK, Kokame G, Koh A, Guymer R, Lai CC, Kim JE, Ogura Y, Chainakul M, Arjkongharn N, Hong Chan H, Lam DSC. Polypoidal Choroidal Vasculopathy: Updates on Risk Factors, Diagnosis, and Treatments. Asia Pac J Ophthalmol (Phila) 2023; 12:184-195. PMID: 36728294; DOI: 10.1097/apo.0000000000000573.
Abstract
There have been recent advances in basic research and clinical studies in polypoidal choroidal vasculopathy (PCV). A recent, large-scale, population-based study found systemic factors, such as male gender and smoking, were associated with PCV, and a recent systematic review reported plasma C-reactive protein, a systemic biomarker, was associated with PCV. Growing evidence points to an association between pachydrusen, recently proposed extracellular deposits associated with the thick choroid, and the risk of development of PCV. Many recent studies on diagnosis of PCV have focused on applying criteria from noninvasive multimodal retinal imaging without requirement of indocyanine green angiography. There have been attempts to develop deep learning models, a recent subset of artificial intelligence, for detecting PCV from different types of retinal imaging modality. Some of these deep learning models were found to have high performance when they were trained and tested on color retinal images with corresponding images from optical coherence tomography. The treatment of PCV is either a combination therapy using verteporfin photodynamic therapy and anti-vascular endothelial growth factor (VEGF), or anti-VEGF monotherapy, often used with a treat-and-extend regimen. New anti-VEGF agents may provide more durable treatment with similar efficacy, compared with existing anti-VEGF agents. It is not known if they can induce greater closure of polypoidal lesions, in which case, combination therapy may still be a mainstay. Recent evidence supports long-term follow-up of patients with PCV after treatment for early detection of recurrence, particularly in patients with incomplete closure of polypoidal lesions.
Affiliation(s)
- Timothy Y Y Lai: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Shih-Jen Chen: Department of Ophthalmology, Taipei Veterans General Hospital, Taipei, Taiwan
- Yasuo Yanagi: Department of Ophthalmology and Microtechnology, Yokohama City University, Yokohama, Japan
- Tien Yin Wong: Singapore National Eye Centre, Singapore, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore; School of Medicine, Tsinghua University, Beijing, China
- Youxin Chen: Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Chui Ming Gemmy Cheung: Singapore National Eye Centre, Singapore, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
- Kelvin Y C Teo: Singapore National Eye Centre, Singapore, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore
- Srinivas Sadda: Doheny Eye Institute, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA
- Fumi Gomi: Department of Ophthalmology, Hyogo Medical University, Hyogo, Japan
- Voraporn Chaikitmongkol: Retina Division, Department of Ophthalmology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Andrew Chang: Sydney Retina Clinic, Sydney Eye Hospital, University of Sydney, Sydney, NSW, Australia
- Gregg Kokame: Division of Ophthalmology, Department of Surgery, University of Hawaii School of Medicine, Honolulu, HI
- Adrian Koh: Eye & Retina Surgeons, Camden Medical Centre, Singapore, Singapore
- Robyn Guymer: Centre for Eye Research Australia, University of Melbourne, The Royal Victorian Eye and Ear Hospital, Melbourne, Australia
- Chi-Chun Lai: Department of Ophthalmology, Chang Gung Memorial Hospital, Keelung, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Judy E Kim: Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI
- Yuichiro Ogura: Graduate School of Medical Sciences, Nagoya City University, Nagoya, Japan
- Dennis S C Lam: The C-MER International Eye Research Center of The Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China
14. Xie L, Vaghefi E, Yang S, Han D, Marshall J, Squirrell D. Automation of Macular Degeneration Classification in the AREDS Dataset, Using a Novel Neural Network Design. Clin Ophthalmol 2023; 17:455-469. PMID: 36755888; PMCID: PMC9901462; DOI: 10.2147/opth.s396537.
Abstract
Purpose To create an ensemble of Convolutional Neural Networks (CNNs) capable of detecting and stratifying the risk of progressive age-related macular degeneration (AMD) from retinal photographs. Design Retrospective cohort study. Methods Three individual CNNs were developed and trained to detect 1) advanced AMD, 2) drusen size, and 3) the presence or absence of pigmentary abnormalities from macula-centered retinal images. The CNNs were then arranged in a "cascading" architecture to calculate the Age-related Eye Disease Study (AREDS) Simplified 5-level risk Severity score (Risk Score 0 to Risk Score 4) for test images. The process was repeated to create a simplified binary "low risk" (Risk Score 0-2) and "high risk" (Risk Score 3-4) classification. Participants There were a total of 188,006 images, of which 118,254 images were deemed gradable, representing 4591 patients, from the AREDS1 dataset. The gradable images were split into 50%/25%/25% ratios for training, validation, and test purposes. Main Outcome Measures The ability of the ensemble of CNNs using retinal images to predict an individual's risk of experiencing progression of their AMD based on the AREDS 5-step Simplified Severity Scale. Results When assessed against the 5-step Simplified Severity Scale, the results generated by the ensemble of CNNs achieved an accuracy of 80.43% (quadratic kappa 0.870). When assessed against the simplified binary (Low Risk/High Risk) classification, an accuracy of 98.08%, sensitivity of ≥85%, and specificity of ≥99% were achieved. Conclusion We have created an ensemble of neural networks, trained on the AREDS1 dataset, that is able to accurately calculate an individual's score on the AREDS 5-step Simplified Severity Scale for AMD. If the results presented were replicated, then this ensemble of CNNs could be used as a screening tool that has the potential to significantly improve health outcomes by identifying asymptomatic individuals who would benefit from AREDS2 macular supplements.
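The cascade described above combines per-image CNN outputs into the AREDS Simplified Severity score; the sketch below shows one way the final scoring step could be wired together, assuming the standard simplified-scale rules (1 point per eye for large drusen, 1 point per eye for pigmentary abnormalities, summed over both eyes) and ignoring the scale's special cases, so it is an illustration rather than the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class EyeFindings:
    """Binary outputs of the drusen-size and pigment-abnormality CNNs for one eye."""
    large_drusen: bool
    pigment_abnormality: bool

def simplified_severity_score(right: EyeFindings, left: EyeFindings) -> int:
    """AREDS Simplified Severity score (0-4), basic two-factor form only."""
    score = 0
    for eye in (right, left):
        score += int(eye.large_drusen) + int(eye.pigment_abnormality)
    return score

def risk_bucket(score: int) -> str:
    """Binary grouping used in the paper: scores 0-2 low risk, 3-4 high risk."""
    return "high risk" if score >= 3 else "low risk"

# Toy usage: right eye has both risk factors, left eye has large drusen only.
od = EyeFindings(large_drusen=True, pigment_abnormality=True)
os_ = EyeFindings(large_drusen=True, pigment_abnormality=False)
s = simplified_severity_score(od, os_)
print(s, risk_bucket(s))   # -> 3 high risk
```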
Affiliation(s)
- Li Xie: Toku Eyes Limited, Auckland, New Zealand
- Ehsan Vaghefi: Toku Eyes Limited, Auckland, New Zealand; School of Optometry and Vision Science, The University of Auckland, Auckland, New Zealand (corresponding author)
- Song Yang: Toku Eyes Limited, Auckland, New Zealand
- David Han: Toku Eyes Limited, Auckland, New Zealand; School of Optometry and Vision Science, The University of Auckland, Auckland, New Zealand
- David Squirrell: Toku Eyes Limited, Auckland, New Zealand; Department of Ophthalmology, Auckland District Health Board, Auckland, New Zealand
15. Morano J, Hervella ÁS, Rouco J, Novo J, Fernández-Vigo JI, Ortega M. Weakly-supervised detection of AMD-related lesions in color fundus images using explainable deep learning. Computer Methods and Programs in Biomedicine 2023; 229:107296. PMID: 36481530; DOI: 10.1016/j.cmpb.2022.107296.
Abstract
BACKGROUND AND OBJECTIVES Age-related macular degeneration (AMD) is a degenerative disorder affecting the macula, a key area of the retina for visual acuity. Nowadays, AMD is the most frequent cause of blindness in developed countries. Although some promising treatments have been proposed that effectively slow down its development, their effectiveness significantly diminishes in the advanced stages. This emphasizes the importance of large-scale screening programs for early detection. Nevertheless, implementing such programs for a disease like AMD is usually unfeasible, since the population at risk is large and the diagnosis is challenging. For the characterization of the disease, clinicians have to identify and localize certain retinal lesions. All this motivates the development of automatic diagnostic methods. In this sense, several works have achieved highly positive results for AMD detection using convolutional neural networks (CNNs). However, none of them incorporates explainability mechanisms linking the diagnosis to its related lesions to help clinicians to better understand the decisions of the models. This is specially relevant, since the absence of such mechanisms limits the application of automatic methods in the clinical practice. In that regard, we propose an explainable deep learning approach for the diagnosis of AMD via the joint identification of its associated retinal lesions. METHODS In our proposal, a CNN with a custom architectural setting is trained end-to-end for the joint identification of AMD and its associated retinal lesions. With the proposed setting, the lesion identification is directly derived from independent lesion activation maps; then, the diagnosis is obtained from the identified lesions. The training is performed end-to-end using image-level labels. Thus, lesion-specific activation maps are learned in a weakly-supervised manner. The provided lesion information is of high clinical interest, as it allows clinicians to assess the developmental stage of the disease. Additionally, the proposed approach allows to explain the diagnosis obtained by the models directly from the identified lesions and their corresponding activation maps. The training data necessary for the approach can be obtained without much extra work on the part of clinicians, since the lesion information is habitually present in medical records. This is an important advantage over other methods, including fully-supervised lesion segmentation methods, which require pixel-level labels whose acquisition is arduous. RESULTS The experiments conducted in 4 different datasets demonstrate that the proposed approach is able to identify AMD and its associated lesions with satisfactory performance. Moreover, the evaluation of the lesion activation maps shows that the models trained using the proposed approach are able to identify the pathological areas within the image and, in most cases, to correctly determine to which lesion they correspond. CONCLUSIONS The proposed approach provides meaningful information-lesion identification and lesion activation maps-that conveniently explains and complements the diagnosis, and is of particular interest to clinicians for the diagnostic process. Moreover, the data needed to train the networks using the proposed approach is commonly easy to obtain, what represents an important advantage in fields with particularly scarce data, such as medical imaging.
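The approach above derives image-level lesion predictions, and from them the diagnosis, out of per-lesion activation maps trained with image-level labels only; a minimal sketch of that pooling step is shown below in PyTorch, where the backbone features, map resolution, lesion count, and the max-based diagnosis rule are placeholders rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class LesionMapHead(nn.Module):
    """Produces one activation map per lesion and pools it to an image-level score."""
    def __init__(self, in_channels: int = 64, num_lesions: int = 4):
        super().__init__()
        self.map_conv = nn.Conv2d(in_channels, num_lesions, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        maps = self.map_conv(feats)               # (B, L, H, W) lesion activation maps
        scores = torch.amax(maps, dim=(2, 3))     # global max pooling per lesion
        lesion_prob = torch.sigmoid(scores)       # per-lesion presence probability
        # Diagnosis derived from the identified lesions (here: most activated lesion).
        amd_prob = lesion_prob.max(dim=1).values
        return maps, lesion_prob, amd_prob

head = LesionMapHead()
features = torch.randn(2, 64, 32, 32)             # stand-in for backbone features
maps, lesion_prob, amd_prob = head(features)
labels = torch.tensor([[1., 0., 1., 0.], [0., 0., 0., 0.]])  # image-level lesion labels
loss = nn.functional.binary_cross_entropy(lesion_prob, labels)
print(maps.shape, lesion_prob.shape, amd_prob.shape, loss.item())
```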
Affiliation(s)
- José Morano: Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Álvaro S Hervella: Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- José Rouco: Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Jorge Novo: Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- José I Fernández-Vigo: Department of Ophthalmology, Hospital Clínico San Carlos, Instituto de Investigación Sanitaria (IdISSC), Madrid, Spain; Department of Ophthalmology, Centro Internacional de Oftalmología Avanzada, Madrid, Spain
- Marcos Ortega: Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
16. Lee SH, Lee S, Lee J, Lee JK, Moon NJ. Effective encoder-decoder neural network for segmentation of orbital tissue in computed tomography images of Graves' orbitopathy patients. PLoS One 2023; 18:e0285488. PMID: 37163543; PMCID: PMC10171592; DOI: 10.1371/journal.pone.0285488.
Abstract
PURPOSE To propose a neural network (NN) that can effectively segment orbital tissue in computed tomography (CT) images of Graves' orbitopathy (GO) patients. METHODS We analyzed orbital CT scans from 701 GO patients diagnosed between 2010 and 2019 and devised an effective NN specializing in semantic orbital tissue segmentation in GO patients' CT images. After training four conventional NNs (Attention U-Net, DeepLab V3+, SegNet, and HarDNet-MSEG) and the proposed NN on the manual orbital tissue segmentations, we calculated the Dice coefficient and Intersection over Union for comparison. RESULTS CT images of the eyeball, four rectus muscles, the optic nerve, and the lacrimal gland tissues from all 701 patients were analyzed in this study. In the axial image with the largest eyeball area, the proposed NN achieved the best performance, with Dice coefficients of 98.2% for the eyeball, 94.1% for the optic nerve, 93.0% for the medial rectus muscle, and 91.1% for the lateral rectus muscle. The proposed NN also gave the best performance for the coronal image. Our qualitative analysis demonstrated that the proposed NN outputs provided more sophisticated orbital tissue segmentations for GO patients than the conventional NNs. CONCLUSION We concluded that our proposed NN exhibited improved CT image segmentation for GO patients over conventional NNs designed for semantic segmentation tasks.
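The Dice coefficient and Intersection over Union used above to compare the networks are short computations on binary masks; a minimal sketch follows in NumPy, with toy masks standing in for predicted and manual segmentations.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> tuple[float, float]:
    """Dice coefficient and Intersection over Union for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = (2 * inter + eps) / (pred.sum() + truth.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, truth).sum() + eps)
    return float(dice), float(iou)

# Toy usage: a predicted eyeball mask overlapping a manual segmentation.
truth = np.zeros((64, 64), dtype=bool)
truth[16:48, 16:48] = True
pred = np.zeros_like(truth)
pred[20:52, 16:48] = True
print(dice_and_iou(pred, truth))
```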
Collapse
Affiliation(s)
- Seung Hyeun Lee
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, Seoul, Korea
| | - Sanghyuck Lee
- Department of Artificial Intelligence, Chung-Ang University, Seoul, Korea
| | - Jaesung Lee
- Department of Artificial Intelligence, Chung-Ang University, Seoul, Korea
| | - Jeong Kyu Lee
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, Seoul, Korea
| | - Nam Ju Moon
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, Seoul, Korea
| |
Collapse
|
17
|
Shiihara H, Sonoda S, Terasaki H, Fujiwara K, Funatsu R, Shiba Y, Kumagai Y, Honda N, Sakamoto T. Wayfinding artificial intelligence to detect clinically meaningful spots of retinal diseases: Artificial intelligence to help retina specialists in real world practice. PLoS One 2023; 18:e0283214. [PMID: 36972243 PMCID: PMC10042340 DOI: 10.1371/journal.pone.0283214] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Accepted: 02/20/2023] [Indexed: 03/29/2023] Open
Abstract
AIM/BACKGROUND The aim of this study is to develop an artificial intelligence (AI) that aids in the thought process by providing retinal clinicians with clinically meaningful or abnormal findings rather than just a final diagnosis, i.e., a "wayfinding AI." METHODS Spectral domain optical coherence tomography B-scan images were classified into 189 normal and 111 diseased eyes. These were automatically segmented using a deep-learning-based boundary-layer detection model. During segmentation, the AI model calculates the probability of the boundary surface of the layer for each A-scan. If this probability distribution is not biased toward a single point, layer detection is defined as ambiguous. This ambiguity was quantified using entropy, and a value referred to as the ambiguity index was calculated for each OCT image. The ability of the ambiguity index to classify normal and diseased images, and to detect the presence or absence of abnormalities in each layer of the retina, was evaluated based on the area under the curve (AUC). A heatmap of each layer (an ambiguity map), whose color changes according to the ambiguity index value, was also created. RESULTS The ambiguity indices of the overall retina for the normal and disease-affected images (mean ± SD) were 1.76 ± 0.10 and 2.06 ± 0.22, respectively, a significant difference (p < 0.05). The AUC for distinguishing normal and disease-affected images using the ambiguity index was 0.93, and was 0.588 for the internal limiting membrane boundary, 0.902 for the nerve fiber layer/ganglion cell layer boundary, 0.920 for the inner plexiform layer/inner nuclear layer boundary, 0.882 for the outer plexiform layer/outer nuclear layer boundary, 0.926 for the ellipsoid zone line, and 0.866 for the retinal pigment epithelium/Bruch's membrane boundary. Three representative cases reveal the usefulness of an ambiguity map. CONCLUSIONS The present AI algorithm can pinpoint abnormal retinal lesions in OCT images, and their localization is apparent at a glance on an ambiguity map. As a wayfinding tool, this will help support clinicians' diagnostic processes.
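The ambiguity index described above can be illustrated with a short sketch: for each A-scan, the entropy of the boundary-position probability distribution is computed and then averaged over the image. The normalization and averaging steps are assumptions made for illustration; the authors' exact formulation may differ.

```python
import numpy as np

def ambiguity_index(boundary_probs: np.ndarray, eps: float = 1e-12) -> float:
    """boundary_probs: (n_ascans, depth) probability of the layer boundary at each depth."""
    p = boundary_probs / (boundary_probs.sum(axis=1, keepdims=True) + eps)
    entropy = -(p * np.log(np.clip(p, eps, 1.0))).sum(axis=1)  # high when mass is spread out
    return float(entropy.mean())                               # average over A-scans

sharp = np.eye(512)[:100]                  # confident detection: all mass at one depth
diffuse = np.full((100, 512), 1.0 / 512)   # ambiguous detection: uniform over depths
print(ambiguity_index(sharp), ambiguity_index(diffuse))        # ~0 vs ~log(512)
```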
Collapse
Affiliation(s)
- Hideki Shiihara
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
| | - Shozo Sonoda
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
- Sonoda Eye Clinic, Kagoshima, Japan
| | - Hiroto Terasaki
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
| | - Kazuki Fujiwara
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
| | - Ryoh Funatsu
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
| | | | | | | | - Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
| |
Collapse
|
18
|
Ganjdanesh A, Zhang J, Yan S, Chen W, Huang H. Multimodal Genotype and Phenotype Data Integration to Improve Partial Data-Based Longitudinal Prediction. J Comput Biol 2022; 29:1324-1345. [PMID: 36383766 PMCID: PMC9835299 DOI: 10.1089/cmb.2022.0378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Multimodal data analysis has attracted ever-increasing attention in the computational biology and bioinformatics community in recent years. However, existing multimodal learning approaches need all data modalities to be available at both the training and prediction stages, so they cannot be applied to many real-world biomedical applications, which often have a missing-modality problem because collecting all modalities is prohibitively costly. Meanwhile, two diagnosis-related pieces of information are of main interest during the examination of a subject regarding a chronic disease (with longitudinal progression): their current status (diagnosis) and how it will change before the next visit (longitudinal outcome). Correct responses to these queries can identify susceptible individuals and provide the means for early interventions. In this article, we develop a novel adversarial mutual learning framework for longitudinal disease progression prediction, allowing us to leverage the multiple data modalities available at training time to train a performant model that uses a single modality for prediction. Specifically, in our framework, a single-modal model (which utilizes the main modality) learns from a pretrained multimodal model (which accepts both main and auxiliary modalities as input) in a mutual learning manner to (1) infer outcome-related representations of the auxiliary modalities based on its own representations of the main modality during adversarial training and (2) successfully combine them to predict the longitudinal outcome. We apply our method to analyze retinal imaging genetics for the early diagnosis of age-related macular degeneration (AMD), that is, simultaneous assessment of the severity of AMD at the current visit and the prognosis of the condition at the subsequent visit. Our experiments using the Age-Related Eye Disease Study dataset show that our method is more effective than baselines at classifying patients' current AMD severity and forecasting their future severity.
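As a rough illustration of the mutual-learning idea (not the authors' exact method, which additionally involves adversarial training omitted here), the sketch below trains a single-modal student both to predict the outcome and to mimic the auxiliary-modality representations of a pretrained multimodal teacher; all architectures, losses, and weights are assumptions.

```python
import torch
import torch.nn as nn

teacher_aux_encoder = nn.Linear(16, 32)   # stands in for a pretrained (frozen) auxiliary-modality encoder
student_encoder = nn.Linear(64, 32)       # main-modality encoder (the only one used at prediction time)
student_head = nn.Linear(32, 1)           # longitudinal outcome prediction

params = list(student_encoder.parameters()) + list(student_head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
main_x, aux_x = torch.randn(8, 64), torch.randn(8, 16)
y = torch.randint(0, 2, (8, 1)).float()

for _ in range(10):
    opt.zero_grad()
    z_student = student_encoder(main_x)
    with torch.no_grad():                            # teacher stays fixed
        z_teacher = teacher_aux_encoder(aux_x)
    outcome_loss = nn.functional.binary_cross_entropy_with_logits(student_head(z_student), y)
    mimic_loss = nn.functional.mse_loss(z_student, z_teacher)   # infer auxiliary-related representations
    (outcome_loss + 0.1 * mimic_loss).backward()
    opt.step()
```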
Collapse
Affiliation(s)
- Alireza Ganjdanesh
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
| | - Jipeng Zhang
- Department of Biostatistics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
| | - Sarah Yan
- West Windsor-Plainsboro High School South, Princeton Junction, New Jersey, USA
| | - Wei Chen
- Department of Biostatistics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Department of Pediatrics, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Department of Human Genetics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
| | - Heng Huang
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
| |
Collapse
|
19
|
Jin K, Ye J. Artificial intelligence and deep learning in ophthalmology: Current status and future perspectives. ADVANCES IN OPHTHALMOLOGY PRACTICE AND RESEARCH 2022; 2:100078. [PMID: 37846285 PMCID: PMC10577833 DOI: 10.1016/j.aopr.2022.100078] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Revised: 08/01/2022] [Accepted: 08/18/2022] [Indexed: 10/18/2023]
Abstract
Background The ophthalmology field was among the first to adopt artificial intelligence (AI) in medicine. The availability of digitized ocular images and substantial data have made deep learning (DL) a popular topic. Main text At the moment, AI in ophthalmology is mostly used to improve disease diagnosis and assist decision-making for ophthalmic diseases such as diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD), cataract, and other anterior segment diseases. However, most of the AI systems developed to date are still in the experimental stages, with only a few having achieved clinical application. There are a number of reasons for this, including security, privacy, poor pervasiveness, trust, and explainability concerns. Conclusions This review summarizes AI applications in ophthalmology, highlighting significant clinical considerations for adopting AI techniques and discussing potential challenges and future directions.
Collapse
Affiliation(s)
- Kai Jin
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
| | - Juan Ye
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
| |
Collapse
|
20
|
Wongchaisuwat P, Thamphithak R, Jitpukdee P, Wongchaisuwat N. Application of Deep Learning for Automated Detection of Polypoidal Choroidal Vasculopathy in Spectral Domain Optical Coherence Tomography. Transl Vis Sci Technol 2022; 11:16. [PMID: 36219163 PMCID: PMC9580222 DOI: 10.1167/tvst.11.10.16] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Accepted: 08/29/2022] [Indexed: 11/25/2022] Open
Abstract
Objective To develop an automated polypoidal choroidal vasculopathy (PCV) screening model to distinguish PCV from wet age-related macular degeneration (wet AMD). Methods A retrospective review of spectral domain optical coherence tomography (SD-OCT) images was undertaken. The included SD-OCT images were classified into two distinct categories (PCV or wet AMD) prior to the development of the PCV screening model. The automated detection of PCV using the developed model was compared with the results of gold-standard fundus fluorescein angiography and indocyanine green (FFA + ICG) angiography. A framework of SHapley Additive exPlanations was used to interpret the results from the model. Results A total of 2334 SD-OCT images were included for training purposes, and an additional 1171 SD-OCT images were used for external validation. The ResNet attention model yielded superior performance, with average area under the curve values of 0.8 and 0.81 for the training and external validation data sets, respectively. The sensitivity/specificity calculated at the patient level was 100%/60% and 85%/71% for the training and external validation data sets, respectively. Conclusions A conventional FFA + ICG investigation to differentiate PCV from wet AMD requires intensive health care resources and adversely affects patients. A deep learning algorithm is proposed to automatically distinguish PCV from wet AMD. The developed algorithm exhibited promising performance for further development into an alternative PCV screening tool. Enhancement of the model's performance with additional data is needed prior to implementation of this diagnostic tool in real-world clinical practice. The invisibility of disease signs within SD-OCT images is the main limitation of the proposed model. Translational Relevance Basic research on deep learning algorithms was applied to differentiate PCV from wet AMD based on OCT images, benefiting the diagnostic process and minimizing the risks of ICG angiography.
Collapse
Affiliation(s)
- Papis Wongchaisuwat
- Department of Industrial Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand
| | - Ranida Thamphithak
- Department of Ophthalmology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
| | - Peerakarn Jitpukdee
- Department of Industrial Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand
| | - Nida Wongchaisuwat
- Department of Ophthalmology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
| |
Collapse
|
21
|
Huang X, Sun J, Gupta K, Montesano G, Crabb DP, Garway-Heath DF, Brusini P, Lanzetta P, Oddone F, Turpin A, McKendrick AM, Johnson CA, Yousefi S. Detecting glaucoma from multi-modal data using probabilistic deep learning. Front Med (Lausanne) 2022; 9:923096. [PMID: 36250081 PMCID: PMC9556968 DOI: 10.3389/fmed.2022.923096] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2022] [Accepted: 08/10/2022] [Indexed: 11/13/2022] Open
Abstract
Objective To assess the accuracy of probabilistic deep learning models in discriminating normal eyes and eyes with glaucoma from fundus photographs and visual fields. Design Algorithm development for discriminating normal and glaucoma eyes using data from a multicenter, cross-sectional, case-control study. Subjects and participants Fundus photograph and visual field data from 1,655 eyes of 929 normal and glaucoma subjects were used to develop and test the deep learning models, and data from an independent group of 196 eyes of 98 normal and glaucoma patients were used to validate them. Main outcome measures Accuracy and area under the receiver-operating characteristic curve (AUC). Methods Fundus photographs and OCT images were carefully examined by clinicians to identify glaucomatous optic neuropathy (GON). When GON was detected by the reader, the finding was further evaluated by another clinician. Three probabilistic deep convolutional neural network (CNN) models were developed using 1,655 fundus photographs, 1,655 visual fields, and 1,655 pairs of fundus photographs and visual fields collected from Compass instruments. The deep learning models were trained and tested using 80% of the fundus photographs and visual fields as the training set and 20% of the data as the testing set, and were further validated using an independent validation dataset. The performance of the probabilistic deep learning models was compared with that of the corresponding deterministic CNN models. Results The AUCs of the deep learning model in detecting glaucoma from fundus photographs, visual fields, and combined modalities using the development dataset were 0.90 (95% confidence interval: 0.89-0.92), 0.89 (0.88-0.91), and 0.94 (0.92-0.96), respectively. The AUCs using the independent validation dataset were 0.94 (0.92-0.95), 0.98 (0.98-0.99), and 0.98 (0.98-0.99), respectively, and the AUCs using an early glaucoma subset were 0.90 (0.88-0.91), 0.74 (0.73-0.75), and 0.91 (0.89-0.93), respectively. Eyes that were misclassified had significantly higher uncertainty in the likelihood of diagnosis than eyes that were classified correctly. The uncertainty level of the correctly classified eyes was much lower in the combined model than in the model based on visual fields only. The AUCs of the deterministic CNN model using fundus images, visual fields, and combined modalities based on the development dataset were 0.87 (0.85-0.90), 0.88 (0.84-0.91), and 0.91 (0.89-0.94); the AUCs based on the independent validation dataset were 0.91 (0.89-0.93), 0.97 (0.95-0.99), and 0.97 (0.96-0.99); and the AUCs based on an early glaucoma subset were 0.88 (0.86-0.91), 0.75 (0.73-0.77), and 0.92 (0.89-0.95), respectively. Conclusion and relevance Probabilistic deep learning models can detect glaucoma from multi-modal data with high accuracy. Our findings suggest that models based on combined visual field and fundus photograph modalities detect glaucoma with higher accuracy. While the probabilistic and deterministic CNN models provided similar performance, the probabilistic models generate a certainty level for the outcome, thus providing another level of confidence in decision-making.
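The per-eye uncertainty reported above can be illustrated with Monte Carlo dropout, used here as a stand-in for the study's probabilistic CNNs (an assumption; the exact probabilistic formulation may differ): repeated stochastic forward passes yield a mean prediction and a spread that serves as the uncertainty level.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 1))

def predict_with_uncertainty(model, x, n_samples: int = 50):
    model.train()                                    # keep dropout active at prediction time
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)       # mean glaucoma probability and its spread

features = torch.randn(4, 128)                       # e.g. joint fundus + visual-field features
mean_prob, uncertainty = predict_with_uncertainty(model, features)
print(mean_prob.squeeze().tolist(), uncertainty.squeeze().tolist())
```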
Collapse
Affiliation(s)
- Xiaoqin Huang
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, United States
| | - Jian Sun
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, United States
- German Center for Neurodegenerative Diseases (DZNE), Tübingen, Germany
| | - Krati Gupta
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, United States
| | - Giovanni Montesano
- ASST Santi Paolo e Carlo, University of Milan, Milan, Italy
- Department of Optometry and Visual Sciences, City University of London, London, United Kingdom
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
| | - David P. Crabb
- Department of Optometry and Visual Sciences, City University of London, London, United Kingdom
| | - David F. Garway-Heath
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
| | - Paolo Brusini
- Department of Ophthalmology, “Città di Udine” Health Center, Udine, Italy
| | - Paolo Lanzetta
- Ophthalmology Unit, Department of Medical and Biological Sciences, University of Udine, Udine, Italy
| | | | - Andrew Turpin
- School of Computing and Information System, University of Melbourne, Melbourne, VIC, Australia
| | - Allison M. McKendrick
- Department of Optometry and Vision Sciences, University of Melbourne, Melbourne, VIC, Australia
| | - Chris A. Johnson
- Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City, IA, United States
| | - Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, United States
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, United States
| |
Collapse
|
22
|
Charng J, Alam K, Swartz G, Kugelman J, Alonso-Caneiro D, Mackey DA, Chen FK. Deep learning: applications in retinal and optic nerve diseases. Clin Exp Optom 2022:1-10. [PMID: 35999058 DOI: 10.1080/08164622.2022.2111201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022] Open
Abstract
Deep learning (DL) represents a paradigm-shifting, burgeoning field of research with emerging clinical applications in optometry. Unlike traditional programming, which relies on specific human-set rules, DL works by exposing the algorithm to a large amount of annotated data and allowing the software to develop its own set of rules (i.e. learn) by adjusting the parameters inside the model (network) during a training process, in order to complete the task on its own. One major limitation of traditional programming is that, with complex tasks, it may require an extensive set of rules to complete the assignment accurately. Additionally, traditional programming can be susceptible to human bias arising from programmer experience. With the dramatic increase in the amount and complexity of clinical data, DL has been utilised to automate data analysis and thus assist clinicians in patient management. This review will present the latest advances in DL for managing posterior eye diseases, as well as DL-based solutions for patients with vision loss.
Collapse
Affiliation(s)
- Jason Charng
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
| | - Khyber Alam
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
| | - Gavin Swartz
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
| | - Jason Kugelman
- School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
| | - David Alonso-Caneiro
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
| | - David A Mackey
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
| | - Fred K Chen
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia; Department of Ophthalmology, Royal Perth Hospital, Western Australia, Perth, Australia
| |
Collapse
|
23
|
Chen JS, Baxter SL. Applications of natural language processing in ophthalmology: present and future. Front Med (Lausanne) 2022; 9:906554. [PMID: 36004369 PMCID: PMC9393550 DOI: 10.3389/fmed.2022.906554] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 05/31/2022] [Indexed: 11/13/2022] Open
Abstract
Advances in technology, including novel ophthalmic imaging devices and adoption of the electronic health record (EHR), have resulted in significantly increased data available for both clinical use and research in ophthalmology. While artificial intelligence (AI) algorithms have the potential to utilize these data to transform clinical care, current applications of AI in ophthalmology have focused mostly on image-based deep learning. Unstructured free text in the EHR represents a tremendous amount of underutilized data for big data analyses and predictive AI. Natural language processing (NLP) is a type of AI involved in processing human language that can be used to develop automated algorithms using these vast quantities of available text data. The purpose of this review was to introduce ophthalmologists to NLP by (1) reviewing current applications of NLP in ophthalmology and (2) exploring potential applications of NLP. We reviewed the current literature published in PubMed and Google Scholar for articles related to NLP and ophthalmology, and used ancestor search to expand the references. Overall, we found 19 published studies of NLP in ophthalmology. The majority of these publications (16) focused on extracting specific text, such as visual acuity, from free-text notes for the purposes of quantitative analysis. Other applications included domain embedding, predictive modeling, and topic modeling. Future ophthalmic applications of NLP may also focus on developing search engines for data within free-text notes, cleaning notes, automated question-answering, and translating ophthalmology notes for other specialties or for patients, especially with the growing interest in open notes. As medicine becomes more data-oriented, NLP offers increasing opportunities to augment our ability to harness free-text data and drive innovations in healthcare delivery and treatment of ophthalmic conditions.
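A toy example of the most common application identified in the review, rule-based extraction of a specific value such as visual acuity from free-text notes, is sketched below; the note text and regular expression are invented for illustration, and real pipelines are far more robust.

```python
import re

# Hypothetical clinical note and a simple pattern for Snellen visual acuity per eye
note = "OD: VA 20/40 with correction. OS: VA 20/200, no improvement with pinhole."
pattern = re.compile(r"\b(OD|OS)\b[^.]*?VA\s*(20/\d+)")

for eye, acuity in pattern.findall(note):
    print(eye, acuity)    # -> OD 20/40, then OS 20/200
```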
Collapse
Affiliation(s)
- Jimmy S. Chen
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, United States
- Health Department of Biomedical Informatics, University of California San Diego, La Jolla, CA, United States
| | - Sally L. Baxter
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, United States
- Health Department of Biomedical Informatics, University of California San Diego, La Jolla, CA, United States
| |
Collapse
|
24
|
Lee J, Seo W, Park J, Lim WS, Oh JY, Moon NJ, Lee JK. Neural network-based method for diagnosis and severity assessment of Graves' orbitopathy using orbital computed tomography. Sci Rep 2022; 12:12071. [PMID: 35840769 PMCID: PMC9287334 DOI: 10.1038/s41598-022-16217-z] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Accepted: 07/06/2022] [Indexed: 11/09/2022] Open
Abstract
Computed tomography (CT) has been widely used to diagnose Graves' orbitopathy (GO), and its utility is gradually increasing. To develop a neural network (NN)-based method for the diagnosis and severity assessment of GO using orbital CT, a specific type of NN optimized for diagnosing GO was developed and trained using 288 orbital CT scans obtained from patients with mild and moderate-to-severe GO and from normal controls. The developed NN was compared with three conventional NNs [GoogleNet Inception v1 (GoogLeNet), 50-layer Deep Residual Learning (ResNet-50), and the 16-layer Very Deep Convolutional Network from the Visual Geometry Group (VGG-16)]. The diagnostic performance was also compared with that of three oculoplastic specialists. The developed NN had an area under the receiver operating characteristic curve (AUC) of 0.979 for diagnosing patients with moderate-to-severe GO. Receiver operating characteristic (ROC) analysis yielded AUCs of 0.827 for GoogLeNet, 0.611 for ResNet-50, 0.540 for VGG-16, and 0.975 for the oculoplastic specialists for diagnosing moderate-to-severe GO. For the diagnosis of mild GO, the developed NN yielded an AUC of 0.895, which is better than the performance of the other NNs and the oculoplastic specialists. This study may contribute to NN-based interpretation of orbital CT for diagnosing various orbital diseases.
Collapse
Affiliation(s)
- Jaesung Lee
- School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea
| | - Wangduk Seo
- School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea
| | - Jaegyun Park
- School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea
| | - Won-Seon Lim
- School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea
| | - Ja Young Oh
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, 102 Heukseok-ro, Dongjak-gu, Seoul, 06973, Korea
| | - Nam Ju Moon
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, 102 Heukseok-ro, Dongjak-gu, Seoul, 06973, Korea
| | - Jeong Kyu Lee
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, 102 Heukseok-ro, Dongjak-gu, Seoul, 06973, Korea.
| |
Collapse
|
25
|
Yaghy A, Lee AY, Keane PA, Keenan TDL, Mendonca LSM, Lee CS, Cairns AM, Carroll J, Chen H, Clark J, Cukras CA, de Sisternes L, Domalpally A, Durbin MK, Goetz KE, Grassmann F, Haines JL, Honda N, Hu ZJ, Mody C, Orozco LD, Owsley C, Poor S, Reisman C, Ribeiro R, Sadda SR, Sivaprasad S, Staurenghi G, Ting DS, Tumminia SJ, Zalunardo L, Waheed NK. Artificial intelligence-based strategies to identify patient populations and advance analysis in age-related macular degeneration clinical trials. Exp Eye Res 2022; 220:109092. [PMID: 35525297 PMCID: PMC9405680 DOI: 10.1016/j.exer.2022.109092] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Revised: 03/18/2022] [Accepted: 04/20/2022] [Indexed: 11/04/2022]
Affiliation(s)
- Antonio Yaghy
- New England Eye Center, Tufts University Medical Center, Boston, MA, USA
| | - Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA; Karalis Johnson Retina Center, Seattle, WA, USA
| | - Pearse A Keane
- Moorfields Eye Hospital & UCL Institute of Ophthalmology, London, UK
| | - Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | | | - Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA; Karalis Johnson Retina Center, Seattle, WA, USA
| | | | - Joseph Carroll
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, 925 N 87th Street, Milwaukee, WI, 53226, USA
| | - Hao Chen
- Genentech, South San Francisco, CA, USA
| | | | - Catherine A Cukras
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | | | - Amitha Domalpally
- Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA
| | | | - Kerry E Goetz
- Office of the Director, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | | | - Jonathan L Haines
- Department of Population and Quantitative Health Sciences, Case Western Reserve University School of Medicine, Cleveland, OH, USA; Cleveland Institute of Computational Biology, Case Western Reserve University School of Medicine, Cleveland, OH, USA
| | | | - Zhihong Jewel Hu
- Doheny Eye Institute, University of California, Los Angeles, CA, USA
| | | | - Luz D Orozco
- Department of Bioinformatics, Genentech, South San Francisco, CA, 94080, USA
| | - Cynthia Owsley
- Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, AL, USA
| | - Stephen Poor
- Department of Ophthalmology, Novartis Institutes for Biomedical Research, Cambridge, MA, USA
| | | | | | - Srinivas R Sadda
- Doheny Eye Institute, David Geffen School of Medicine, University of California-Los Angeles, Los Angeles, CA, USA
| | - Sobha Sivaprasad
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
| | - Giovanni Staurenghi
- Department of Biomedical and Clinical Sciences Luigi Sacco, Luigi Sacco Hospital, University of Milan, Italy
| | - Daniel Sw Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore
| | - Santa J Tumminia
- Office of the Director, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | | | - Nadia K Waheed
- New England Eye Center, Tufts University Medical Center, Boston, MA, USA.
| |
Collapse
|
26
|
Alexopoulos P, Madu C, Wollstein G, Schuman JS. The Development and Clinical Application of Innovative Optical Ophthalmic Imaging Techniques. Front Med (Lausanne) 2022; 9:891369. [PMID: 35847772 PMCID: PMC9279625 DOI: 10.3389/fmed.2022.891369] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Accepted: 05/23/2022] [Indexed: 11/22/2022] Open
Abstract
The field of ophthalmic imaging has grown substantially over the last years. Massive improvements in image processing and computer hardware have allowed the emergence of multiple imaging techniques of the eye that can transform patient care. The purpose of this review is to describe the most recent advances in eye imaging and explain how new technologies and imaging methods can be utilized in a clinical setting. The introduction of optical coherence tomography (OCT) was a revolution in eye imaging and has since become the standard of care for a plethora of conditions. Its most recent iterations, OCT angiography, and visible light OCT, as well as imaging modalities, such as fluorescent lifetime imaging ophthalmoscopy, would allow a more thorough evaluation of patients and provide additional information on disease processes. Toward that goal, the application of adaptive optics (AO) and full-field scanning to a variety of eye imaging techniques has further allowed the histologic study of single cells in the retina and anterior segment. Toward the goal of remote eye care and more accessible eye imaging, methods such as handheld OCT devices and imaging through smartphones, have emerged. Finally, incorporating artificial intelligence (AI) in eye images has the potential to become a new milestone for eye imaging while also contributing in social aspects of eye care.
Collapse
Affiliation(s)
- Palaiologos Alexopoulos
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
| | - Chisom Madu
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
| | - Gadi Wollstein
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
- Center for Neural Science, College of Arts & Science, New York University, New York, NY, United States
| | - Joel S. Schuman
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
- Center for Neural Science, College of Arts & Science, New York University, New York, NY, United States
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
| |
Collapse
|
27
|
Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? LIFE (BASEL, SWITZERLAND) 2022; 12:life12070973. [PMID: 35888063 PMCID: PMC9321111 DOI: 10.3390/life12070973] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Revised: 05/25/2022] [Accepted: 06/01/2022] [Indexed: 11/22/2022]
Abstract
Color fundus photographs are the most common type of image used for automatic diagnosis of retinal diseases and abnormalities. As all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel in the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality along with segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural network-based systems, the green channel is most commonly used. However, from the previous works, no conclusion can be drawn regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
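The channel-wise experimental setup described above can be sketched as follows: a colour fundus photograph is split into its red, green, and blue channels so that a segmentation network can be trained on each channel separately. The file name, the synthetic fallback image, and the use of OpenCV are assumptions for illustration only.

```python
import numpy as np
import cv2

fundus_bgr = cv2.imread("fundus.png")                # hypothetical path; OpenCV loads images as BGR
if fundus_bgr is None:                               # fall back to a synthetic image so the sketch runs
    fundus_bgr = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)

blue, green, red = cv2.split(fundus_bgr)             # separate the three primary-colour channels

# e.g. train/evaluate a U-Net on the green channel alone, shaped (batch, channel, H, W)
green_input = green.astype(np.float32)[None, None, :, :] / 255.0
print(green_input.shape)
```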
Collapse
Affiliation(s)
- Sangeeta Biswas
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh; (M.I.A.K.); (M.T.H.)
| | - Md. Iqbal Aziz Khan
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh; (M.I.A.K.); (M.T.H.)
| | - Md. Tanvir Hossain
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh; (M.I.A.K.); (M.T.H.)
| | - Angkan Biswas
- CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh;
| | - Takayoshi Nakai
- Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan;
| | - Johan Rohdin
- Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic;
| |
Collapse
|
28
|
García-Layana A, López-Gálvez M, García-Arumí J, Arias L, Gea-Sánchez A, Marín-Méndez JJ, Sayar-Beristain O, Sedano-Gil G, Aslam TM, Minnella AM, Ibáñez IL, de Dios Hernández JM, Seddon JM. A Screening Tool for Self-Evaluation of Risk for Age-Related Macular Degeneration: Validation in a Spanish Population. Transl Vis Sci Technol 2022; 11:23. [PMID: 35749108 PMCID: PMC9234358 DOI: 10.1167/tvst.11.6.23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023] Open
Abstract
Purpose The objectives of this study were the creation and validation of a screening tool for age-related macular degeneration (AMD) for routine assessment by primary care physicians, ophthalmologists, other healthcare professionals, and the general population. Methods A simple, self-administered questionnaire (Simplified Théa AMD Risk-Assessment Scale [STARS] version 4.0), which included well-established risk factors for AMD such as family history, smoking, and dietary factors, was administered to patients during ophthalmology visits. A fundus examination was performed to determine the presence of large soft drusen, pigmentary abnormalities, or late AMD. Based on data from the questionnaire and the clinical examination, predictive models were developed to estimate the probability of the Age-Related Eye Disease Study (AREDS) score category (low risk/high risk). The models were evaluated by area under the receiver operating characteristic curve analysis. Results A total of 3854 subjects completed the questionnaire and underwent a fundus examination. Early/intermediate and late AMD were detected in 15.9% and 23.8% of the patients, respectively. A predictive model was developed with training, validation, and test datasets. The model in the test set had an area under the curve of 0.745 (95% confidence interval [CI] = 0.705-0.784), a positive predictive value of 0.500 (95% CI = 0.449-0.557), and a negative predictive value of 0.810 (95% CI = 0.770-0.844). Conclusions The STARS questionnaire version 4.0 and the model identify patients at high risk of developing late AMD. Translational Relevance The screening instrument described could be useful for evaluating the risk of late AMD in patients >55 years of age without an eye examination, which could lead to more timely referrals and encourage lifestyle changes.
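For reference, the evaluation metrics reported above (AUC, positive predictive value, and negative predictive value) can be computed as in the short sketch below; the data are synthetic and the 0.5 decision threshold is an assumption, not the study's cut-off.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)                                     # 1 = high-risk AREDS category
y_score = np.clip(0.4 * y_true + rng.normal(0.3, 0.2, 500), 0, 1)    # screening-model output

auc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, (y_score >= 0.5).astype(int)).ravel()
ppv, npv = tp / (tp + fp), tn / (tn + fn)
print(f"AUC={auc:.3f}  PPV={ppv:.3f}  NPV={npv:.3f}")
```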
Collapse
Affiliation(s)
- Alfredo García-Layana
- Retinal Pathologies and New Therapies Group, Experimental Ophthalmology Laboratory, Department of Ophthalmology, Clínica Universidad de Navarra, Pamplona, Spain; Navarra Institute for Health Research, IdiSNA, Pamplona, Spain; Red Temática de Investigación Cooperativa Sanitaria en Enfermedades Oculares (Oftared), Instituto de Salud Carlos III, Madrid, Spain
| | - Maribel López-Gálvez
- Red Temática de Investigación Cooperativa Sanitaria en Enfermedades Oculares (Oftared), Instituto de Salud Carlos III, Madrid, Spain; Retina Group, IOBA, Campus Miguel Delibes, Valladolid, Spain; Grupo de Ingeniería Biomédica, Universidad de Valladolid, Campus Miguel Delibes, Valladolid, Spain; Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, Valladolid, Spain
| | - José García-Arumí
- Department of Ophthalmology, Vall d'Hebron University Hospital, Barcelona, Spain
| | - Luis Arias
- Department of Ophthalmology, Bellvitge University Hospital, University of Barcelona, Barcelona, Spain
| | - Alfredo Gea-Sánchez
- Preventive Medicine and Public Health, School of Medicine, University of Navarra, Pamplona, Spain
| | | | | | | | - Tariq M. Aslam
- School of Pharmacy and Optometry, University of Manchester and Manchester Royal Eye Hospital, Manchester, UK
| | - Angelo M. Minnella
- UOC Oculistica, Università Cattolica del S. Cuore, Fondazione Policlinico Universitario A. Gemelli-IRCCS, Rome, Italy
| | - Isabel López Ibáñez
- Department of Family and Community Medicine, Centro de Salud Nápoles y Sicilia, Valencia, Spain
| | | | - Johanna M. Seddon
- Department of Ophthalmology and Visual Sciences, University of Massachusetts Medical School, Worcester, Massachusetts, USA
| |
Collapse
|
29
|
Dow ER, Keenan TDL, Lad EM, Lee AY, Lee CS, Loewenstein A, Eydelman MB, Chew EY, Keane PA, Lim JI. From Data to Deployment: The Collaborative Community on Ophthalmic Imaging Roadmap for Artificial Intelligence in Age-Related Macular Degeneration. Ophthalmology 2022; 129:e43-e59. [PMID: 35016892 PMCID: PMC9859710 DOI: 10.1016/j.ophtha.2022.01.002] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2021] [Revised: 12/16/2021] [Accepted: 01/04/2022] [Indexed: 01/25/2023] Open
Abstract
OBJECTIVE Health care systems worldwide are challenged to provide adequate care for the 200 million individuals with age-related macular degeneration (AMD). Artificial intelligence (AI) has the potential to make a significant, positive impact on the diagnosis and management of patients with AMD; however, the development of effective AI devices for clinical care faces numerous considerations and challenges, a fact evidenced by a current absence of Food and Drug Administration (FDA)-approved AI devices for AMD. PURPOSE To delineate the state of AI for AMD, including current data, standards, achievements, and challenges. METHODS Members of the Collaborative Community on Ophthalmic Imaging Working Group for AI in AMD attended an inaugural meeting on September 7, 2020, to discuss the topic. Subsequently, they undertook a comprehensive review of the medical literature relevant to the topic. Members engaged in meetings and discussion through December 2021 to synthesize the information and arrive at a consensus. RESULTS Existing infrastructure for robust AI development for AMD includes several large, labeled data sets of color fundus photography and OCT images; however, image data often do not contain the metadata necessary for the development of reliable, valid, and generalizable models. Data sharing for AMD model development is made difficult by restrictions on data privacy and security, although potential solutions are under investigation. Computing resources may be adequate for current applications, but knowledge of machine learning development may be scarce in many clinical ophthalmology settings. Despite these challenges, researchers have produced promising AI models for AMD for screening, diagnosis, prediction, and monitoring. Future goals include defining benchmarks to facilitate regulatory authorization and subsequent clinical setting generalization. CONCLUSIONS Delivering an FDA-authorized, AI-based device for clinical care in AMD involves numerous considerations, including the identification of an appropriate clinical application; acquisition and development of a large, high-quality data set; development of the AI architecture; training and validation of the model; and functional interactions between the model output and clinical end user. The research efforts undertaken to date represent starting points for the medical devices that eventually will benefit providers, health care systems, and patients.
Collapse
Affiliation(s)
- Eliot R Dow
- Byers Eye Institute, Stanford University, Palo Alto, California
| | - Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
| | - Eleonora M Lad
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
| | - Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington
| | - Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington
| | - Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Medical Center, Tel Aviv, Israel
| | - Malvina B Eydelman
- Office of Health Technology 1, Center of Devices and Radiological Health, Food and Drug Administration, Silver Spring, Maryland
| | - Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland.
| | - Pearse A Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom.
| | - Jennifer I Lim
- Department of Ophthalmology, University of Illinois at Chicago, Chicago, Illinois.
| |
Collapse
|
30
|
Lim JS, Hong M, Lam WST, Zhang Z, Teo ZL, Liu Y, Ng WY, Foo LL, Ting DSW. Novel technical and privacy-preserving technology for artificial intelligence in ophthalmology. Curr Opin Ophthalmol 2022; 33:174-187. [PMID: 35266894 DOI: 10.1097/icu.0000000000000846] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE OF REVIEW The application of artificial intelligence (AI) in medicine and ophthalmology has experienced exponential breakthroughs in recent years in diagnosis, prognosis, and aiding clinical decision-making. The use of digital data has also heralded the need for privacy-preserving technology to protect patient confidentiality and to guard against threats such as adversarial attacks. Hence, this review aims to outline novel AI-based systems for ophthalmology use, privacy-preserving measures, potential challenges, and future directions of each. RECENT FINDINGS Several key AI algorithms used to improve disease detection and outcomes include data-driven, image-driven, natural language processing (NLP)-driven, genomics-driven, and multimodality algorithms. However, deep learning systems are susceptible to adversarial attacks, and the use of data for training models is associated with privacy concerns. Several data protection methods address these concerns in the form of blockchain technology, federated learning, and generative adversarial networks. SUMMARY AI applications have vast potential to meet many eyecare needs, consequently reducing the burden on scarce healthcare resources. A pertinent challenge would be to maintain data privacy and confidentiality while supporting AI endeavors, where data protection methods would need to evolve rapidly with AI technology needs. Ultimately, for AI to succeed in medicine and ophthalmology, a balance would need to be found between innovation and privacy.
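One of the privacy-preserving approaches mentioned above, federated learning, can be illustrated with a minimal federated-averaging sketch in which each site trains on its own data and only model weights, never patient images, are shared and averaged; the model, data, and hyperparameters are placeholders, not any particular system described in the review.

```python
import copy
import torch
import torch.nn as nn

def local_update(model, data, target, lr=0.01, steps=5):
    """Train a copy of the global model on one site's private data and return its weights."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy_with_logits(model(data), target)
        loss.backward()
        opt.step()
    return model.state_dict()

def federated_average(state_dicts):
    """Average the weight tensors contributed by all sites."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

global_model = nn.Linear(32, 1)
sites = [(torch.randn(64, 32), torch.randint(0, 2, (64, 1)).float()) for _ in range(3)]
for _ in range(2):                                    # two communication rounds
    updates = [local_update(global_model, x, y) for x, y in sites]
    global_model.load_state_dict(federated_average(updates))
```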
Collapse
Affiliation(s)
- Jane S Lim
- Singapore National Eye Centre, Singapore Eye Research Institute
| | | | - Walter S T Lam
- Yong Loo Lin School of Medicine, National University of Singapore
| | - Zheting Zhang
- Lee Kong Chian School of Medicine, Nanyang Technological University
| | - Zhen Ling Teo
- Singapore National Eye Centre, Singapore Eye Research Institute
| | - Yong Liu
- National University of Singapore, DukeNUS Medical School, Singapore
| | - Wei Yan Ng
- Singapore National Eye Centre, Singapore Eye Research Institute
| | - Li Lian Foo
- Singapore National Eye Centre, Singapore Eye Research Institute
| | - Daniel S W Ting
- Singapore National Eye Centre, Singapore Eye Research Institute
| |
Collapse
|
31
|
Sedova A, Hajdu D, Datlinger F, Steiner I, Neschi M, Aschauer J, Gerendas BS, Schmidt-Erfurth U, Pollreisz A. Comparison of early diabetic retinopathy staging in asymptomatic patients between autonomous AI-based screening and human-graded ultra-widefield colour fundus images. Eye (Lond) 2022; 36:510-516. [PMID: 35132211 PMCID: PMC8873196 DOI: 10.1038/s41433-021-01912-4] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Revised: 11/02/2021] [Accepted: 12/16/2021] [Indexed: 02/01/2023] Open
Abstract
INTRODUCTION Comparison of diabetic retinopathy (DR) severity between autonomous artificial intelligence (AI)-based outputs from an FDA-approved screening system and human retina specialists' gradings from ultra-widefield (UWF) colour images. METHODS Asymptomatic diabetics without a previous diagnosis of DR were included in this prospective observational pilot study. Patients were imaged with the autonomous AI (IDx-DR, Digital Diagnostics). For each eye, two 45° colour fundus images were analysed by a secure server-based AI algorithm. UWF colour fundus imaging was performed using Optomap (Daytona, Optos). The International Clinical DR severity score was assessed both on a 7-field area projection (7F-mask) according to the Early Treatment Diabetic Retinopathy Study (ETDRS) and on the total gradable area (UWF full-field) up to the far periphery on UWF images. RESULTS Of the 54 patients included (n = 107 eyes), 32 were type 2 diabetics (11 females). Mean BCVA was 0.99 ± 0.25. The autonomous AI diagnosed 16 patients as negative for DR, 28 as having moderate DR, and 10 as having vision-threatening disease (severe DR, proliferative DR, diabetic macular oedema). Based on the 7F-mask grading, with the eye with the worse grading defining the DR stage, 23 patients were negative for DR, 11 showed mild, 19 moderate, and 1 severe DR. When the UWF full-field was analysed, 20 patients were negative for DR, while the numbers of patients with mild, moderate, and severe DR were 12, 21, and 1, respectively. CONCLUSIONS The autonomous AI-based DR examination demonstrates sufficient accuracy in diagnosing asymptomatic non-proliferative diabetic patients with referable DR, even compared with UWF imaging evaluated by human experts, offering a suitable method for DR screening.
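Agreement between the AI output and the human UWF grading, as compared above, can be summarized at the patient level with a simple agreement statistic; the sketch below uses Cohen's kappa on invented labels purely for illustration (the abstract reports staging counts rather than an agreement coefficient).

```python
from sklearn.metrics import cohen_kappa_score

# Patient-level severity calls from the AI system and from human UWF grading (invented labels)
ai_grade  = ["none", "moderate", "moderate", "vision-threatening", "none", "moderate"]
uwf_grade = ["none", "mild",     "moderate", "severe",             "none", "moderate"]

print(cohen_kappa_score(ai_grade, uwf_grade))   # chance-corrected agreement
```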
Collapse
Affiliation(s)
- Aleksandra Sedova
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - Dorottya Hajdu
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - Felix Datlinger
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - Irene Steiner
- Center for Medical Statistics, Informatics and Intelligent Systems, Section for Medical Statistics, Medical University of Vienna, Vienna, Austria
| | - Martina Neschi
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - Julia Aschauer
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - Bianca S Gerendas
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - Ursula Schmidt-Erfurth
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - Andreas Pollreisz
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria.
| |
Collapse
|
32
|
Ganjdanesh A, Zhang J, Chew EY, Ding Y, Huang H, Chen W. LONGL-Net: temporal correlation structure guided deep learning model to predict longitudinal age-related macular degeneration severity. PNAS NEXUS 2022; 1:pgab003. [PMID: 35360552 PMCID: PMC8962776 DOI: 10.1093/pnasnexus/pgab003] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Accepted: 11/15/2021] [Indexed: 01/28/2023]
Abstract
Age-related macular degeneration (AMD) is the principal cause of blindness in developed countries, and its prevalence will increase to 288 million people by 2040. Therefore, automated grading and prediction methods can be highly beneficial for recognizing subjects susceptible to late AMD and enabling clinicians to start preventive actions for them. Clinically, AMD severity is quantified from Color Fundus Photographs (CFP) of the retina, and many machine-learning-based methods have been proposed for grading AMD severity. However, few models have been developed to predict longitudinal progression status, i.e. predicting future late-AMD risk based on the current CFP, which is more clinically interesting. In this paper, we propose a new deep-learning-based classification model (LONGL-Net) that can simultaneously grade the current CFP and predict the longitudinal outcome, i.e. whether the subject will have late AMD at a future time-point. We design a new temporal-correlation-structure-guided Generative Adversarial Network model that learns the interrelations of temporal changes in CFPs at consecutive time-points and provides interpretability for the classifier's decisions by forecasting AMD symptoms in future CFPs. We used about 30,000 CFP images from 4,628 participants in the Age-Related Eye Disease Study. Our classifier showed an average AUC of 0.905 (95% CI: 0.886-0.922) and an average accuracy of 0.762 (95% CI: 0.733-0.792) on the 3-class classification problem of simultaneously grading the current time-point's AMD condition and predicting subjects' late-AMD progression at the future time-point. We further validated our model on the UK Biobank dataset, where it showed an average accuracy of 0.905 and sensitivity of 0.797 in grading 300 CFP images.
Collapse
Affiliation(s)
- Alireza Ganjdanesh
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
| | - Jipeng Zhang
- Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA 15213, USA
| | - Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
| | - Ying Ding
- Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA 15213, USA
| | - Heng Huang
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
| | - Wei Chen
- Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Division of Pulmonary Medicine, Department of Pediatrics, UPMC Children's Hospital of Pittsburgh, University of Pittsburgh, Pittsburgh, PA 15219, USA
| |
Collapse
|
33
|
Kumar H, Goh KL, Guymer RH, Wu Z. A clinical perspective on the expanding role of artificial intelligence in age-related macular degeneration. Clin Exp Optom 2022; 105:674-679. [PMID: 35073498 DOI: 10.1080/08164622.2021.2022961] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022] Open
Abstract
In recent years, there has been intense development of artificial intelligence (AI) techniques, which have the potential to improve the clinical management of age-related macular degeneration (AMD) and facilitate the prevention of irreversible vision loss from this condition. Such AI techniques could be used as clinical decision support tools to: (i) improve the detection of AMD by community eye health practitioners, (ii) enhance risk stratification to enable personalised monitoring strategies for those with the early stages of AMD, and (iii) enable early detection of signs indicative of possible choroidal neovascularisation allowing triaging of patients requiring urgent review. This review discusses the latest developments in AI techniques that show promise for these tasks, as well as how they may help in the management of patients being treated for choroidal neovascularisation and in accelerating the discovery of new treatments in AMD.
Collapse
Affiliation(s)
- Himeesh Kumar
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
| | - Kai Lyn Goh
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
| | - Robyn H Guymer
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
| | - Zhichao Wu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
| |
Collapse
|
34
|
Yuan TH, Yue ZS, Zhang GH, Wang L, Dou GR. Beyond the Liver: Liver-Eye Communication in Clinical and Experimental Aspects. Front Mol Biosci 2022; 8:823277. [PMID: 35004861 PMCID: PMC8740136 DOI: 10.3389/fmolb.2021.823277] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2021] [Accepted: 12/09/2021] [Indexed: 12/04/2022] Open
Abstract
Communication between organs participates in the regulation of body homeostasis under physiological conditions and in the progression of and adaptation to disease under pathological conditions. The communication between the liver and the eye has received increasing attention. In this review, we summarize molecular mediators that reflect the relationship between the liver and the eye and then extend this to their metabolic relationship. We also summarize typical diseases and phenotypes that reflect the liver-eye connection in the clinic, especially non-alcoholic fatty liver disease (NAFLD) and diabetic retinopathy (DR). The close connection between the liver and the eye operates through multiple pathways, including metabolism, oxidative stress, and inflammation. In addition, we present the liver-eye connection in traditional Chinese medicine and describe how artificial intelligence may exploit this close connection to help solve practical clinical problems. Attention to liver-eye communication will support a deeper and more comprehensive understanding of the links between liver and eye diseases and provide new ideas for potential therapeutic strategies.
Collapse
Affiliation(s)
- Tian-Hao Yuan
- Department of Ophthalmology, Eye Institute of Chinese PLA, Xijing Hospital, Fourth Military Medical University, Xi'an, China.,Department of The Cadet Team 6 of School of Basic Medicine, Fourth Military Medical University, Xi'an, China
| | - Zhen-Sheng Yue
- Department of Ophthalmology, Eye Institute of Chinese PLA, Xijing Hospital, Fourth Military Medical University, Xi'an, China.,Department of Hepatobiliary Surgery, Xijing Hospital, Fourth Military Medical University, Xi'an, China
| | - Guo-Heng Zhang
- Department of Ophthalmology, Eye Institute of Chinese PLA, Xijing Hospital, Fourth Military Medical University, Xi'an, China
| | - Lin Wang
- Department of Hepatobiliary Surgery, Xijing Hospital, Fourth Military Medical University, Xi'an, China
| | - Guo-Rui Dou
- Department of Ophthalmology, Eye Institute of Chinese PLA, Xijing Hospital, Fourth Military Medical University, Xi'an, China
| |
Collapse
|
35
|
Wang Z, Keane PA, Chiang M, Cheung CY, Wong TY, Ting DSW. Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
36
|
Chen JS, Coyner AS, Chan RP, Hartnett ME, Moshfeghi DM, Owen LA, Kalpathy-Cramer J, Chiang MF, Campbell JP. Deepfakes in Ophthalmology. OPHTHALMOLOGY SCIENCE 2021; 1:100079. [PMID: 36246951 PMCID: PMC9562356 DOI: 10.1016/j.xops.2021.100079] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Revised: 10/01/2021] [Accepted: 10/29/2021] [Indexed: 02/06/2023]
Abstract
Purpose Generative adversarial networks (GANs) are deep learning (DL) models that can create and modify realistic-appearing synthetic images, or deepfakes, from real images. The purpose of our study was to evaluate the ability of experts to discern synthesized retinal fundus images from real fundus images and to review the current uses and limitations of GANs in ophthalmology. Design Development and expert evaluation of a GAN and an informal review of the literature. Participants A total of 4282 image pairs of fundus images and retinal vessel maps acquired from a multicenter ROP screening program. Methods Pix2Pix HD, a high-resolution GAN, was first trained and validated on fundus and vessel map image pairs and subsequently used to generate 880 images from a held-out test set. Fifty synthetic images from this test set and 50 different real images were presented to 4 expert ROP ophthalmologists using a custom online system for evaluation of whether the images were real or synthetic. Literature was reviewed on PubMed and Google Scholar using combinations of the terms ophthalmology, GANs, generative adversarial networks, images, deepfakes, and synthetic. Ancestor search was performed to broaden results. Main Outcome Measures Expert ability to discern real versus synthetic images was evaluated using percent accuracy. Statistical significance was evaluated using a Fisher exact test, with P values ≤ 0.05 thresholded for significance. Results The expert majority correctly identified 59% of images as being real or synthetic (P = 0.1). Experts 1 to 4 correctly identified 54%, 58%, 49%, and 61% of images (P = 0.505, 0.158, 1.000, and 0.043, respectively). These results suggest that the majority of experts could not discern between real and synthetic images. Additionally, we identified 20 implementations of GANs in the ophthalmology literature, with applications in a variety of imaging modalities and ophthalmic diseases. Conclusions Generative adversarial networks can create synthetic fundus images that are indiscernible from real fundus images by expert ROP ophthalmologists. Synthetic images may improve dataset augmentation for DL, may be used in trainee education, and may have implications for patient privacy.
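The statistical analysis above (percent accuracy tested with a Fisher exact test) can be reproduced in outline with a small script; the contingency table below is one plausible construction with made-up counts, not the study's data.

```python
# Minimal sketch (not the study's analysis code): a Fisher exact test on a
# 2x2 table of true image source vs. an expert's "real/synthetic" call, one
# plausible way to test whether calls are associated with the image source.
from scipy.stats import fisher_exact

#                judged real   judged synthetic
table = [[29, 21],    # 50 truly real images (hypothetical split)
         [25, 25]]    # 50 truly synthetic images (hypothetical split)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.3f}")  # P <= 0.05 -> discernible
```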
Collapse
Affiliation(s)
- Jimmy S. Chen
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
| | - Aaron S. Coyner
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
| | - R.V. Paul Chan
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, Illinois
| | - M. Elizabeth Hartnett
- Department of Ophthalmology, John A. Moran Eye Center, University of Utah, Salt Lake City, Utah
| | - Darius M. Moshfeghi
- Byers Eye Institute, Horngren Family Vitreoretinal Center, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, California
| | - Leah A. Owen
- Department of Ophthalmology, John A. Moran Eye Center, University of Utah, Salt Lake City, Utah
| | - Jayashree Kalpathy-Cramer
- Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, Massachusetts
- Massachusetts General Hospital & Brigham and Women’s Hospital Center for Clinical Data Science, Boston, Massachusetts
| | - Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
| | - J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Correspondence: J. Peter Campbell, MD, MPH, Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, 515 SW Campus Drive, Portland, OR 97239.
| |
Collapse
|
37
|
Feehan M, Owen LA, McKinnon IM, DeAngelis MM. Artificial Intelligence, Heuristic Biases, and the Optimization of Health Outcomes: Cautionary Optimism. J Clin Med 2021; 10:5284. [PMID: 34830566 PMCID: PMC8620813 DOI: 10.3390/jcm10225284] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2021] [Revised: 11/03/2021] [Accepted: 11/09/2021] [Indexed: 01/31/2023] Open
Abstract
The use of artificial intelligence (AI) and machine learning (ML) in clinical care offers great promise to improve patient health outcomes and reduce health inequity across patient populations. However, inherent biases in these applications, and the consequent potential for harm, can limit current use. Multi-modal workflows designed to minimize these limitations in the development, implementation, and evaluation of ML systems in real-world settings are needed to improve efficacy while reducing bias and the risk of potential harms. Comprehensive consideration of rapidly evolving AI technologies and their inherent risks of bias, the expanding volume and nature of data sources, and the evolving regulatory landscape can contribute meaningfully to the development of AI-enhanced clinical decision making and the reduction of health inequity.
Collapse
Affiliation(s)
- Michael Feehan
- Cerner Enviza, Kansas City, MO 64117, USA;
- Department of Population Health Sciences, University of Utah School of Medicine, Salt Lake City, UT 84132, USA;
- Department of Ophthalmology, Ross Eye Institute, Jacobs School of Medicine and Biomedical Sciences, State University of New York at Buffalo, Buffalo, NY 14203, USA
| | - Leah A. Owen
- Department of Population Health Sciences, University of Utah School of Medicine, Salt Lake City, UT 84132, USA;
- Department of Ophthalmology, Ross Eye Institute, Jacobs School of Medicine and Biomedical Sciences, State University of New York at Buffalo, Buffalo, NY 14203, USA
- Department of Ophthalmology and Visual Sciences, University of Utah School of Medicine, Salt Lake City, UT 84132, USA
- Department of Obstetrics and Gynecology, University of Utah School of Medicine, Salt Lake City, UT 84132, USA
| | | | - Margaret M. DeAngelis
- Department of Population Health Sciences, University of Utah School of Medicine, Salt Lake City, UT 84132, USA;
- Department of Ophthalmology, Ross Eye Institute, Jacobs School of Medicine and Biomedical Sciences, State University of New York at Buffalo, Buffalo, NY 14203, USA
- Department of Ophthalmology and Visual Sciences, University of Utah School of Medicine, Salt Lake City, UT 84132, USA
- Genetics, Genomics and Bioinformatics Graduate Program and Neuroscience Graduate Program, Jacobs, School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY 14203, USA
- Veterans Administration Western New York Healthcare System, Buffalo, NY 14212, USA
| |
Collapse
|
38
|
Wassan JT, Zheng H, Wang H. Role of Deep Learning in Predicting Aging-Related Diseases: A Scoping Review. Cells 2021; 10:cells10112924. [PMID: 34831148 PMCID: PMC8616301 DOI: 10.3390/cells10112924] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Revised: 10/22/2021] [Accepted: 10/26/2021] [Indexed: 11/16/2022] Open
Abstract
Aging refers to progressive physiological changes in a cell, an organ, or the whole body of an individual over time. Aging-related diseases are highly prevalent and can impact an individual's physical health. Recently, artificial intelligence (AI) methods have been used to predict aging-related diseases and issues, aiding clinical providers in decision-making based on patients' medical records. Deep learning (DL), one of the most recent generations of AI technologies, has made rapid progress in the early prediction and classification of aging-related issues. In this paper, we performed a scoping review of publications using DL approaches to predict common aging-related diseases, such as age-related macular degeneration, cardiovascular and respiratory diseases, arthritis, Alzheimer's disease, and lifestyle patterns related to disease progression. Google Scholar, IEEE, and PubMed were used to search for DL papers on common aging-related issues published between January 2017 and August 2021. These papers were reviewed and evaluated, and the findings were summarized. Overall, 34 studies met the inclusion criteria. These studies indicate that DL could help clinicians diagnose disease at its early stages by mapping diagnostic predictions onto observable clinical presentations and by achieving high predictive performance (e.g., more than 90% accurate predictions of diseases in aging).
Collapse
Affiliation(s)
| | - Huiru Zheng
- School of Computing, Ulster University, Belfast BT15 1ED, UK;
- Correspondence:
| | - Haiying Wang
- School of Computing, Ulster University, Belfast BT15 1ED, UK;
| |
Collapse
|
39
|
Hemelings R, Elen B, Barbosa-Breda J, Blaschko MB, De Boever P, Stalmans I. Deep learning on fundus images detects glaucoma beyond the optic disc. Sci Rep 2021; 11:20313. [PMID: 34645908 PMCID: PMC8514536 DOI: 10.1038/s41598-021-99605-1] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Accepted: 09/21/2021] [Indexed: 02/07/2023] Open
Abstract
Although unprecedented sensitivity and specificity values are reported, recent glaucoma detection deep learning models lack decision transparency. Here, we propose a methodology that advances explainable deep learning in the field of glaucoma detection and estimation of the vertical cup-disc ratio (VCDR), an important risk factor. We trained and evaluated deep learning models using fundus images processed under defined cropping policies. We defined the crop radius as a percentage of image size, centered on the optic nerve head (ONH), over an equidistantly spaced range from 10% to 60% (ONH crop policy). The inverse of the cropping mask was also applied (periphery crop policy). Models trained on original images achieved an area under the curve (AUC) of 0.94 [95% CI 0.92-0.96] for glaucoma detection and a coefficient of determination (R2) of 77% [95% CI 0.77-0.79] for VCDR estimation. Models trained on images without the ONH still achieved substantial performance (AUC of 0.88 [95% CI 0.85-0.90] for glaucoma detection and an R2 score of 37% [95% CI 0.35-0.40] for VCDR estimation in the most extreme setup of 60% ONH crop). Our findings provide the first irrefutable evidence that deep learning can detect glaucoma from fundus image regions outside the ONH.
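The ONH and periphery crop policies can be sketched as a circular mask whose radius is a percentage of image size; the helper below is a minimal illustration with a placeholder image and an assumed ONH centre, not the authors' preprocessing code.

```python
# Minimal sketch (not the authors' pipeline): an ONH-centred circular crop and
# its inverse "periphery" crop, with the radius given as a percentage of image
# size. The image and ONH centre below are placeholders.
import numpy as np

def crop_masks(image, onh_xy, radius_pct):
    """Return (onh_crop, periphery_crop) for a crop radius of radius_pct
    percent of the image's larger side, centred on onh_xy = (x, y)."""
    h, w = image.shape[:2]
    radius = radius_pct / 100.0 * max(h, w)
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (xx - onh_xy[0]) ** 2 + (yy - onh_xy[1]) ** 2 <= radius ** 2
    onh_crop = image * inside[..., None]            # keep only the ONH region
    periphery_crop = image * (~inside)[..., None]   # keep only the periphery
    return onh_crop, periphery_crop

fundus = np.random.rand(512, 512, 3)                # stand-in for a fundus image
onh_only, periphery_only = crop_masks(fundus, onh_xy=(300, 256), radius_pct=30)
```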
Collapse
Affiliation(s)
- Ruben Hemelings
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium.
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium.
| | - Bart Elen
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
| | - João Barbosa-Breda
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Cardiovascular R&D Center, Faculty of Medicine of the University of Porto, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
- Department of Ophthalmology, Centro Hospitalar E Universitário São João, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
| | | | - Patrick De Boever
- Hasselt University, Agoralaan building D, 3590, Diepenbeek, Belgium
- Department of Biology, University of Antwerp, 2610, Wilrijk, Belgium
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
| | - Ingeborg Stalmans
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Ophthalmology Department, UZ Leuven, Herestraat 49, 3000, Leuven, Belgium
| |
Collapse
|
40
|
Chen JS, Coyner AS, Ostmo S, Sonmez K, Bajimaya S, Pradhan E, Valikodath N, Cole ED, Al-Khaled T, Chan RVP, Singh P, Kalpathy-Cramer J, Chiang MF, Campbell JP. Deep Learning for the Diagnosis of Stage in Retinopathy of Prematurity: Accuracy and Generalizability across Populations and Cameras. Ophthalmol Retina 2021; 5:1027-1035. [PMID: 33561545 PMCID: PMC8364291 DOI: 10.1016/j.oret.2020.12.013] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Revised: 12/02/2020] [Accepted: 12/16/2020] [Indexed: 12/23/2022]
Abstract
PURPOSE Stage is an important feature to identify in retinal images of infants at risk of retinopathy of prematurity (ROP). The purpose of this study was to implement a convolutional neural network (CNN) for binary detection of stages 1, 2, and 3 of ROP and to evaluate its generalizability across different populations and camera systems. DESIGN Diagnostic validation study of a CNN for stage detection. PARTICIPANTS Retinal fundus images obtained from preterm infants during routine ROP screenings. METHODS Two datasets were used: 5943 fundus images obtained with the RetCam camera (Natus Medical, Pleasanton, CA) from 9 North American institutions and 5049 images obtained with the 3nethra camera (Forus Health Incorporated, Bengaluru, India) from 4 hospitals in Nepal. Images were labeled for the presence of stage by 1 to 3 expert graders. Three CNN models were trained using 5-fold cross-validation on the North American dataset alone, the Nepali dataset alone, and a combined dataset, and were evaluated on 2 held-out test sets consisting of 708 and 247 images from the Nepali and North American datasets, respectively. MAIN OUTCOME MEASURES Convolutional neural network performance was evaluated using the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPRC), sensitivity, and specificity. RESULTS Both the North American- and Nepali-trained models demonstrated high performance on the test set from the same population (AUROC, 0.99; AUPRC, 0.98; sensitivity, 94%; and AUROC, 0.97; AUPRC, 0.91; sensitivity, 73%, respectively). However, the performance of each model decreased to an AUROC of 0.96 and AUPRC of 0.88 (sensitivity, 52%) and an AUROC of 0.62 and AUPRC of 0.36 (sensitivity, 44%) when evaluated on the test set from the other population. Compared with the models trained on the individual datasets, the model trained on the combined dataset achieved improved performance on each respective test set: sensitivity improved from 94% to 98% on the North American test set and from 73% to 82% on the Nepali test set. CONCLUSIONS A CNN can accurately identify the presence of ROP stage in retinal images, but performance depends on the similarity between the training and testing populations. We demonstrated that internal and external performance can be improved by increasing the heterogeneity of the training dataset, in this case by combining images from different populations and cameras.
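The cross-population evaluation pattern described above, scoring a trained binary "stage present" model on held-out test sets from each population and reporting AUROC, AUPRC, and sensitivity, can be sketched as follows; the model scores and labels are synthetic stand-ins, not the study's data.

```python
# Minimal sketch (not the study's code): evaluate a binary classifier on two
# held-out test sets and report AUROC, AUPRC, and sensitivity. Labels and
# scores are random placeholders for the RetCam and 3nethra test sets.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate(y_true, y_score, threshold=0.5):
    y_pred = (y_score >= threshold).astype(int)
    sensitivity = (y_pred[y_true == 1] == 1).mean()
    return {
        "AUROC": roc_auc_score(y_true, y_score),
        "AUPRC": average_precision_score(y_true, y_score),
        "sensitivity": sensitivity,
    }

rng = np.random.default_rng(0)
for name, n in [("North American test set", 247), ("Nepali test set", 708)]:
    y_true = rng.integers(0, 2, size=n)                                # hypothetical labels
    y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.3, n), 0, 1)    # fake model scores
    print(name, evaluate(y_true, y_score))
```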
Collapse
Affiliation(s)
- Jimmy S Chen
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
| | - Aaron S Coyner
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
| | - Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
| | - Kemal Sonmez
- Cancer Early Detection Advanced Research Center, Knight Cancer Institute, Oregon Health & Science University, Portland, Oregon
| | | | - Eli Pradhan
- Tilganga Institute of Ophthalmology, Kathmandu, Nepal
| | - Nita Valikodath
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Emily D Cole
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Tala Al-Khaled
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - R V Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Praveer Singh
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
| | - Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
| | - Michael F Chiang
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
| | - J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon.
| |
Collapse
|
41
|
Yuen V, Ran A, Shi J, Sham K, Yang D, Chan VTT, Chan R, Yam JC, Tham CC, McKay GJ, Williams MA, Schmetterer L, Cheng CY, Mok V, Chen CL, Wong TY, Cheung CY. Deep-Learning-Based Pre-Diagnosis Assessment Module for Retinal Photographs: A Multicenter Study. Transl Vis Sci Technol 2021; 10:16. [PMID: 34524409 PMCID: PMC8444486 DOI: 10.1167/tvst.10.11.16] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Accepted: 08/12/2021] [Indexed: 12/23/2022] Open
Abstract
Purpose Artificial intelligence (AI) deep learning (DL) has shown significant potential for eye disease detection and screening on retinal photographs in different clinical settings, particularly in primary care. However, automated pre-diagnosis image assessment is essential to streamline the application of the developed AI-DL algorithms. In this study, we developed and validated a DL-based pre-diagnosis assessment module for retinal photographs, targeting image quality (gradable vs. ungradable), field of view (macula-centered vs. optic-disc-centered), and laterality of the eye (right vs. left). Methods A total of 21,348 retinal photographs from 1914 subjects from various clinical settings in Hong Kong, Singapore, and the United Kingdom were used for training, internal validation, and external testing of the DL module, built with two DL-based architectures (EfficientNet-B0 and MobileNet-V2). Results For image-quality assessment, the pre-diagnosis module achieved area under the receiver operating characteristic curve (AUROC) values of 0.975, 0.999, and 0.987 in the internal validation dataset and the two external testing datasets, respectively. For field-of-view assessment, the module had an AUROC value of 1.000 in all of the datasets. For laterality-of-the-eye assessment, the module had AUROC values of 1.000, 0.999, and 0.985 in the internal validation dataset and the two external testing datasets, respectively. Conclusions Our study showed that this three-in-one DL module for assessing image quality, field of view, and laterality of the eye of retinal photographs achieved excellent performance and generalizability across different centers and ethnicities. Translational Relevance The proposed DL-based pre-diagnosis module provides accurate and automated assessment of the image quality, field of view, and laterality of retinal photographs and could be integrated into AI-based models to improve operational flow for disease screening and diagnosis.
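One plausible way to implement a "three-in-one" pre-diagnosis module is a shared backbone with three binary heads; the sketch below uses MobileNetV2 from Keras applications, though the published module may instead train separate networks per task, and the head names and input size are assumptions.

```python
# Minimal sketch (not the published module): a MobileNetV2 backbone with three
# binary heads -- gradability, field of view, and laterality. Head names and
# input shape are assumptions made for illustration.
import tensorflow as tf

def build_pre_diagnosis_model(input_shape=(224, 224, 3)):
    backbone = tf.keras.applications.MobileNetV2(
        include_top=False, weights=None, input_shape=input_shape, pooling="avg")
    x = backbone.output
    outputs = {
        "gradable":       tf.keras.layers.Dense(1, activation="sigmoid", name="gradable")(x),
        "macula_centred": tf.keras.layers.Dense(1, activation="sigmoid", name="macula_centred")(x),
        "right_eye":      tf.keras.layers.Dense(1, activation="sigmoid", name="right_eye")(x),
    }
    model = tf.keras.Model(backbone.input, outputs)
    model.compile(optimizer="adam",
                  loss={k: "binary_crossentropy" for k in outputs},
                  metrics={k: [tf.keras.metrics.AUC(name="auroc")] for k in outputs})
    return model

model = build_pre_diagnosis_model()
model.summary()
```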
Collapse
Affiliation(s)
- Vincent Yuen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Anran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Jian Shi
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Kaiser Sham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Victor T. T. Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Raymond Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Jason C. Yam
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
| | - Clement C. Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
| | - Gareth J. McKay
- Center for Public Health, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
| | - Michael A. Williams
- Center for Medical Education, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
| | - Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
| | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
| | - Vincent Mok
- Gerald Choa Neuroscience Center, Therese Pei Fong Chow Research Center for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Hong Kong
| | - Christopher L. Chen
- Memory, Aging and Cognition Center, Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Tien Y. Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
| | - Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| |
Collapse
|
42
|
Romond K, Alam M, Kravets S, Sisternes LD, Leng T, Lim JI, Rubin D, Hallak JA. Imaging and artificial intelligence for progression of age-related macular degeneration. Exp Biol Med (Maywood) 2021; 246:2159-2169. [PMID: 34404252 DOI: 10.1177/15353702211031547] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Age-related macular degeneration (AMD) is a leading cause of severe vision loss. With our aging population, it may affect 288 million people globally by the year 2040. AMD progresses from early and intermediate dry forms to an advanced form, which manifests as choroidal neovascularization and geographic atrophy. Conversion to AMD-related exudation is known as progression to neovascular AMD, and presence of geographic atrophy is known as progression to advanced dry AMD. Predicting AMD progression could enable timely monitoring and earlier detection and treatment, improving vision outcomes. Machine learning approaches, a subset of artificial intelligence applications, applied to imaging data are showing promising results in predicting progression. Extracted biomarkers, particularly from optical coherence tomography scans, are informative for predicting progression events. The purpose of this mini review is to provide an overview of current machine learning applications for predicting AMD progression and to describe the various methods, data-input types, and imaging modalities used to identify high-risk patients. With advances in computational capabilities, artificial intelligence applications are likely to transform patient care and management in AMD. External validation studies that improve generalizability across populations and devices, as well as evaluations of systems in real-world clinical settings, are needed to improve the clinical translation of artificial intelligence AMD applications.
Collapse
Affiliation(s)
- Kathleen Romond
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
| | - Minhaj Alam
- Department of Biomedical Data Science, Stanford University, Stanford, CA 94304, USA
| | - Sasha Kravets
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA.,Division of Epidemiology and Biostatistics, School of Public Health, University of Illinois at Chicago, Chicago, IL 60612, USA
| | | | - Theodore Leng
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, CA 94303, USA
| | - Jennifer I Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
| | - Daniel Rubin
- Department of Biomedical Data Science, Stanford University, Stanford, CA 94304, USA
| | - Joelle A Hallak
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
| |
Collapse
|
43
|
Ruamviboonsuk P, Chantra S, Seresirikachorn K, Ruamviboonsuk V, Sangroongruangsri S. Economic Evaluations of Artificial Intelligence in Ophthalmology. Asia Pac J Ophthalmol (Phila) 2021; 10:307-316. [PMID: 34261102 DOI: 10.1097/apo.0000000000000403] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
ABSTRACT Artificial intelligence (AI) is expected to bring significant quality enhancements and cost savings to ophthalmology. Although there has been rapid growth in studies on AI in recent years, real-world adoption of AI remains rare. One reason may be that the data derived from economic evaluations of AI in health care, which policy makers use when adopting new technology, have been fragmented and scarce. Most data on the economics of AI in ophthalmology come from diabetic retinopathy (DR) screening. Few studies have classified the costs of AI software, which has been considered a medical device, as direct medical costs. These costs comprise initial and maintenance costs. The initial costs may include investment in research and development and costs for validation on different datasets, while the maintenance costs include algorithm upgrades and hardware maintenance over the long run. The cost of AI must be balanced between manufacturing price and reimbursement, since it may pose significant challenges and barriers to providers. Evidence from cost-effectiveness analyses shows that AI, either standalone or used alongside humans, was more cost-effective than manual DR screening. Notably, the economic evaluation of AI for DR screening can serve as a model for AI in other ophthalmic diseases.
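The cost decomposition described above (initial costs such as R&D and dataset validation versus recurring maintenance such as algorithm upgrades and hardware) can be illustrated with a toy amortisation calculation; all figures below are invented for illustration and do not come from the cited studies.

```python
# Minimal sketch: amortising an AI screening system's initial and maintenance
# costs over the number of screens performed. All numbers are illustrative.
def cost_per_screen(initial_cost, annual_maintenance, screens_per_year, years):
    total = initial_cost + annual_maintenance * years   # initial + recurring costs
    return total / (screens_per_year * years)           # spread over all screens

print(f"Cost per screen: ${cost_per_screen(500_000, 50_000, 20_000, 5):.2f}")
```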
Collapse
Affiliation(s)
- Paisan Ruamviboonsuk
- Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
| | - Somporn Chantra
- Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
| | - Kasem Seresirikachorn
- Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
| | - Varis Ruamviboonsuk
- Department of Biochemistry, Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand
| | - Sermsiri Sangroongruangsri
- Social and Administrative Pharmacy Division, Department of Pharmacy, Faculty of Pharmacy, Mahidol University, Bangkok, Thailand
| |
Collapse
|
44
|
Lin AC, Lee CS, Blazes M, Lee AY, Gorin MB. Assessing the Clinical Utility of Expanded Macular OCTs Using Machine Learning. Transl Vis Sci Technol 2021; 10:32. [PMID: 34038502 PMCID: PMC8161701 DOI: 10.1167/tvst.10.6.32] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Purpose Optical coherence tomography (OCT) is widely used in the management of retinal pathologies, including age-related macular degeneration (AMD), diabetic macular edema (DME), and primary open-angle glaucoma (POAG). We used machine learning techniques to quantify the diagnostic performance gains from expanded macular OCT B-scan coverage compared with foveal-only OCT B-scans for these conditions. Methods Electronic medical records were extracted to obtain 61 B-scans per eye from patients with AMD, diabetic retinopathy, or POAG. We constructed deep neural networks and random forest ensembles and generated area under the receiver operating characteristic (AUROC) and area under the precision-recall (AUPR) curves. Results After extracting 630,000 OCT images, we achieved improved AUROC and AUPR curves when comparing the central image (one B-scan) to all images (61 B-scans). The points of diminishing return for diagnostic accuracy with respect to macular OCT coverage were found to be within 2.75 to 4.00 mm (14–19 B-scans), 4.25 to 4.50 mm (20–21 B-scans), and 4.50 to 6.25 mm (21–28 B-scans) for AMD, DME, and POAG, respectively. All models with >0.25 mm of coverage had statistically significantly improved AUROC/AUPR curves for all diseases (P < 0.05). Conclusions Systematically expanded macular coverage models demonstrated significant differences in the total macular coverage required for improved diagnostic accuracy, with the largest macular area being relevant in POAG, followed by DME and then AMD. These findings support our hypothesis that the extent of macular coverage by OCT imaging in the clinical setting, for any of the three major disorders, has a measurable impact on the functionality of artificial intelligence decision support. Translational Relevance We used machine learning techniques to improve OCT imaging standards for common retinal disease diagnoses.
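The coverage-versus-accuracy analysis can be sketched by sweeping how many central B-scans feed a classifier and tracking AUROC; the random forest example below uses random placeholder features, not the study's data or models.

```python
# Minimal sketch (not the authors' pipeline): sweep how many central B-scans
# feed a random forest and track AUROC, mimicking a coverage-vs-accuracy curve.
# Per-B-scan features and labels are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_eyes, n_scans, feat_dim = 600, 61, 16
X = rng.normal(size=(n_eyes, n_scans, feat_dim))        # fake per-B-scan features
y = rng.integers(0, 2, size=n_eyes)                     # fake disease labels
X[y == 1, :, 0] += 0.5                                  # weak planted signal

centre = n_scans // 2
for n_central in (1, 15, 31, 61):                       # expanding macular coverage
    half = n_central // 2
    sel = X[:, centre - half:centre + half + 1, :].reshape(n_eyes, -1)
    X_tr, X_te, y_tr, y_te = train_test_split(sel, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{n_central:2d} central B-scans: AUROC = {auc:.3f}")
```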
Collapse
Affiliation(s)
- Andrew C Lin
- Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA.,Department of Ophthalmology, New York University, New York, NY, USA
| | - Cecilia S Lee
- Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
| | - Marian Blazes
- Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
| | - Aaron Y Lee
- Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
| | - Michael B Gorin
- Department of Ophthalmology, University of California, Los Angeles, CA, USA
| |
Collapse
|
45
|
de Figueiredo LA, Dias JVP, Polati M, Carricondo PC, Debert I. Strabismus and Artificial Intelligence App: Optimizing Diagnostic and Accuracy. Transl Vis Sci Technol 2021; 10:22. [PMID: 34137838 PMCID: PMC8212438 DOI: 10.1167/tvst.10.7.22] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose Clinical evaluation of eye versions plays an important role in the diagnosis of special strabismus. Despite their importance, versions are not standardized in clinical practice because they are assessed subjectively. Assuming that objectivity confers accuracy, this research aims to create an artificial intelligence app that can classify eye versions across the nine positions of gaze. Methods We analyzed photos of 110 strabismus patients from an outpatient clinic of a tertiary hospital at the nine gazes. For each photo, the gaze was identified, and the corresponding version was rated by the same examiner during patient evaluation. Results The images were standardized using the OpenCV library in Python so that the patient's eyes were located and sent to a multilabel model built with the Keras framework, regardless of photo orientation. The model was then trained for each combination of the following groupings: eye (left, right), gaze (1 to 9), and version (-4 to 4). ResNet50 was used as the neural network architecture, and data augmentation was applied. For quick inference via web browser, the Streamlit app framework was employed; for use on mobile devices, the finished model was exported through the TensorFlow Lite converter. Conclusions The results showed that the mobile app may be used to complement the evaluation of ocular motility through objective classification of ocular versions. However, further exploratory research and validation are required. Translational Relevance Beyond the traditional clinical practice method, professionals will be able to rely on an easy-to-apply support app to increase diagnostic accuracy.
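A minimal sketch of the described stack, a ResNet50 classifier in Keras exported through the TensorFlow Lite converter for mobile use, is shown below; it models only one of the paper's label groupings (the nine gaze positions) as a simplification, and the file name is an assumption.

```python
# Minimal sketch (not the published app): a ResNet50 classifier for one label
# grouping (the 9 gaze positions), exported to TensorFlow Lite for mobile use.
# The paper describes a multilabel model over eye, gaze, and version labels.
import tensorflow as tf

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = tf.keras.applications.ResNet50(include_top=False, weights=None, pooling="avg")(x)
outputs = tf.keras.layers.Dense(9, activation="softmax")(x)   # 9 positions of gaze
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Data augmentation would normally be applied in the input pipeline, e.g.:
# ds = ds.map(lambda img, lbl: (tf.image.random_brightness(img, 0.1), lbl))

# Export for mobile inference through the TensorFlow Lite converter.
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("gaze_classifier.tflite", "wb") as f:    # hypothetical output file name
    f.write(tflite_bytes)
```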
Collapse
Affiliation(s)
| | | | - Mariza Polati
- Department of Strabismus, Hospital das Clínicas, University of Sao Paulo, Brazil
| | | | - Iara Debert
- Department of Strabismus, Hospital das Clínicas, University of Sao Paulo, Brazil
| |
Collapse
|
46
|
Bhuiyan A, Govindaiah A, Smith RT. An Artificial-Intelligence- and Telemedicine-Based Screening Tool to Identify Glaucoma Suspects from Color Fundus Imaging. J Ophthalmol 2021; 2021:6694784. [PMID: 34136281 PMCID: PMC8179760 DOI: 10.1155/2021/6694784] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2020] [Accepted: 05/11/2021] [Indexed: 10/26/2022] Open
Abstract
RESULTS The system achieved an accuracy of 89.67% (sensitivity, 83.33%; specificity, 93.89%; and AUC, 0.93). For external validation, the Retinal Fundus Image Database for Glaucoma Analysis dataset, which has 638 gradable quality images, was used. Here, the model achieved an accuracy of 83.54% (sensitivity, 80.11%; specificity, 84.96%; and AUC, 0.85). CONCLUSIONS Having demonstrated an accurate and fully automated glaucoma-suspect screening system that can be deployed on telemedicine platforms, we plan prospective trials to determine the feasibility of the system in primary-care settings.
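For reference, the reported accuracy, sensitivity, and specificity are simple functions of the confusion-matrix counts; the sketch below shows those relationships using made-up counts, not the study's data.

```python
# Minimal sketch: accuracy, sensitivity, and specificity of a binary
# glaucoma-suspect screen from confusion-matrix counts. Counts are invented.
def screening_metrics(tp, fp, tn, fn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate among suspects
        "specificity": tn / (tn + fp),   # true-negative rate among non-suspects
    }

print(screening_metrics(tp=150, fp=22, tn=338, fn=30))   # hypothetical counts
```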
Collapse
Affiliation(s)
- Alauddin Bhuiyan
- iHealthscreen Inc., New York, NY, USA
- New York Eye and Ear Infirmary, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | | | - R. Theodore Smith
- New York Eye and Ear Infirmary, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| |
Collapse
|
47
|
Next-Generation Sequencing Applications for Inherited Retinal Diseases. Int J Mol Sci 2021; 22:ijms22115684. [PMID: 34073611 PMCID: PMC8198572 DOI: 10.3390/ijms22115684] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 05/21/2021] [Accepted: 05/22/2021] [Indexed: 12/12/2022] Open
Abstract
Inherited retinal diseases (IRDs) represent a collection of phenotypically and genetically diverse conditions. IRD phenotypes can be isolated to the eye or can involve multiple tissues. These conditions are associated with diverse forms of inheritance, and variants within the same gene can often be associated with multiple distinct phenotypes. Such features of IRDs highlight the difficulty of establishing a genetic diagnosis in patients. Here we provide an overview of cutting-edge next-generation sequencing techniques and strategies currently in use to maximise the effectiveness of IRD gene screening. These techniques have helped researchers globally to find elusive causes of IRDs, including copy number variants, structural variants, new IRD genes and deep intronic variants, among others. Resolving a genetic diagnosis with thorough testing enables a more accurate diagnosis and a more informed prognosis, and should also provide information on inheritance patterns, which may be of particular interest to patients of child-bearing age. Given that IRDs are heritable conditions, genetic counselling may be offered to help inform family planning, carrier testing and prenatal screening. Additionally, a verified genetic diagnosis may enable access to appropriate clinical trials or approved medications that may be available for the condition.
Collapse
|
48
|
Cohen AB, Nahed BV. The Digital Neurologic Examination. Digit Biomark 2021; 5:114-126. [PMID: 34056521 DOI: 10.1159/000515577] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Accepted: 03/01/2021] [Indexed: 11/19/2022] Open
Abstract
Digital health has been rapidly thrust into the forefront of care delivery. Poised to extend the clinician's reach, a new set of examination tools will redefine neurologic and neurosurgical care, serving as the basis for the digital neurologic examination. We describe its components and review specific technologies, which move beyond traditional video-based telemedicine encounters and include separate digital tools. A future suite of these clinical assessment technologies will blur the lines between history taking, examination, and remote monitoring. Prior to full-scale implementation, however, much more investigation is needed. Because of the nascent state of the technologies, researchers, clinicians, and developers should establish digital neurologic examination requirements in order to maximize its impact.
Collapse
Affiliation(s)
- Adam B Cohen
- Department of Neurology, The Johns Hopkins Hospital, Baltimore, Maryland, USA.,Health Technologies, Army Medical Response, National Health Mission Area, The Johns Hopkins University Applied Physics Lab, Laurel, Maryland, USA
| | - Brian V Nahed
- Department of Neurosurgery, The Massachusetts General Hospital, Boston, Massachusetts, USA
| |
Collapse
|
49
|
Rim TH, Lee AY, Ting DS, Teo KYC, Yang HS, Kim H, Lee G, Teo ZL, Teo Wei Jun A, Takahashi K, Yoo TK, Kim SE, Yanagi Y, Cheng CY, Kim SS, Wong TY, Cheung CMG. Computer-aided detection and abnormality score for the outer retinal layer in optical coherence tomography. Br J Ophthalmol 2021; 106:1301-1307. [PMID: 33875452 DOI: 10.1136/bjophthalmol-2020-317817] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 02/20/2021] [Accepted: 03/17/2021] [Indexed: 01/20/2023]
Abstract
BACKGROUND To develop computer-aided detection (CADe) of outer retinal layer (ORL) abnormalities in the retinal pigmented epithelium, interdigitation zone and ellipsoid zone on optical coherence tomography (OCT). METHODS In this retrospective study, healthy participants with a normal ORL and patients with ORL abnormalities, including choroidal neovascularisation (CNV) or retinitis pigmentosa (RP), were included. First, an automatic segmentation deep learning (DL) algorithm, the CADe, was developed for the three outer retinal layers using 120 handcrafted ORL masks. This automatic segmentation algorithm generated 4000 segmentations, comprising 2000 images with a normal ORL and 2000 (1000 CNV and 1000 RP) images with focal or wide ORL defects. Second, based on the automatically generated segmentation images, a binary classifier (normal vs abnormal) was developed. Results were evaluated by the area under the receiver operating characteristic curve (AUC). RESULTS The DL algorithm achieved an AUC of 0.984 (95% CI 0.976 to 0.993) for individual image evaluation in the internal test set of 797 images. In addition, performance on a publicly available external test set (n=968) yielded an AUC of 0.957 (95% CI 0.944 to 0.970), and a second clinical external test set (n=1124) yielded an AUC of 0.978 (95% CI 0.970 to 0.986). Moreover, the CADe highlighted normal parts of the ORL well and omitted highlights in the abnormal ORLs of CNV and RP. CONCLUSION The CADe can use OCT images to segment the ORL and differentiate between normal and abnormal ORL. The CADe classifier also provides visualisation and may aid future physician diagnosis and clinical applications.
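The two-stage design above (segment the ORL, then classify normal versus abnormal from the segmentation) can be sketched in miniature; the small CNN below operates on placeholder masks and is not the published CADe.

```python
# Minimal sketch (not the published CADe): a small CNN that takes an ORL
# segmentation mask (as produced by a separate segmentation network) and
# outputs a normal-vs-abnormal probability, evaluated by AUC.
# Shapes, masks, and labels below are placeholders.
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score

def build_mask_classifier(input_shape=(128, 256, 1)):
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # P(abnormal ORL)
    ])

model = build_mask_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy")

masks = np.random.rand(64, 128, 256, 1).astype("float32")   # fake ORL masks
labels = np.random.randint(0, 2, size=64)                    # fake normal/abnormal labels
model.fit(masks, labels, epochs=1, batch_size=16, verbose=0)
print("AUC:", roc_auc_score(labels, model.predict(masks, verbose=0).ravel()))
```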
Collapse
Affiliation(s)
- Tyler Hyungtaek Rim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.,Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Aaron Yuntai Lee
- Department of Ophthalmology, University of Washington School of Medicine, Seattle, Washington, USA
| | - Daniel S Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.,Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Kelvin Yi Chong Teo
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.,Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Hee Seung Yang
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | | | | | - Zhen Ling Teo
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Alvin Teo Wei Jun
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Kengo Takahashi
- Department of Ophthalmology, Asahikawa Medical University, Hokkaido, Japan
| | - Tea Keun Yoo
- Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Seoul, Korea (the Republic of)
| | - Sung Eun Kim
- Department of Ophthalmology, CHA Bundang Medical Center, CHA University, Seongnam, South Korea
| | - Yasuo Yanagi
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.,Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore.,Department of Ophthalmology, Asahikawa Medical University, Hokkaido, Japan
| | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.,Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Sung Soo Kim
- Department of Ophthalmology, Institute of Vision Research, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.,Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Chui Ming Gemmy Cheung
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore .,Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| |
Collapse
|
50
|
Gong D, Kras A, Miller JB. Application of Deep Learning for Diagnosing, Classifying, and Treating Age-Related Macular Degeneration. Semin Ophthalmol 2021; 36:198-204. [PMID: 33617390 DOI: 10.1080/08820538.2021.1889617] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
Age-related macular degeneration (AMD) affects nearly 200 million people and is the third leading cause of irreversible vision loss worldwide. Deep learning, a branch of artificial intelligence that can learn image recognition from pre-existing datasets, creates an opportunity for more accurate and efficient diagnosis, classification, and treatment of AMD at both the individual and population levels. Current algorithms based on fundus photography and optical coherence tomography imaging have already achieved diagnostic accuracy comparable to that of human graders. This accuracy can be further increased when deep learning algorithms are applied simultaneously to multiple diagnostic imaging modalities. Combined with advances in telemedicine and imaging technology, deep learning can enable larger populations of patients to be screened than would otherwise be possible and allow ophthalmologists to focus on those patients in need of treatment, thus reducing the number of patients with significant visual impairment from AMD.
Collapse
Affiliation(s)
- Dan Gong
- Department of Ophthalmology, Retina Service, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA,USA
| | - Ashley Kras
- Harvard Retinal Imaging Lab, Massachusetts Eye and Ear Infirmary, Boston, MA
| | - John B Miller
- Department of Ophthalmology, Retina Service, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA,USA.,Harvard Retinal Imaging Lab, Massachusetts Eye and Ear Infirmary, Boston, MA
| |
Collapse
|