1
Founti P, Stuart K, Nolan WP, Khawaja AP, Foster PJ. Screening Strategies and Methodologies. J Glaucoma 2024; 33:S15-S20. PMID: 39149948. DOI: 10.1097/ijg.0000000000002426.
Abstract
PRCIS While glaucoma is a leading cause of irreversible vision loss, it presents technical challenges in the design and implementation of screening. New technologies such as PRS and AI offer potential improvements in our ability to identify people at high risk of sight loss from glaucoma and may improve the viability of screening for this important disease. PURPOSE To review the current evidence and concepts around screening for glaucoma. METHODS/RESULTS A group of glaucoma-focused clinician scientists drew on knowledge and experience around glaucoma, its etiology, and the options for screening. Glaucoma is a chronic progressive optic neuropathy affecting around 76 million individuals worldwide and is the leading cause of irreversible blindness globally. Early stages of the disease are asymptomatic, meaning that a substantial proportion of cases remain undiagnosed. Early detection and timely intervention reduce the risk of glaucoma-related visual morbidity. However, imperfect tests and a relatively low prevalence currently limit the viability of population-based screening approaches. The diagnostic yield of opportunistic screening strategies, which rely on identifying disease during unrelated health care encounters (such as cataract clinics and diabetic retinopathy screening programs) and focus on older people and/or those with a family history, is hindered by large numbers of false-positive and false-negative results. Polygenic risk scores (PRS) offer personalized risk assessment for adult-onset glaucoma. In addition, artificial intelligence (AI) algorithms have shown impressive performance, comparable to that of human experts, in discriminating between potentially glaucomatous and non-glaucomatous eyes. These emerging technologies may offer a meaningful improvement in diagnostic yield in glaucoma screening. CONCLUSIONS While glaucoma is a leading cause of irreversible vision loss, it presents technical challenges in the design and implementation of screening. New technologies such as PRS and AI offer potential improvements in our ability to identify people at high risk of sight loss from glaucoma and may improve the viability of screening for this important disease.
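Conceptually, a PRS of the kind referenced above is a weighted sum of an individual's risk-allele dosages, with per-variant weights taken from GWAS effect sizes. The sketch below is a minimal illustration; the variant IDs and effect sizes are hypothetical, not taken from any published glaucoma score.

```python
# Minimal polygenic risk score (PRS) sketch: a weighted sum of risk-allele
# dosages (0, 1, or 2 copies per variant). The variant IDs and effect
# sizes below are hypothetical, not from any published glaucoma score.

GWAS_WEIGHTS = {            # variant ID -> per-allele effect size
    "rs0000001": 0.12,
    "rs0000002": -0.05,
    "rs0000003": 0.30,
}

def polygenic_risk_score(dosages):
    """Sum of effect size x allele dosage over the variants in the score;
    variants missing from `dosages` contribute zero."""
    return sum(w * dosages.get(rsid, 0) for rsid, w in GWAS_WEIGHTS.items())

# An individual carrying 2, 0, and 1 copies of the three risk alleles:
print(round(polygenic_risk_score(
    {"rs0000001": 2, "rs0000002": 0, "rs0000003": 1}), 2))
# 0.12*2 - 0.05*0 + 0.30*1 = 0.54
```

In practice such raw scores are standardized against a reference population before being interpreted as high or low risk.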
Affiliation(s)
- Kelsey Stuart
- Ocular Informatics Group, Population and Data Sciences Research Theme, University College London Institute of Ophthalmology
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology
- Winifred P Nolan
- Glaucoma Service, Moorfields Eye Hospital NHS Foundation Trust
- International Centre for Eye Health, London School of Hygiene and Tropical Medicine, London, UK
- Anthony P Khawaja
- Glaucoma Service, Moorfields Eye Hospital NHS Foundation Trust
- Ocular Informatics Group, Population and Data Sciences Research Theme, University College London Institute of Ophthalmology
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology
- Paul J Foster
- Glaucoma Service, Moorfields Eye Hospital NHS Foundation Trust
- Ocular Informatics Group, Population and Data Sciences Research Theme, University College London Institute of Ophthalmology
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology
2
Mahmoudinezhad G, Moghimi S, Cheng J, Ru L, Yang D, Agrawal K, Dixit R, Beheshtaein S, Du KH, Latif K, Gunasegaran G, Micheletti E, Nishida T, Kamalipour A, Walker E, Christopher M, Zangwill L, Vasconcelos N, Weinreb RN. Deep Learning Estimation of 10-2 Visual Field Map Based on Macular Optical Coherence Tomography Angiography Measurements. Am J Ophthalmol 2024; 257:187-200. PMID: 37734638. DOI: 10.1016/j.ajo.2023.09.014.
Abstract
PURPOSE To develop deep learning (DL) models estimating the central visual field (VF) from optical coherence tomography angiography (OCTA) vessel density (VD) measurements. DESIGN Development and validation of a deep learning model. METHODS A total of 1051 10-2 VF-OCTA pairs from healthy, glaucoma suspect, and glaucoma eyes were included. DL models were trained on en face macula VD images from OCTA to estimate 10-2 mean deviation (MD), pattern standard deviation (PSD), and 68 total deviation (TD) and pattern deviation (PD) values, and were compared with a linear regression (LR) model given the same input. Accuracy of the models was evaluated by calculating the average mean absolute error (MAE) and the R2 (squared Pearson correlation coefficient) between the estimated and actual VF values. RESULTS DL models predicting 10-2 MD achieved an R2 of 0.85 (95% confidence interval [CI], 0.74-0.92) and an MAE of 1.76 dB (95% CI, 1.39-2.17 dB), significantly better than the mean linear estimates for 10-2 MD. The DL model outperformed the LR model for the estimation of pointwise TD values, with an average MAE of 2.48 dB (95% CI, 1.99-3.02) and R2 of 0.69 (95% CI, 0.57-0.76) over all test points, and for the estimation of all sectors. CONCLUSIONS DL models enable the estimation of VF loss from OCTA images with high accuracy. Applying DL to OCTA images may enhance clinical decision making. It also may improve individualized patient care and risk stratification of patients who are at risk for central VF damage.
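The two accuracy metrics used in this study, MAE and R2 as the squared Pearson correlation of estimated versus actual values, can be sketched as follows. The visual-field values below are hypothetical; this illustrates the metrics, not the authors' code.

```python
# Accuracy metrics used above: mean absolute error (MAE) and R^2 taken as
# the squared Pearson correlation of estimated vs. actual values.
# The visual-field values below are hypothetical, for illustration only.

def mae(actual, predicted):
    """Mean absolute error between paired measurements (here, dB values)."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    """Squared Pearson correlation coefficient of paired values."""
    n = len(actual)
    ma, mp = sum(actual) / n, sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    var_a = sum((a - ma) ** 2 for a in actual)
    var_p = sum((p - mp) ** 2 for p in predicted)
    return cov ** 2 / (var_a * var_p)

actual    = [-1.2, -3.5, -0.8, -6.1, -2.0]   # hypothetical 10-2 MD values (dB)
predicted = [-1.5, -3.0, -1.1, -5.4, -2.6]   # hypothetical model estimates
print(round(mae(actual, predicted), 3))       # 0.48
print(round(r_squared(actual, predicted), 3))
```

Note that a high R2 defined this way only measures linear association; a systematically biased model can still score well, which is why MAE is reported alongside it.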
Affiliation(s)
- Golnoush Mahmoudinezhad
- From the Hamilton Glaucoma Center (G.M., S.M., K.H.D., K.L., G.G., E.M., T.N., A.K., E.W., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Sasan Moghimi
- From the Hamilton Glaucoma Center (G.M., S.M., K.H.D., K.L., G.G., E.M., T.N., A.K., E.W., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Jiacheng Cheng
- Department of Electrical and Computer Engineering (J.C., L.R., K.A., R.D., N.V.), University of California San Diego, La Jolla, California
- Liyang Ru
- Department of Electrical and Computer Engineering (J.C., L.R., K.A., R.D., N.V.), University of California San Diego, La Jolla, California
- Dongchen Yang
- Department of Computer Science and Engineering (D.Y.), University of California San Diego, La Jolla, California
- Kushagra Agrawal
- Department of Electrical and Computer Engineering (J.C., L.R., K.A., R.D., N.V.), University of California San Diego, La Jolla, California
- Rajeev Dixit
- Department of Electrical and Computer Engineering (J.C., L.R., K.A., R.D., N.V.), University of California San Diego, La Jolla, California
- Kelvin H Du
- From the Hamilton Glaucoma Center (G.M., S.M., K.H.D., K.L., G.G., E.M., T.N., A.K., E.W., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Kareem Latif
- From the Hamilton Glaucoma Center (G.M., S.M., K.H.D., K.L., G.G., E.M., T.N., A.K., E.W., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Gopikasree Gunasegaran
- From the Hamilton Glaucoma Center (G.M., S.M., K.H.D., K.L., G.G., E.M., T.N., A.K., E.W., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Eleonora Micheletti
- From the Hamilton Glaucoma Center (G.M., S.M., K.H.D., K.L., G.G., E.M., T.N., A.K., E.W., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Takashi Nishida
- From the Hamilton Glaucoma Center (G.M., S.M., K.H.D., K.L., G.G., E.M., T.N., A.K., E.W., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Alireza Kamalipour
- From the Hamilton Glaucoma Center (G.M., S.M., K.H.D., K.L., G.G., E.M., T.N., A.K., E.W., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Evan Walker
- From the Hamilton Glaucoma Center (G.M., S.M., K.H.D., K.L., G.G., E.M., T.N., A.K., E.W., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Mark Christopher
- From the Hamilton Glaucoma Center (G.M., S.M., K.H.D., K.L., G.G., E.M., T.N., A.K., E.W., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Linda Zangwill
- From the Hamilton Glaucoma Center (G.M., S.M., K.H.D., K.L., G.G., E.M., T.N., A.K., E.W., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Nuno Vasconcelos
- Department of Electrical and Computer Engineering (J.C., L.R., K.A., R.D., N.V.), University of California San Diego, La Jolla, California
- Robert N Weinreb
- From the Hamilton Glaucoma Center (G.M., S.M., K.H.D., K.L., G.G., E.M., T.N., A.K., E.W., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California.
3
Huang X, Islam MR, Akter S, Ahmed F, Kazami E, Serhan HA, Abd-Alrazaq A, Yousefi S. Artificial intelligence in glaucoma: opportunities, challenges, and future directions. Biomed Eng Online 2023; 22:126. PMID: 38102597. PMCID: PMC10725017. DOI: 10.1186/s12938-023-01187-8.
Abstract
Artificial intelligence (AI) has shown excellent diagnostic performance in detecting various complex problems related to many areas of healthcare, including ophthalmology. AI diagnostic systems developed from fundus images have become state-of-the-art tools in diagnosing retinal conditions and glaucoma as well as other ocular diseases. However, designing and implementing AI models using large imaging datasets is challenging. In this study, we review different machine learning (ML) and deep learning (DL) techniques applied to multiple modalities of retinal data, such as fundus images and visual fields, for glaucoma detection, progression assessment, and staging. We summarize findings and provide several taxonomies to help the reader understand the evolution of conventional and emerging AI models in glaucoma. We discuss opportunities and challenges facing AI application in glaucoma and highlight key themes from the existing literature that may help guide future studies. Our goal in this systematic review is to help readers and researchers understand critical aspects of AI related to glaucoma and determine the necessary steps and requirements for the successful development of AI models in glaucoma.
Affiliation(s)
- Xiaoqin Huang
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
- Md Rafiqul Islam
- Business Information Systems, Australian Institute of Higher Education, Sydney, Australia
- Shanjita Akter
- School of Computer Science, Taylors University, Subang Jaya, Malaysia
- Fuad Ahmed
- Department of Computer Science & Engineering, Islamic University of Technology (IUT), Gazipur, Bangladesh
- Ehsan Kazami
- Ophthalmology, General Hospital of Mahabad, Urmia University of Medical Sciences, Urmia, Iran
- Hashem Abu Serhan
- Department of Ophthalmology, Hamad Medical Corporations, Doha, Qatar
- Alaa Abd-Alrazaq
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha, Qatar
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, USA
4
Crincoli E, Sacconi R, Querques G. Reshaping the use of Artificial Intelligence in Ophthalmology: Sometimes you Need to go Backwards. Retina 2023; 43:1429-1432. PMID: 37343295. DOI: 10.1097/iae.0000000000003878.
Affiliation(s)
- Emanuele Crincoli
- Department of Ophthalmology, University Vita-Salute, IRCCS San Raffaele Scientific Institute, Milan, Italy
5
Nagasato D, Sogawa T, Tanabe M, Tabuchi H, Numa S, Oishi A, Ohashi Ikeda H, Tsujikawa A, Maeda T, Takahashi M, Ito N, Miura G, Shinohara T, Egawa M, Mitamura Y. Estimation of Visual Function Using Deep Learning From Ultra-Widefield Fundus Images of Eyes With Retinitis Pigmentosa. JAMA Ophthalmol 2023; 141:305-313. PMID: 36821134. PMCID: PMC9951103. DOI: 10.1001/jamaophthalmol.2022.6393.
Abstract
Importance There is no widespread effective treatment to halt the progression of retinitis pigmentosa. Consequently, adequate assessment and estimation of residual visual function are important clinically. Objective To examine whether deep learning can accurately estimate the visual function of patients with retinitis pigmentosa by using ultra-widefield fundus images obtained on concurrent visits. Design, Setting, and Participants Data for this multicenter, retrospective, cross-sectional study were collected between January 1, 2012, and December 31, 2018. This study included 695 consecutive patients with retinitis pigmentosa who were examined at 5 institutions. Each of the 3 types of input images-ultra-widefield pseudocolor images, ultra-widefield fundus autofluorescence images, and both ultra-widefield pseudocolor and fundus autofluorescence images-was paired with 1 of the 31 types of ensemble models constructed from 5 deep learning models (Visual Geometry Group-16, Residual Network-50, InceptionV3, DenseNet121, and EfficientNetB0). We used 848, 212, and 214 images for the training, validation, and testing data, respectively. All data from 1 institution were used for the independent testing data. Data analysis was performed from June 7, 2021, to December 5, 2022. Main Outcomes and Measures The mean deviation on the Humphrey field analyzer, central retinal sensitivity, and best-corrected visual acuity were estimated. The image type-ensemble model combination that yielded the smallest mean absolute error was defined as the model with the best estimation accuracy. After removal of the bias of including both eyes with the generalized linear mixed model, correlations between the actual values of the testing data and the estimated values by the best accuracy model were examined by calculating standardized regression coefficients and P values. Results The study included 1274 eyes of 695 patients. 
A total of 385 patients were female (55.4%), and the mean (SD) age was 53.9 (17.2) years. Among the 3 types of images, the model using ultra-widefield fundus autofluorescence images alone provided the best estimation accuracy for mean deviation, central sensitivity, and visual acuity. Standardized regression coefficients were 0.684 (95% CI, 0.567-0.802) for the mean deviation estimation, 0.697 (95% CI, 0.590-0.804) for the central sensitivity estimation, and 0.309 (95% CI, 0.187-0.430) for the visual acuity estimation (all P < .001). Conclusions and Relevance Results of this study suggest that the visual function estimation in patients with retinitis pigmentosa from ultra-widefield fundus autofluorescence images using deep learning might help assess disease progression objectively. Findings also suggest that deep learning models might monitor the progression of retinitis pigmentosa efficiently during follow-up.
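For a single predictor, a standardized regression coefficient of the kind reported above is the OLS slope after z-scoring both variables, which equals the Pearson correlation. The sketch below uses hypothetical values; the paper's generalized linear mixed model adjustment for inter-eye correlation is not reproduced here.

```python
# Single-predictor sketch of a standardized regression coefficient:
# z-score both variables, then take the OLS slope (equal to Pearson's r).
# Values are hypothetical; the paper's mixed-model adjustment for
# inter-eye correlation is not reproduced here.

def standardize(xs):
    """z-score a sequence (zero mean, unit population standard deviation)."""
    n = len(xs)
    m = sum(xs) / n
    sd = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    return [(x - m) / sd for x in xs]

def standardized_slope(x, y):
    """OLS slope of z-scored y on z-scored x; for a single predictor this
    equals the Pearson correlation coefficient."""
    zx, zy = standardize(x), standardize(y)
    return sum(a * b for a, b in zip(zx, zy)) / sum(a * a for a in zx)

estimated = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical model estimates
measured  = [1.2, 1.9, 3.4, 3.9, 5.1]   # hypothetical measured values
print(round(standardized_slope(estimated, measured), 3))
```

Because both variables are put on the same (unitless) scale, coefficients for mean deviation, central sensitivity, and visual acuity become directly comparable, as in the abstract.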
Affiliation(s)
- Daisuke Nagasato
- Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan; Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Takahiro Sogawa
- Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan
- Mao Tanabe
- Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan
- Hitoshi Tabuchi
- Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan; Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Shogo Numa
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Akio Oishi
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan; Department of Ophthalmology and Visual Sciences, Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki, Japan
- Hanako Ohashi Ikeda
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Akitaka Tsujikawa
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Tadao Maeda
- Research Center, Kobe City Eye Hospital, Kobe, Japan; Laboratory for Retinal Regeneration, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan
- Masayo Takahashi
- Research Center, Kobe City Eye Hospital, Kobe, Japan; Laboratory for Retinal Regeneration, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan; Vision Care Inc, Kobe, Japan
- Nana Ito
- Department of Ophthalmology and Visual Science, Chiba University Graduate School of Medicine, Chiba, Japan
- Gen Miura
- Department of Ophthalmology and Visual Science, Chiba University Graduate School of Medicine, Chiba, Japan
- Terumi Shinohara
- Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
- Mariko Egawa
- Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
- Yoshinori Mitamura
- Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
6
Superpixel-Based Optic Nerve Head Segmentation Method of Fundus Images for Glaucoma Assessment. Diagnostics (Basel) 2022; 12:3210. PMID: 36553217. PMCID: PMC9777478. DOI: 10.3390/diagnostics12123210.
Abstract
Glaucoma is the second leading cause of blindness in the world. This progressive ocular neuropathy is mainly caused by uncontrolled high intraocular pressure. Although there is still no cure, early detection and appropriate treatment can stop the disease's progression to low vision and blindness. In clinical practice, the gold standard used by ophthalmologists for glaucoma diagnosis is fundus retinal imaging, in particular subjective/manual examination of the optic nerve head (ONH). In this work, we propose an unsupervised superpixel-based method for ONH segmentation. An automatic algorithm based on linear iterative clustering is used to compute an ellipse fit for automatic detection of the ONH contour. The tool has been tested using a public retinal fundus image dataset with medical-expert ground truths of the ONH contour and validated with a classified (control vs. glaucoma eyes) database. Results showed that the automatic segmentation method provides ellipse fits of the ONH similar to those obtained from the expert ground truths, within the statistical range of inter-observer variability. Our method is available as a user-friendly program that provides fast and reliable results for clinicians working on glaucoma screening with retinal fundus images.
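The pipeline above ends by fitting an ellipse to the segmented ONH region. The sketch below is not the authors' superpixel method; it is a minimal, moments-based ellipse estimate from a hypothetical binary mask, using the fact that for a filled ellipse the eigenvalues of the pixel-coordinate covariance equal the squared semi-axes divided by four.

```python
# Moments-based ellipse estimate for a segmented region (e.g. an ONH mask).
# For a filled ellipse, the eigenvalues of the coordinate covariance matrix
# equal (semi-axis)^2 / 4, so semi-axes are recovered as 2*sqrt(eigenvalue).
# The disc mask below is hypothetical; this is not the paper's pipeline.

def ellipse_from_mask(mask):
    """Return ((cx, cy), major, minor): centre and semi-axis lengths
    estimated from first- and second-order moments of the foreground."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    n = len(pts)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    mxx = sum((x - cx) ** 2 for x, _ in pts) / n   # central second moments
    myy = sum((y - cy) ** 2 for _, y in pts) / n
    mxy = sum((x - cx) * (y - cy) for x, y in pts) / n
    # Eigenvalues of [[mxx, mxy], [mxy, myy]] via the 2x2 closed form.
    t = (mxx + myy) / 2
    d = (((mxx - myy) / 2) ** 2 + mxy ** 2) ** 0.5
    return (cx, cy), 2 * (t + d) ** 0.5, 2 * (t - d) ** 0.5

# A filled disc of radius 10 should yield roughly equal semi-axes of ~10.
disc = [[1 if (x - 12) ** 2 + (y - 12) ** 2 <= 100 else 0 for x in range(25)]
        for y in range(25)]
centre, major, minor = ellipse_from_mask(disc)
print(centre, round(major, 2), round(minor, 2))
```

A moments-based fit is robust to small contour noise because every foreground pixel contributes, which is one reason ellipse parameterizations are popular for ONH shape description.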
7
Detecting multiple retinal diseases in ultra-widefield fundus imaging and data-driven identification of informative regions with deep learning. Nat Mach Intell 2022. DOI: 10.1038/s42256-022-00566-5.
8
Martins TGDS, Schor P, Mendes LGA, Fowler S, Silva R. Use of artificial intelligence in ophthalmology: a narrative review. Sao Paulo Med J 2022; 140:837-845. PMID: 36043665. PMCID: PMC9671570. DOI: 10.1590/1516-3180.2021.0713.r1.22022022.
Abstract
BACKGROUND Artificial intelligence (AI) deals with the development of algorithms that seek to perceive their environment and perform actions that maximize the chance of successfully reaching predetermined goals. OBJECTIVE To provide an overview of the basic principles of AI and its main studies in the fields of glaucoma, retinopathy of prematurity, age-related macular degeneration, and diabetic retinopathy. From this perspective, the limitations and potential challenges that have accompanied the implementation and development of this new technology within ophthalmology are presented. DESIGN AND SETTING Narrative review developed by a research group at the Universidade Federal de São Paulo (UNIFESP), São Paulo (SP), Brazil. METHODS We searched the literature on the main applications of AI within ophthalmology, using the keywords "artificial intelligence", "diabetic retinopathy", "macular degeneration age-related", "glaucoma" and "retinopathy of prematurity," covering the period from January 1, 2007, to May 3, 2021. We used the MEDLINE database (via PubMed) and the LILACS database (via Virtual Health Library) to identify relevant articles. RESULTS We retrieved 457 references, of which 47 were considered eligible for intensive review and critical analysis. CONCLUSION Use of technology, as embodied in AI algorithms, is a way of providing an increasingly accurate service and enhancing scientific research. It complements and adds innovation to the daily skills of ophthalmologists. Thus, AI adds technology to human expertise.
Affiliation(s)
- Thiago Gonçalves dos Santos Martins
- MD, PhD. Researcher, Department of Ophthalmology, Universidade Federal de São Paulo (UNIFESP), São Paulo (SP), Brazil; Research Fellow, Department of Ophthalmology, Ludwig Maximilians University (LMU), Munich, Germany; and Doctoral Student, University of Coimbra (UC), Coimbra, Portugal
- Paulo Schor
- PhD. Professor, Department of Ophthalmology, Universidade Federal de São Paulo (UNIFESP), São Paulo (SP), Brazil
- Susan Fowler
- RN, PhD. Certified Neuroscience Registered Nurse (CNRN) and Research Fellow of American Heart Association, Department of Ophthalmology, Orlando Health, Orlando, United States; Researcher, Department of Ophthalmology, Walden University, Minneapolis (MN), United States; and Researcher, Department of Ophthalmology, Thomas Edison State University (TESU), Trenton (NJ), United States
- Rufino Silva
- MD, PhD. Fellow of the European Board of Ophthalmology and Professor, Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, University of Coimbra, Coimbra, Portugal; Fellow, Department of Ophthalmology, Centro Hospitalar e Universitário de Coimbra (CHUC), Coimbra, Portugal; and Researcher, Association for Innovation and Biomedical Research on Light and Image (AIBILI), Coimbra, Portugal
9
Practical Application of Artificial Intelligence Technology in Glaucoma Diagnosis. J Ophthalmol 2022; 2022:5212128. PMID: 35957747. PMCID: PMC9357716. DOI: 10.1155/2022/5212128.
Abstract
Purpose. By comparing the performance of different models between artificial intelligence (AI) and doctors, we aim to evaluate and identify the optimal model for future usage of AI. Methods. A total of 500 fundus images of glaucoma and 500 fundus images of normal eyes were collected and randomly divided into five groups, with each group corresponding to one round. The AI system provided diagnostic suggestions for each image. Four doctors provided diagnoses without the assistance of the AI in the first round and with the assistance of the AI in the second and third rounds. In the fourth round, doctor B and doctor D made diagnoses with the help of the AI and the other two doctors without it. In the last round, doctor A and doctor B made diagnoses with the help of the AI and the other two doctors without it. Results. Doctor A, doctor B, and doctor D diagnosed glaucoma more accurately with the assistance of the AI in the second and third rounds than in the first round. The accuracy of at least one doctor was higher than that of the AI in the second and third rounds, although these differences did not reach statistical significance. The overall accuracy and sensitivity of the four doctors as a whole were significantly improved in the second and third rounds. Conclusions. This “Doctor + AI” model can clarify the roles of doctors and AI in medical responsibility and ensure the safety of patients, and importantly, this model shows great potential and application prospects.
10
Bhambra N, Antaki F, Malt FE, Xu A, Duval R. Deep learning for ultra-widefield imaging: a scoping review. Graefes Arch Clin Exp Ophthalmol 2022; 260:3737-3778. PMID: 35857087. DOI: 10.1007/s00417-022-05741-3.
Abstract
PURPOSE This article is a scoping review of published and peer-reviewed articles using deep-learning (DL) applied to ultra-widefield (UWF) imaging. This study provides an overview of the published uses of DL and UWF imaging for the detection of ophthalmic and systemic diseases, generative image synthesis, quality assessment of images, and segmentation and localization of ophthalmic image features. METHODS A literature search was performed up to August 31st, 2021 using PubMed, Embase, Cochrane Library, and Google Scholar. The inclusion criteria were as follows: (1) deep learning, (2) ultra-widefield imaging. The exclusion criteria were as follows: (1) articles published in any language other than English, (2) articles not peer-reviewed (usually preprints), (3) no full-text availability, (4) articles using machine learning algorithms other than deep learning. No study design was excluded from consideration. RESULTS A total of 36 studies were included. Twenty-three studies discussed ophthalmic disease detection and classification, 5 discussed segmentation and localization of ultra-widefield images (UWFIs), 3 discussed generative image synthesis, 3 discussed ophthalmic image quality assessment, and 2 discussed detecting systemic diseases via UWF imaging. CONCLUSION The application of DL to UWF imaging has demonstrated significant effectiveness in the diagnosis and detection of ophthalmic diseases including diabetic retinopathy, retinal detachment, and glaucoma. DL has also been applied in the generation of synthetic ophthalmic images. This scoping review highlights and discusses the current uses of DL with UWF imaging, and the future of DL applications in this field.
Affiliation(s)
- Nishaant Bhambra
- Faculty of Medicine, McGill University, Montréal, Québec, Canada
- Fares Antaki
- Department of Ophthalmology, Université de Montréal, Montréal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de L'Est-de-L'Île-de-Montréal, 5415 Assumption Blvd, Montréal, Québec, H1T 2M4, Canada
- Farida El Malt
- Faculty of Medicine, McGill University, Montréal, Québec, Canada
- AnQi Xu
- Faculty of Medicine, Université de Montréal, Montréal, Québec, Canada
- Renaud Duval
- Department of Ophthalmology, Université de Montréal, Montréal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de L'Est-de-L'Île-de-Montréal, 5415 Assumption Blvd, Montréal, Québec, H1T 2M4, Canada
11
Atalay E, Özalp O, Devecioğlu ÖC, Erdoğan H, İnce T, Yıldırım N. Investigation of the Role of Convolutional Neural Network Architectures in the Diagnosis of Glaucoma using Color Fundus Photography. Turk J Ophthalmol 2022; 52:193-200. PMID: 35770344. PMCID: PMC9249112. DOI: 10.4274/tjo.galenos.2021.29726.
Abstract
Objectives: To evaluate the performance of convolutional neural network (CNN) architectures in distinguishing eyes with glaucoma from normal eyes. Materials and Methods: A total of 9,950 fundus photographs of 5,388 patients from the database of Eskişehir Osmangazi University Faculty of Medicine Ophthalmology Clinic were labelled as glaucoma, glaucoma suspect, or normal by three experienced ophthalmologists. The categorized fundus photographs were evaluated using a state-of-the-art two-dimensional CNN and compared with deep residual networks (ResNet) and very deep neural networks (VGG). The accuracy, sensitivity, and specificity of glaucoma detection with the different algorithms were evaluated using a dataset of 238 normal and 320 glaucomatous fundus photographs. For the detection of suspected glaucoma, ResNet-101 architectures were tested with a dataset of 170 normal, 170 glaucoma, and 167 glaucoma-suspect fundus photographs. Results: Accuracy, sensitivity, and specificity in detecting glaucoma were 96.2%, 99.5%, and 93.7% with ResNet-50; 97.4%, 97.8%, and 97.1% with ResNet-101; 98.9%, 100%, and 98.1% with VGG-19; and 99.4%, 100%, and 99% with the 2D CNN, respectively. Accuracy, sensitivity, and specificity values in distinguishing glaucoma suspects from normal eyes were 62%, 68%, and 56%, and those for differentiating glaucoma from suspected glaucoma were 92%, 81%, and 97%, respectively. While 55 photographs could be evaluated in 2 seconds with the CNN, a clinician spent an average of 24.2 seconds evaluating a single photograph. Conclusion: An appropriately designed and trained CNN was able to distinguish glaucoma with high accuracy even with a small number of fundus photographs.
|
12
|
Tabuchi H. Understanding required to consider AI applications to the field of ophthalmology. Taiwan J Ophthalmol 2022; 12:123-129. [PMID: 35813809] [PMCID: PMC9262026] [DOI: 10.4103/tjo.tjo_8_22]
Abstract
Applications of artificial intelligence (AI) technology, especially deep learning, in ophthalmology research started with the diagnosis of diabetic retinopathy and have now expanded to all areas of ophthalmology, mainly the identification of fundus diseases such as glaucoma and age-related macular degeneration. In addition to fundus photography, optical coherence tomography is often used as an imaging device. Beyond simple binary classification, region identification (segmentation models) is used for interpretability, and AI has also been applied to regression estimation, which differs from diagnostic classification. While expectations for deep learning AI are rising, regulatory agencies have begun issuing guidance on the medical applications of AI. The reason behind this trend is that a number of existing issues need to be considered, including, but not limited to, the handling of personal information by large technology companies, the black-box issue, the flaming issue, the theory of responsibility, and issues related to improving the performance of commercially available AI. Furthermore, researchers have reported many issues that cannot be solved by the high performance of AI models alone, such as educating users and securing the communication environment, which are necessary steps toward the actual implementation of AI in society. Multifaceted perspectives and efforts are needed to create better ophthalmology care through AI.
|
13
|
Wang Z, Keane PA, Chiang M, Cheung CY, Wong TY, Ting DSW. Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_200]
|
14
|
Tabuchi H, Nagasato D, Masumoto H, Tanabe M, Ishitobi N, Ochi H, Shimizu Y, Kiuchi Y. Developing an iOS application that uses machine learning for the automated diagnosis of blepharoptosis. Graefes Arch Clin Exp Ophthalmol 2021; 260:1329-1335. [PMID: 34734349] [DOI: 10.1007/s00417-021-05475-8]
Abstract
PURPOSE To assess the performance of artificial intelligence in the automated classification of images, taken with a tablet device, of patients with blepharoptosis and subjects with normal eyelids. METHODS This is a prospective and observational study. A total of 1276 eyelid images (624 images from 347 blepharoptosis cases and 652 images from 367 normal controls) from 606 participants were analyzed. In order to obtain a sufficient number of images for analysis, 1 to 4 eyelid images were obtained from each participant. We developed a model by fully retraining the pre-trained MobileNetV2 convolutional neural network. Subsequently, we verified whether the automatic diagnosis of blepharoptosis was possible using the images. In addition, we visualized how the model captured the features of the test data with Score-CAM. k-fold cross-validation (k = 5) was adopted for splitting the training and validation sets. Sensitivity, specificity, and the area under the curve (AUC) of the receiver operating characteristic curve for detecting blepharoptosis were examined. RESULTS We found the model had a sensitivity of 83.0% (95% confidence interval [CI], 79.8-85.9) and a specificity of 82.5% (95% CI, 79.4-85.4). The accuracy of the validation data was 82.8%, and the AUC was 0.900 (95% CI, 0.882-0.917). CONCLUSION Artificial intelligence was able to classify with high accuracy images of blepharoptosis and normal eyelids taken using a tablet device. Thus, the diagnosis of blepharoptosis with a tablet device is possible at a high level of accuracy. TRIAL REGISTRATION Date of registration: 2021-06-25. TRIAL REGISTRATION NUMBER UMIN000044660. Registration site: https://upload.umin.ac.jp/cgi-open-bin/ctr/ctr_view.cgi?recptno=R000051004.
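The k-fold cross-validation (k = 5) used above partitions the images so each image serves exactly once as validation data; a schematic of contiguous fold splitting in plain Python (a generic sketch, not the study's pipeline):

```python
def kfold_splits(n, k=5):
    """Return k (train_idx, val_idx) pairs over n samples; each sample
    appears in exactly one validation fold, as in 5-fold cross-validation."""
    # Distribute n samples as evenly as possible across k folds.
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        splits.append((train, val))
        start += size
    return splits
```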
Affiliation(s)
- Hitoshi Tabuchi
- Department of Technology and Design Thinking for Medicine, Hiroshima University, Hiroshima, Japan; Department of Ophthalmology, Saneikai Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Daisuke Nagasato
- Department of Technology and Design Thinking for Medicine, Hiroshima University, Hiroshima, Japan; Department of Ophthalmology, Saneikai Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Hiroki Masumoto
- Department of Technology and Design Thinking for Medicine, Hiroshima University, Hiroshima, Japan; Department of Ophthalmology, Saneikai Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Mao Tanabe
- Department of Ophthalmology, Saneikai Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Naofumi Ishitobi
- Department of Technology and Design Thinking for Medicine, Hiroshima University, Hiroshima, Japan
- Hiroki Ochi
- Department of Medicine, Hiroshima University, Hiroshima, Japan
- Yoshie Shimizu
- Department of Ophthalmology, Saneikai Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Yoshiaki Kiuchi
- Department of Ophthalmology and Visual Sciences, Hiroshima University, Hiroshima, Japan
|
15
|
Christopher M, Bowd C, Proudfoot JA, Belghith A, Goldbaum MH, Rezapour J, Fazio MA, Girkin CA, De Moraes G, Liebmann JM, Weinreb RN, Zangwill LM. Deep Learning Estimation of 10-2 and 24-2 Visual Field Metrics Based on Thickness Maps from Macula OCT. Ophthalmology 2021; 128:1534-1548. [PMID: 33901527] [DOI: 10.1016/j.ophtha.2021.04.022]
Abstract
PURPOSE To develop deep learning (DL) systems estimating visual function from macula-centered spectral-domain (SD) OCT images. DESIGN Evaluation of a diagnostic technology. PARTICIPANTS A total of 2408 10-2 visual field (VF) SD OCT pairs and 2999 24-2 VF SD OCT pairs collected from 645 healthy and glaucoma subjects (1222 eyes). METHODS Deep learning models were trained on thickness maps from Spectralis macula SD OCT to estimate 10-2 and 24-2 VF mean deviation (MD) and pattern standard deviation (PSD). Individual and combined DL models were trained using thickness data from 6 layers (retinal nerve fiber layer [RNFL], ganglion cell layer [GCL], inner plexiform layer [IPL], ganglion cell-IPL [GCIPL], ganglion cell complex [GCC] and retina). Linear regression of mean layer thicknesses was used for comparison. MAIN OUTCOME MEASURES Deep learning models were evaluated using R2 and mean absolute error (MAE) compared with 10-2 and 24-2 VF measurements. RESULTS Combined DL models estimating 10-2 achieved R2 of 0.82 (95% confidence interval [CI], 0.68-0.89) for MD and 0.69 (95% CI, 0.55-0.81) for PSD and MAEs of 1.9 dB (95% CI, 1.6-2.4 dB) for MD and 1.5 dB (95% CI, 1.2-1.9 dB) for PSD. This was significantly better than mean thickness estimates for 10-2 MD (0.61 [95% CI, 0.47-0.71] and 3.0 dB [95% CI, 2.5-3.5 dB]) and 10-2 PSD (0.46 [95% CI, 0.31-0.60] and 2.3 dB [95% CI, 1.8-2.7 dB]). Combined DL models estimating 24-2 achieved R2 of 0.79 (95% CI, 0.72-0.84) for MD and 0.68 (95% CI, 0.53-0.79) for PSD and MAEs of 2.1 dB (95% CI, 1.8-2.5 dB) for MD and 1.5 dB (95% CI, 1.3-1.9 dB) for PSD. This was significantly better than mean thickness estimates for 24-2 MD (0.41 [95% CI, 0.26-0.57] and 3.4 dB [95% CI, 2.7-4.5 dB]) and 24-2 PSD (0.38 [95% CI, 0.20-0.57] and 2.4 dB [95% CI, 2.0-2.8 dB]). The GCIPL (R2 = 0.79) and GCC (R2 = 0.75) had the highest performance estimating 10-2 and 24-2 MD, respectively.
CONCLUSIONS Deep learning models improved estimates of functional loss from SD OCT imaging. Accurate estimates can help clinicians to individualize VF testing to patients.
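The R2 and mean absolute error (MAE) used above to compare the DL models with the linear-regression baseline are standard metrics computed from paired true and estimated values; a generic sketch (not the authors' implementation):

```python
def r2_and_mae(y_true, y_pred):
    """Coefficient of determination (R2) and mean absolute error (MAE)
    for paired true/predicted values; assumes y_true is not constant."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
    r2 = 1 - ss_res / ss_tot
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return r2, mae
```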
Affiliation(s)
- Mark Christopher
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Christopher Bowd
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- James A Proudfoot
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Akram Belghith
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Michael H Goldbaum
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Jasmin Rezapour
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California; Department of Ophthalmology, University Medical Center Mainz, Mainz, Germany
- Massimo A Fazio
- School of Medicine, University of Alabama-Birmingham, Birmingham, Alabama
- Gustavo De Moraes
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York
- Jeffrey M Liebmann
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York
- Robert N Weinreb
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Linda M Zangwill
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
|
16
|
Wong YL, Noor M, James KL, Aslam TM. Ophthalmology Going Greener: A Narrative Review. Ophthalmol Ther 2021; 10:845-857. [PMID: 34633635] [PMCID: PMC8502635] [DOI: 10.1007/s40123-021-00404-8]
Abstract
The combined effects of fossil fuel combustion, mass agricultural production and deforestation, industrialisation and the evolution of modern transport systems have resulted in high levels of carbon emissions and accumulation of greenhouse gases, causing profound climate change and ozone layer depletion. The consequential depletion of Earth's natural ecosystems and biodiversity is not only a devastating loss but a threat to human health. Sustainability, the ability to continue activities indefinitely, underpins the principal solutions to these problems. Globally, the healthcare sector is a major contributor to carbon emissions, with waste production and transport systems being amongst the highest contributing factors. The aim of this review is to explore modalities by which the healthcare sector, particularly ophthalmology, can reduce carbon emissions, related costs and overall environmental impact, whilst maintaining a high standard of patient care.
Affiliation(s)
- Yee Ling Wong
- Manchester Royal Eye Hospital, Manchester University NHS Foundation Trust, Manchester, UK
- Maha Noor
- Manchester University NHS Foundation Trust, Manchester, UK
- Katherine L James
- Manchester Royal Eye Hospital, Manchester University NHS Foundation Trust, Manchester, UK
- Tariq M Aslam
- Manchester Royal Eye Hospital, Manchester University NHS Foundation Trust, Manchester, UK; School of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
|
17
|
Liu TYA, Wei J, Zhu H, Subramanian PS, Myung D, Yi PH, Hui FK, Unberath M, Ting DSW, Miller NR. Detection of Optic Disc Abnormalities in Color Fundus Photographs Using Deep Learning. J Neuroophthalmol 2021; 41:368-374. [PMID: 34415271] [PMCID: PMC10637344] [DOI: 10.1097/wno.0000000000001358]
Abstract
BACKGROUND To date, deep learning-based detection of optic disc abnormalities in color fundus photographs has mostly been limited to the field of glaucoma. However, many life-threatening systemic and neurological conditions can manifest as optic disc abnormalities. In this study, we aimed to extend the application of deep learning (DL) in optic disc analyses to detect a spectrum of nonglaucomatous optic neuropathies. METHODS Using transfer learning, we trained a ResNet-152 deep convolutional neural network (DCNN) to distinguish between normal and abnormal optic discs in color fundus photographs (CFPs). Our training data set included 944 deidentified CFPs (abnormal 364; normal 580). Our testing data set included 151 deidentified CFPs (abnormal 71; normal 80). Both the training and testing data sets contained a wide range of optic disc abnormalities, including but not limited to ischemic optic neuropathy, atrophy, compressive optic neuropathy, hereditary optic neuropathy, hypoplasia, papilledema, and toxic optic neuropathy. The standard measures of performance (sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC)) were used for evaluation. RESULTS During the 10-fold cross-validation test, our DCNN for distinguishing between normal and abnormal optic discs achieved the following mean performance: AUC-ROC 0.99 (95% CI: 0.98-0.99), sensitivity 94% (95% CI: 91%-97%), and specificity 96% (95% CI: 93%-99%). When evaluated against the external testing data set, our model achieved the following mean performance: AUC-ROC 0.87, sensitivity 90%, and specificity 69%. CONCLUSION In summary, we have developed a deep learning algorithm that is capable of detecting a spectrum of optic disc abnormalities in color fundus photographs, with a focus on neuro-ophthalmological etiologies. As the next step, we plan to validate our algorithm prospectively as a focused screening tool in the emergency department, which if successful could be beneficial because current practice patterns and training predict a shortage of neuro-ophthalmologists and ophthalmologists in general in the near future.
Affiliation(s)
- T Y Alvin Liu
- Department of Ophthalmology (TYAL, NRM), Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland; Department of Biomedical Engineering (JW), Johns Hopkins University, Baltimore, Maryland; Malone Center for Engineering in Healthcare (HZ, MU), Johns Hopkins University, Baltimore, Maryland; Department of Radiology (PHY, FKH), Johns Hopkins University, Baltimore, Maryland; Singapore Eye Research Institute (DSWT), Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore; Department of Ophthalmology (PSS), University of Colorado School of Medicine, Aurora, Colorado; and Department of Ophthalmology (DM), Byers Eye Institute, Stanford University, Palo Alto, California
|
18
|
Nuzzi R, Boscia G, Marolo P, Ricardi F. The Impact of Artificial Intelligence and Deep Learning in Eye Diseases: A Review. Front Med (Lausanne) 2021; 8:710329. [PMID: 34527682] [PMCID: PMC8437147] [DOI: 10.3389/fmed.2021.710329]
Abstract
Artificial intelligence (AI) is a subset of computer science dealing with the development and training of algorithms that try to replicate human intelligence. We report a clinical overview of the basic principles of AI that are fundamental to appreciating its application to ophthalmology practice. Here, we review the most common eye diseases, focusing on some of the potential challenges and limitations emerging with the development and application of this new technology into ophthalmology.
Affiliation(s)
- Raffaele Nuzzi
- Ophthalmology Unit, A.O.U. City of Health and Science of Turin, Department of Surgical Sciences, University of Turin, Turin, Italy
|
19
|
Zhang W, Zhao X, Chen Y, Zhong J, Yi Z. DeepUWF: An Automated Ultra-Wide-Field Fundus Screening System via Deep Learning. IEEE J Biomed Health Inform 2021; 25:2988-2996. [PMID: 33361011] [DOI: 10.1109/jbhi.2020.3046771]
Abstract
The emerging ultra-wide-field (UWF) fundus color imaging is a powerful tool for fundus screening. However, manual screening is labor-intensive and subjective. Based on 2644 UWF images, an early fundus abnormality screening system named DeepUWF was developed. DeepUWF includes an abnormal-fundus screening subsystem and a diagnosis subsystem for three kinds of fundus disease (retinal tear and retinal detachment, diabetic retinopathy, and pathological myopia). The system is composed of a set of convolutional neural networks and two custom classifiers. However, the contrast of the UWF images used in the research is low, which seriously limits the extraction of fine image features by deep models; as a result, predictions with high specificity but low sensitivity have been a persistent problem. To address this, six kinds of image preprocessing techniques were adopted, and their effects on the predictive performance of the fundus abnormality model and the three disease models were studied. A variety of experimental indicators were used to evaluate the algorithms for validity and reliability. The experimental results show that these preprocessing methods help improve the learning ability of the networks and achieve good sensitivity and specificity. Without requiring ophthalmologists, DeepUWF has potential application value for fundus health screening and workflow improvement.
|
20
|
Glaucoma classification based on scanning laser ophthalmoscopic images using a deep learning ensemble method. PLoS One 2021; 16:e0252339. [PMID: 34086716] [PMCID: PMC8177489] [DOI: 10.1371/journal.pone.0252339]
Abstract
This study aimed to assess the utility of optic nerve head (ONH) en-face images, captured with scanning laser ophthalmoscopy (SLO) during standard optical coherence tomography (OCT) imaging of the posterior segment, and demonstrate the potential of a deep learning (DL) ensemble method that operates in a low-data regime to differentiate glaucoma patients from healthy controls. The two groups of subjects were initially categorized based on a range of clinical tests including measurements of intraocular pressure, visual fields, OCT-derived retinal nerve fiber layer (RNFL) thickness, and dilated stereoscopic examination of the ONH. 227 SLO images of 227 subjects (105 glaucoma patients and 122 controls) were used. A new task-specific convolutional neural network architecture was developed for SLO image-based classification. To benchmark the results of the proposed method, a range of classifiers were tested, including five machine learning methods to classify glaucoma based on RNFL thickness (a well-known biomarker in glaucoma diagnostics), an ensemble classifier based on the Inception V3 architecture, and classifiers based on features extracted from the image. The study shows that the cross-validated DL ensemble based on SLO images achieved good discrimination performance with a balanced accuracy of up to 0.962, outperforming all of the other tested classifiers.
|
21
|
Li JPO, Liu H, Ting DSJ, Jeon S, Chan RVP, Kim JE, Sim DA, Thomas PBM, Lin H, Chen Y, Sakomoto T, Loewenstein A, Lam DSC, Pasquale LR, Wong TY, Lam LA, Ting DSW. Digital technology, tele-medicine and artificial intelligence in ophthalmology: A global perspective. Prog Retin Eye Res 2021; 82:100900. [PMID: 32898686] [PMCID: PMC7474840] [DOI: 10.1016/j.preteyeres.2020.100900]
Abstract
The simultaneous maturation of multiple digital and telecommunications technologies in 2020 has created an unprecedented opportunity for ophthalmology to adapt to new models of care using tele-health supported by digital innovations. These digital innovations include artificial intelligence (AI), 5th generation (5G) telecommunication networks and the Internet of Things (IoT), creating an inter-dependent ecosystem offering opportunities to develop new models of eye care addressing the challenges of COVID-19 and beyond. Ophthalmology has thrived in some of these areas partly due to its many image-based investigations. Tele-health and AI provide synchronous solutions to challenges facing ophthalmologists and healthcare providers worldwide. This article reviews how countries across the world have utilised these digital innovations to tackle diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, glaucoma, refractive error correction, cataract and other anterior segment disorders. The review summarises the digital strategies that countries are developing and discusses technologies that may increasingly enter the clinical workflow and processes of ophthalmologists. Furthermore, as countries around the world have initiated a series of escalating containment and mitigation measures during the COVID-19 pandemic, the delivery of eye care services globally has been significantly impacted. As ophthalmic services adapt and form a "new normal", the rapid adoption of telehealth and digital innovations during the pandemic is also discussed. Finally, challenges for validation and clinical implementation are considered, as well as recommendations on future directions.
Affiliation(s)
- Ji-Peng Olivia Li
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Hanruo Liu
- Beijing Tongren Hospital; Capital Medical University; Beijing Institute of Ophthalmology; Beijing, China
- Darren S J Ting
- Academic Ophthalmology, University of Nottingham, United Kingdom
- Sohee Jeon
- Keye Eye Center, Seoul, Republic of Korea
- Judy E Kim
- Medical College of Wisconsin, Milwaukee, WI, USA
- Dawn A Sim
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Peter B M Thomas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Haotian Lin
- Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Guangzhou, China
- Youxin Chen
- Peking Union Medical College Hospital, Beijing, China
- Taiji Sakomoto
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Japan
- Dennis S C Lam
- C-MER Dennis Lam Eye Center, C-Mer International Eye Care Group Limited, Hong Kong, Hong Kong; International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China
- Louis R Pasquale
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, USA
- Tien Y Wong
- Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore
- Linda A Lam
- USC Roski Eye Institute, University of Southern California (USC) Keck School of Medicine, Los Angeles, CA, USA
- Daniel S W Ting
- Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore
|
22
|
Use of a Machine Learning Method in Predicting Refraction after Cataract Surgery. J Clin Med 2021; 10:1103. [PMID: 33800825] [PMCID: PMC7961666] [DOI: 10.3390/jcm10051103]
Abstract
The present study aims to describe the use of machine learning (ML) in predicting postoperative refraction after cataract surgery and compares the accuracy of this method to conventional intraocular lens (IOL) power calculation formulas. In total, 3331 eyes from 2010 patients were assessed. The eyes were divided into training data and test data. The constants for the IOL power calculation formulas and the model training for ML were optimized using the training data. Then, postoperative refraction was predicted with the conventional formulas and with the ML models using the test data. We evaluated the SRK/T, Haigis, Holladay 1, Hoffer Q, and Barrett Universal II (BU-II) formulas; as ML methods, we assessed support vector regression (SVR), random forest regression (RFR), gradient boosting regression (GBR), and a neural network (NN). Among the conventional formulas, BU-II had the lowest mean and median absolute errors of prediction. Therefore, we compared the accuracy of the ML methods with that of BU-II. The absolute errors of some ML methods were lower than those of BU-II. However, no statistically significant difference was observed. Thus, the accuracy of the ML approach was not inferior to that of BU-II.
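Ranking formulas and ML regressors by mean and median absolute prediction error, as in the comparison with BU-II above, reduces to a small tabulation; a plain-Python sketch (the method names and refraction values are illustrative, not study data):

```python
import statistics

def prediction_errors(y_true, predictions):
    """Mean and median absolute prediction error per method.
    'predictions' maps a method name (e.g. an IOL formula or an ML
    regressor) to its list of predicted refractions."""
    table = {}
    for name, y_pred in predictions.items():
        errs = [abs(t - p) for t, p in zip(y_true, y_pred)]
        table[name] = (statistics.mean(errs), statistics.median(errs))
    return table
```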
|
23
|
Imamura H, Tabuchi H, Nagasato D, Masumoto H, Baba H, Furukawa H, Maruoka S. Automatic screening of tear meniscus from lacrimal duct obstructions using anterior segment optical coherence tomography images by deep learning. Graefes Arch Clin Exp Ophthalmol 2021; 259:1569-1577. [PMID: 33576859] [DOI: 10.1007/s00417-021-05078-3]
Abstract
PURPOSE We assessed the ability of deep learning (DL) models to distinguish between the tear meniscus of lacrimal duct obstruction (LDO) patients and that of normal subjects using anterior segment optical coherence tomography (ASOCT) images. METHODS The study included 117 ASOCT images (19 men and 98 women; mean age, 66.6 ± 13.6 years) from 101 LDO patients and 113 ASOCT images (29 men and 84 women; mean age, 38.3 ± 19.9 years) from 71 normal subjects. We constructed 9 single and 502 ensemble DL models with 9 different network structures and calculated the area under the curve (AUC), sensitivity, and specificity to compare the distinguishing abilities of these single and ensemble DL models. RESULTS For the best single DL model (DenseNet169), the AUC, sensitivity, and specificity for distinguishing LDO were 0.778, 64.6%, and 72.1%, respectively. For the best ensemble DL model (VGG16, ResNet50, DenseNet121, DenseNet169, InceptionResNetV2, InceptionV3, and Xception), the AUC, sensitivity, and specificity for distinguishing LDO were 0.824, 84.8%, and 58.8%, respectively. The heat maps indicated that these DL models placed their focus on the tear meniscus region of the ASOCT images. CONCLUSION The combination of DL and ASOCT images could distinguish between the tear meniscus of LDO patients and that of normal subjects with a high level of accuracy. These results suggest that DL might be useful for automatic screening of patients for LDO.
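Ensembling here means combining the outputs of several trained networks. One common rule is soft voting, averaging each model's predicted probability; the abstract does not state the paper's exact combination rule, so this is only a generic sketch:

```python
def ensemble_probability(model_probs):
    """Soft-voting ensemble: average the per-sample probabilities
    predicted by several models. 'model_probs' is a list of lists,
    one list of probabilities per model, all over the same samples."""
    n_models = len(model_probs)
    n_samples = len(model_probs[0])
    return [
        sum(probs[i] for probs in model_probs) / n_models
        for i in range(n_samples)
    ]
```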
Affiliation(s)
- Hitoshi Imamura
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Hitoshi Tabuchi
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Daisuke Nagasato
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Hiroki Masumoto
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Hiroaki Baba
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Hiroki Furukawa
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Sachiko Maruoka
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
|
24
|
Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_200-1]
|
25
|
Prediction of age and brachial-ankle pulse-wave velocity using ultra-wide-field pseudo-color images by deep learning. Sci Rep 2020; 10:19369. [PMID: 33168888] [PMCID: PMC7652944] [DOI: 10.1038/s41598-020-76513-4]
Abstract
This study examined whether age and brachial-ankle pulse-wave velocity (baPWV) can be predicted from ultra-wide-field pseudo-color (UWPC) images using deep learning (DL). We examined 170 UWPC images of both eyes of 85 participants (40 men and 45 women; mean age, 57.5 ± 20.9 years). Three types of images (total, central, and peripheral) were analyzed by k-fold cross-validation (k = 5) using Visual Geometry Group-16 (VGG-16). After bias was eliminated using a generalized linear mixed model, the standard regression coefficients (SRCs) between the actual values and the values predicted by the neural network from the UWPC images were calculated, and the prediction accuracy of the DL model for age and baPWV was examined. The SRC between actual and predicted age was 0.833 for total images, 0.818 for central images, and 0.649 for peripheral images (all P < 0.001), and between actual and predicted baPWV was 0.390 for total images, 0.419 for central images, and 0.312 for peripheral images (all P < 0.001). These results show the potential of DL to predict age and vascular aging and could be useful for disease prevention and early treatment.
26. Seo SB, Cho HK. Deep learning classification of early normal-tension glaucoma and glaucoma suspects using Bruch's membrane opening-minimum rim width and RNFL. Sci Rep 2020; 10:19042. [PMID: 33149191] [PMCID: PMC7643070] [DOI: 10.1038/s41598-020-76154-7]
Abstract
We aimed to classify early normal-tension glaucoma (NTG) and glaucoma suspect (GS) using Bruch's membrane opening-minimum rim width (BMO-MRW), peripapillary retinal nerve fiber layer (RNFL) thickness, and the color classification of RNFL based on a deep-learning model. Discriminating early-stage glaucoma from GS is challenging, and a deep-learning model may be helpful to clinicians. NTG accounts for an average of 77% of open-angle glaucoma in Asians. BMO-MRW is a new structural parameter with advantages in assessing neuroretinal rim tissue more accurately than conventional parameters. The dataset consisted of 229 eyes of 277 patients with GS and 168 eyes of 285 patients with early NTG. A deep-learning algorithm was developed to discriminate between GS and early NTG using a training set, and its accuracy was validated in the test dataset using the area under the receiver operating characteristic curve (AUC). The deep neural network (DNN) model achieved the highest diagnostic performance, with an AUC of 0.966 (95% confidence interval 0.929–1.000) in classifying GS versus early NTG, while AUCs of 0.927–0.947 were obtained by other machine-learning models. The performance of the DNN model considering all three OCT-based parameters (AUC 0.966) was higher than that of any combination of just two parameters. As a single parameter, BMO-MRW (AUC 0.959) performed better than RNFL alone (AUC 0.914).
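The AUC reported above has a simple probabilistic reading: it is the chance that a randomly chosen diseased eye receives a higher model score than a randomly chosen non-diseased eye. A minimal sketch of that computation (the Mann-Whitney formulation of the ROC area; the scores below are made-up toy values, not study data):

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability that a random positive case is scored
    above a random negative case; ties count half (Mann-Whitney U)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# A perfectly separating classifier reaches AUC = 1.0; a model like the
# DNN above (AUC 0.966) is close to that ceiling.
perfect = auc([0.9, 0.8, 0.7], [0.3, 0.2, 0.1])
```

Practical libraries compute the same quantity from sorted scores in O(n log n), but the pairwise form above makes the definition explicit.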
Affiliations
- Sat Byul Seo: Department of Mathematics Education, School of Education, Kyungnam University, Changwon, Republic of Korea
- Hyun-Kyung Cho: Department of Ophthalmology, Gyeongsang National University Changwon Hospital, Gyeongsang National University, School of Medicine, 11 Samjeongja-ro, Seongsan-gu, Changwon, Gyeongsangnam-do, 51472, Republic of Korea; Institute of Health Sciences, School of Medicine, Gyeongsang National University, Jinju, Republic of Korea
27. Li Z, Guo C, Nie D, Lin D, Zhu Y, Chen C, Zhao L, Wu X, Dongye M, Xu F, Jin C, Zhang P, Han Y, Yan P, Lin H. Deep learning from "passive feeding" to "selective eating" of real-world data. NPJ Digit Med 2020; 3:143. [PMID: 33145439] [PMCID: PMC7603327] [DOI: 10.1038/s41746-020-00350-y]
Abstract
Artificial intelligence (AI) based on deep learning has shown excellent diagnostic performance in detecting various diseases with good-quality clinical images. Recently, AI diagnostic systems developed from ultra-widefield fundus (UWF) images have become popular standard-of-care tools in screening for ocular fundus diseases. However, in real-world settings, these systems must base their diagnoses on images with uncontrolled quality ("passive feeding"), leading to uncertainty about their performance. Here, using 40,562 UWF images, we develop a deep learning-based image filtering system (DLIFS) for detecting and filtering out poor-quality images in an automated fashion such that only good-quality images are transferred to the subsequent AI diagnostic system ("selective eating"). In three independent datasets from different clinical institutions, the DLIFS performed well with sensitivities of 96.9%, 95.6% and 96.6%, and specificities of 96.6%, 97.9% and 98.8%, respectively. Furthermore, we show that the application of our DLIFS significantly improves the performance of established AI diagnostic systems in real-world settings. Our work demonstrates that "selective eating" of real-world data is necessary and needs to be considered in the development of image-based AI systems.
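The "selective eating" idea above is a two-stage pipeline: a quality model gates which images reach the diagnostic model. A minimal sketch under stated assumptions — the function names, the dictionary image representation, and both lambda models are illustrative stand-ins, not the DLIFS implementation:

```python
def selective_eating(images, is_good_quality, diagnose):
    """Route only good-quality images to the diagnostic model;
    poor-quality images are set aside (e.g., for recapture)."""
    diagnosed, rejected = [], []
    for img in images:
        if is_good_quality(img):
            diagnosed.append((img, diagnose(img)))
        else:
            rejected.append(img)
    return diagnosed, rejected

# Toy stand-ins for the quality filter and the downstream diagnostic model.
images = [{"id": 1, "blur": 0.1}, {"id": 2, "blur": 0.9}, {"id": 3, "blur": 0.2}]
diagnosed, rejected = selective_eating(
    images,
    is_good_quality=lambda img: img["blur"] < 0.5,
    diagnose=lambda img: "refer" if img["id"] == 3 else "no referable disease",
)
```

The design point is that the diagnostic model's reported sensitivity and specificity only hold on the image distribution it was validated on; gating out ungradable images keeps deployment closer to that distribution.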
Affiliations
- Zhongwen Li, Chong Guo, Duoru Lin, Lanqin Zhao, Xiaohang Wu, Meimei Dongye, Fabao Xu, Chenjin Jin, Pisong Yan: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Danyao Nie: Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, 518001 Shenzhen, China
- Yi Zhu: Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Chuan Chen: Sylvester Comprehensive Cancer Centre, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Ping Zhang: Xudong Ophthalmic Hospital, 015000 Inner Mongolia, China
- Yu Han: EYE and ENT Hospital of Fudan University, 200031 Shanghai, China
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China; Centre for Precision Medicine, Sun Yat-sen University, 510060 Guangzhou, China
28. Campbell CG, Ting DSW, Keane PA, Foster PJ. The potential application of artificial intelligence for diagnosis and management of glaucoma in adults. Br Med Bull 2020; 134:21-33. [PMID: 32518944] [DOI: 10.1093/bmb/ldaa012]
Abstract
BACKGROUND Glaucoma is the most frequent cause of irreversible blindness worldwide. There is no cure, but early detection and treatment can slow progression and prevent loss of vision. It has been suggested that artificial intelligence (AI) has potential applications in the detection and management of glaucoma. SOURCES OF DATA This literature review is based on articles published in peer-reviewed journals. AREAS OF AGREEMENT There have been significant advances in both AI and imaging techniques that are able to identify the early signs of glaucomatous damage. Machine and deep learning algorithms show capabilities equivalent to human experts, if not superior. AREAS OF CONTROVERSY There are concerns that increased reliance on AI may lead to deskilling of clinicians. GROWING POINTS AI has potential for use in virtual review clinics, telemedicine and as a training tool for junior doctors. Unsupervised AI techniques offer the potential of uncovering currently unrecognized patterns of disease. If this promise is fulfilled, AI may then be of use in challenging cases or where a second opinion is desirable. AREAS TIMELY FOR DEVELOPING RESEARCH There is a need to determine the external validity of deep learning algorithms and to better understand how the 'black box' paradigm reaches results.
Affiliations
- Cara G Campbell: UCL Institute of Ophthalmology, Faculty of Brain Science, University College London, 11-43 Bath Street, London EC1V 9EL, UK
- Daniel S W Ting: Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London EC1V 2PD, UK
- Pearse A Keane: UCL Institute of Ophthalmology, Faculty of Brain Science, University College London, 11-43 Bath Street, London EC1V 9EL, UK; Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London EC1V 2PD, UK; National Institute for Health Research Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, 2/12 Wolfson Building and UCL Institute of Ophthalmology, 11-43 Bath Street, London EC1V 9EL, UK
- Paul J Foster: UCL Institute of Ophthalmology, Faculty of Brain Science, University College London, 11-43 Bath Street, London EC1V 9EL, UK; Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London EC1V 2PD, UK; National Institute for Health Research Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, 2/12 Wolfson Building and UCL Institute of Ophthalmology, 11-43 Bath Street, London EC1V 9EL, UK
29. Towards multi-center glaucoma OCT image screening with semi-supervised joint structure and function multi-task learning. Med Image Anal 2020; 63:101695. [DOI: 10.1016/j.media.2020.101695]
30. Sogawa T, Tabuchi H, Nagasato D, Masumoto H, Ikuno Y, Ohsugi H, Ishitobi N, Mitamura Y. Accuracy of a deep convolutional neural network in the detection of myopic macular diseases using swept-source optical coherence tomography. PLoS One 2020; 15:e0227240. [PMID: 32298265] [PMCID: PMC7161961] [DOI: 10.1371/journal.pone.0227240]
Abstract
This study examined and compared the performance of deep learning (DL) in identifying swept-source optical coherence tomography (SS-OCT) images without myopic macular lesions [i.e., no high myopia (nHM) vs. high myopia (HM)] and SS-OCT images with myopic macular lesions [e.g., myopic choroidal neovascularization (mCNV) and retinoschisis (RS)]. A total of 910 SS-OCT images were included (nHM, 146 images; HM, 531 images; mCNV, 122 images; RS, 111 images) and analyzed by k-fold cross-validation (k = 5) using the widely used Visual Geometry Group-16 DL model. Three tasks were examined: the binary classification of OCT images with or without myopic macular lesions; the binary classification of HM images versus images with myopic macular lesions (i.e., mCNV and RS images); and the ternary classification of HM, mCNV, and RS images. Sensitivity, specificity, and the area under the curve (AUC) were examined for the binary classifications, and the correct answer rate for the ternary classification. For classifying OCT images with or without myopic macular lesions, the results were: AUC, 0.970; sensitivity, 90.6%; specificity, 94.2%. For classifying HM images versus images with myopic macular lesions: AUC, 1.000; sensitivity, 100.0%; specificity, 100.0%. The correct answer rates in the ternary classification were: HM images, 96.5%; mCNV images, 77.9%; RS images, 67.6%; mean, 88.9%. Using noninvasive, easy-to-obtain SS-OCT images, the DL model was able to classify OCT images with and without myopic macular lesions such as mCNV and RS with high accuracy. These results suggest the possibility of highly accurate screening for ocular diseases using artificial intelligence, which may improve the prevention of blindness and reduce workloads for ophthalmologists.
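The sensitivity and specificity figures above follow directly from a confusion matrix. A minimal sketch of the definitions; note that the counts below are hypothetical values chosen only to reproduce the reported 90.6%/94.2%, not the study's actual confusion matrix:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of diseased eyes detected.
    Specificity = TN / (TN + FP): fraction of healthy eyes correctly cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts (NOT from the paper) that yield the first binary
# task's headline figures of 90.6% sensitivity and 94.2% specificity.
sens, spec = sensitivity_specificity(tp=453, fn=47, tn=471, fp=29)
```

For a screening task, sensitivity governs how many true cases slip through, while specificity governs the false-positive burden on follow-up clinics, which is why both are reported alongside the AUC.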
Affiliations
- Takahiro Sogawa: Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan
- Hitoshi Tabuchi, Daisuke Nagasato, Hiroki Masumoto: Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Yoshinori Mitamura: Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
31. Deep Neural Network-Based Method for Detecting Obstructive Meibomian Gland Dysfunction With in Vivo Laser Confocal Microscopy. Cornea 2020; 39:720-725. [PMID: 32040007] [DOI: 10.1097/ico.0000000000002279]
32. Mayro EL, Wang M, Elze T, Pasquale LR. The impact of artificial intelligence in the diagnosis and management of glaucoma. Eye (Lond) 2020; 34:1-11. [PMID: 31541215] [PMCID: PMC7002653] [DOI: 10.1038/s41433-019-0577-x]
Abstract
Deep learning (DL) is a subset of artificial intelligence (AI) that uses multilayer neural networks, modelled after the mammalian visual cortex, capable of synthesizing images in ways that will transform the field of glaucoma. Autonomous DL algorithms are capable of maximizing the information embedded in digital fundus photographs and optical coherence tomography scans, outperforming ophthalmologists in disease detection. Other unsupervised algorithms, such as principal component analysis (axis learning) and archetypal analysis (corner learning), facilitate visual field interpretation and show great promise for detecting functional glaucoma progression and differentiating it from non-glaucomatous changes when compared with conventional software packages. Forecasting tools such as the Kalman filter may revolutionize glaucoma management by accounting for a host of factors when setting target intraocular pressure goals that preserve vision. Activation maps generated from DL algorithms that process glaucoma data have the potential to direct our attention efficiently to critical data elements embedded in high-throughput data and to enhance our understanding of the glaucomatous process. It is hoped that AI will enable more accurate assessment of the copious data encountered in glaucoma management, improving our understanding of the disease, preserving vision, and enhancing the deep bonds that patients develop with their treating physicians.
Affiliations
- Eileen L Mayro: Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
- Mengyu Wang: Schepens Eye Research Institute, Harvard Medical School, Boston, MA, USA
- Tobias Elze: Schepens Eye Research Institute, Harvard Medical School, Boston, MA, USA; Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
- Louis R Pasquale: Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
33. Ting DS, Peng L, Varadarajan AV, Keane PA, Burlina PM, Chiang MF, Schmetterer L, Pasquale LR, Bressler NM, Webster DR, Abramoff M, Wong TY. Deep learning in ophthalmology: The technical and clinical considerations. Prog Retin Eye Res 2019; 72:100759. [DOI: 10.1016/j.preteyeres.2019.04.003]
34. Severity Classification of Conjunctival Hyperaemia by Deep Neural Network Ensembles. J Ophthalmol 2019; 2019:7820971. [PMID: 31275636] [PMCID: PMC6589312] [DOI: 10.1155/2019/7820971]
Abstract
Conjunctival hyperaemia is a common clinical ophthalmological finding and can be a symptom of various ocular disorders. Although several severity classification criteria have been proposed, none includes objective severity criteria. Neural networks and deep learning have been utilised in ophthalmology, but not for the purpose of objectively classifying the severity of conjunctival hyperaemia. To develop conjunctival hyperaemia grading software, we used 3700 images as training data and 923 images as validation test data. We trained nine neural network models, validated the performance of each network, and then chose the best combination of these networks. DenseNet201 was the best individual model, and the combination of DenseNet201, DenseNet121, VGG19, and ResNet50 was the best multi-model. The correlation between the multi-model responses and the occupied vessel area was 0.737 (p < 0.01). This system could be as accurate and comprehensive as specialists while being significantly faster and yielding consistent, objective values.
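Combining several networks into one grader, as above, is typically done by averaging their outputs. A minimal sketch of that averaging step; the function name and the lambda stand-ins for the four networks are illustrative assumptions, not the authors' implementation:

```python
def ensemble_score(models, image):
    """Average the severity scores produced by several networks
    for a single input image."""
    scores = [model(image) for model in models]
    return sum(scores) / len(scores)

# Toy stand-ins for the four selected networks
# (DenseNet201, DenseNet121, VGG19, ResNet50 in the study).
models = [lambda img: 2.0, lambda img: 2.5, lambda img: 3.0, lambda img: 2.5]
score = ensemble_score(models, image=None)  # → 2.5
```

Averaging tends to cancel the uncorrelated errors of the individual networks, which is why the four-model combination can outperform even the best single model.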