1.
Chambellant F, Gaveau J, Papaxanthis C, Thomas E. Deactivation and collective phasic muscular tuning for pointing direction: Insights from machine learning. Heliyon 2024; 10:e33461. PMID: 39050418; PMCID: PMC11268187; DOI: 10.1016/j.heliyon.2024.e33461.
Abstract
Arm movements in daily life must be adjusted to the demands of the environment, for example speed, direction, or distance. Previous research has shown that arm movement kinematics are optimally tuned to take advantage of gravity effects and minimize muscle effort across pointing directions and gravity contexts. Here we build upon these results and focus on muscular adjustments. We used machine learning to analyze the ensemble activities of multiple muscles recorded during pointing in various directions. The advantage of this technique is that it can reveal patterns in collective muscular activity that univariate statistics may miss. By providing an index of multimuscle activity, the machine learning (ML) analysis brought to light several features of tuning for pointing direction. In tracing tuning curves, all comparisons were made with respect to pointing in the horizontal, gravity-free plane. We demonstrated that tuning for direction does not take place uniformly but in a modular manner in which some muscle groups play a primary role. The antigravity muscles were more finely tuned to pointing direction than the gravity muscles. Of note was their tuning during the first half of downward pointing. As the antigravity muscles were deactivated during this phase, this supports the idea that deactivation is not an on-off function but is tuned to pointing direction. Further support for the tuning of the negative portions of the phasic EMG was provided by the observation of progressively improving classification accuracies with increasing angular distance from the horizontal. We also demonstrated that the durations of these negative phases, even without information on their amplitudes, are tuned to pointing direction. Overall, these results show that the motor system tunes muscle commands to exploit gravity effects and reduce muscular effort.
This work quantitatively demonstrates that phasic EMG negativity is an essential feature of muscle control. The ML analysis was done using Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). The two led to the same conclusions for the movements investigated, showing that the former, computationally inexpensive technique is a viable tool for routine investigation of motor control.
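As a rough illustration of the LDA component of such an analysis, the sketch below fits a two-class linear discriminant to invented two-feature "trials"; the data, the two features, and the class labels are hypothetical stand-ins, not the study's EMG recordings:

```python
# Minimal two-class, two-feature LDA in pure Python: project onto
# w = S_pooled^-1 (mu_b - mu_a) and threshold at the midpoint of the
# projected class means. Toy "upward" vs "downward" pointing trials.

def mean_vec(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def pooled_cov(a, b, mu_a, mu_b):
    d = len(mu_a)
    cov = [[0.0] * d for _ in range(d)]
    for rows, mu in ((a, mu_a), (b, mu_b)):
        for r in rows:
            for i in range(d):
                for j in range(d):
                    cov[i][j] += (r[i] - mu[i]) * (r[j] - mu[j])
    n = len(a) + len(b) - 2
    return [[cov[i][j] / n for j in range(d)] for i in range(d)]

def lda_fit(a, b):
    mu_a, mu_b = mean_vec(a), mean_vec(b)
    (s00, s01), (s10, s11) = pooled_cov(a, b, mu_a, mu_b)
    det = s00 * s11 - s01 * s10          # 2x2 inverse by hand
    inv = [[s11 / det, -s01 / det], [-s10 / det, s00 / det]]
    diff = [mu_b[0] - mu_a[0], mu_b[1] - mu_a[1]]
    w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
         inv[1][0] * diff[0] + inv[1][1] * diff[1]]
    # decision threshold: midpoint of the projected class means
    c = 0.5 * (w[0] * (mu_a[0] + mu_b[0]) + w[1] * (mu_a[1] + mu_b[1]))
    return w, c

def lda_predict(w, c, x):
    return 1 if w[0] * x[0] + w[1] * x[1] > c else 0

# invented trials: class 0 = "up", class 1 = "down", two muscle features each
up = [[0.9, 0.2], [1.0, 0.3], [0.8, 0.25], [1.1, 0.15]]
down = [[0.3, 0.8], [0.2, 0.9], [0.35, 0.85], [0.25, 0.95]]
w, c = lda_fit(up, down)
acc = (sum(lda_predict(w, c, x) == 0 for x in up)
       + sum(lda_predict(w, c, x) == 1 for x in down)) / 8
```

On real data one would, as in the paper, use classification accuracy across directions as the "index of multimuscle activity"; scikit-learn's `LinearDiscriminantAnalysis` does the same fit for arbitrary dimensionality.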
2.
Yan Y, Huang X, Jiang X, Gao Z, Liu X, Jin K, Ye J. Clinical evaluation of deep learning systems for assisting in the diagnosis of the epiretinal membrane grade in general ophthalmologists. Eye (Lond) 2024; 38:730-736. PMID: 37848677; PMCID: PMC10920879; DOI: 10.1038/s41433-023-02765-9.
Abstract
BACKGROUND Epiretinal membrane (ERM) is a common age-related retinal disease detected by optical coherence tomography (OCT), with a prevalence of 34.1% among people over 60 years old. This study aims to develop artificial intelligence (AI) systems to assist in the diagnosis of ERM grade using OCT images and to clinically evaluate the potential benefits and risks of our AI systems with a comparative experiment. METHODS A segmentation deep learning (DL) model that segments retinal features associated with ERM severity and a classification DL model that grades the severity of ERM were developed based on an OCT dataset obtained from three hospitals. A comparative experiment was conducted to compare the performance of four general ophthalmologists with and without assistance from the AI in diagnosing ERM severity. RESULTS The segmentation network had a pixel accuracy (PA) of 0.980 and a mean intersection over union (MIoU) of 0.873, while the six-classification network had a total accuracy of 81.3%. The diagnostic accuracy scores of the four ophthalmologists increased with AI assistance from 81.7%, 80.7%, 78.0%, and 80.7% to 87.7%, 86.7%, 89.0%, and 91.3%, respectively, while the corresponding time expenditures were reduced. The specific results of the study as well as the misinterpretations of the AI systems were analysed. CONCLUSION Through our comparative experiment, the AI systems proved to be valuable references for medical diagnosis and demonstrated the potential to accelerate clinical workflows. Systematic efforts are needed to ensure the safe and rapid integration of AI systems into ophthalmic practice.
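The two segmentation metrics quoted above, pixel accuracy (PA) and mean intersection over union (MIoU), can be computed directly from label masks; the 4x4 masks and three classes below are made-up examples, not the study's data:

```python
# Pixel accuracy and mean IoU for multi-class segmentation masks,
# computed over integer class-label grids.

def pixel_accuracy(pred, true):
    flat = [(p, t) for pr, tr in zip(pred, true) for p, t in zip(pr, tr)]
    return sum(p == t for p, t in flat) / len(flat)

def mean_iou(pred, true, n_classes):
    flat = [(p, t) for pr, tr in zip(pred, true) for p, t in zip(pr, tr)]
    ious = []
    for c in range(n_classes):
        inter = sum(p == c and t == c for p, t in flat)
        union = sum(p == c or t == c for p, t in flat)
        if union:                      # skip classes absent from both masks
            ious.append(inter / union)
    return sum(ious) / len(ious)

true = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [2, 2, 2, 2],
        [2, 2, 2, 2]]
pred = [[0, 0, 1, 1],
        [0, 1, 1, 1],
        [2, 2, 2, 2],
        [2, 2, 0, 2]]
pa = pixel_accuracy(pred, true)        # 14 of 16 pixels correct
miou = mean_iou(pred, true, 3)         # average of per-class IoUs
```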
Affiliation(s)
- Yan Yan: Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Xiaoling Huang: Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Xiaoyu Jiang: College of Control Science and Engineering, Zhejiang University, Hangzhou, 310027, China
- Zhiyuan Gao: Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Xindi Liu: Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Kai Jin: Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Juan Ye: Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
3.
Choi JY, Ryu IH, Kim JK, Lee IS, Yoo TK. Development of a generative deep learning model to improve epiretinal membrane detection in fundus photography. BMC Med Inform Decis Mak 2024; 24:25. PMID: 38273286; PMCID: PMC10811871; DOI: 10.1186/s12911-024-02431-4.
Abstract
BACKGROUND The epiretinal membrane (ERM) is a common retinal disorder characterized by abnormal fibrocellular tissue at the vitreomacular interface. Most patients with ERM are asymptomatic at early stages. Therefore, screening for ERM will become increasingly important. Despite the high prevalence of ERM, few deep learning studies have investigated ERM detection in the color fundus photography (CFP) domain. In this study, we built a generative model to enhance ERM detection performance in the CFP. METHODS This deep learning study retrospectively collected 302 ERM and 1,250 healthy CFP data points from a healthcare center. The generative model using StyleGAN2 was trained using single-center data. EfficientNetB0 with StyleGAN2-based augmentation was validated using independent internal single-center data and external datasets. We randomly assigned healthcare center data to the development (80%) and internal validation (20%) datasets. Data from two publicly accessible sources were used as external validation datasets. RESULTS StyleGAN2 facilitated realistic CFP synthesis with the characteristic cellophane reflex features of the ERM. The proposed method with StyleGAN2-based augmentation outperformed the typical transfer learning without a generative adversarial network. The proposed model achieved an area under the receiver operating characteristic (AUC) curve of 0.926 for internal validation. AUCs of 0.951 and 0.914 were obtained for the two external validation datasets. Compared with the deep learning model without augmentation, StyleGAN2-based augmentation improved the detection performance and contributed to the focus on the location of the ERM. CONCLUSIONS We proposed an ERM detection model by synthesizing realistic CFP images with the pathological features of ERM through generative deep learning. We believe that our deep learning framework will help achieve a more accurate detection of ERM in a limited data setting.
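The AUC figures reported above can be computed from raw detector scores via the Mann-Whitney rank formulation (the probability that a random positive outscores a random negative); the scores and labels below are toy stand-ins for per-image ERM probabilities, not the study's outputs:

```python
# AUC as the Mann-Whitney U statistic: count positive/negative pairs
# where the positive scores higher, with ties counted as half.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# toy per-image scores with ground-truth labels (1 = ERM, 0 = healthy)
scores = [0.95, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    0,    0,    0]
value = auc(scores, labels)            # 11 of 12 pairs correctly ordered
```

`sklearn.metrics.roc_auc_score` computes the same quantity for large score arrays.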
Affiliation(s)
- Joon Yul Choi: Department of Biomedical Engineering, Yonsei University, Wonju, South Korea
- Ik Hee Ryu: Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea; Research and Development Department, VISUWORKS, Seoul, South Korea
- Jin Kuk Kim: Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea; Research and Development Department, VISUWORKS, Seoul, South Korea
- In Sik Lee: Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
- Tae Keun Yoo: Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea; Research and Development Department, VISUWORKS, Seoul, South Korea
4.
Anandi L, Budihardja BM, Anggraini E, Badjrai RA, Nusanti S. The use of artificial intelligence in detecting papilledema from fundus photographs. Taiwan J Ophthalmol 2023; 13:184-190. PMID: 37484606; PMCID: PMC10361430; DOI: 10.4103/tjo.tjo-d-22-00178.
Abstract
Papilledema is optic disc swelling caused by increased intracranial pressure. The diagnosis of papilledema is made on the basis of ophthalmoscopy findings. Although important, ophthalmoscopy can be challenging for general physicians and nonophthalmic specialists. Meanwhile, artificial intelligence (AI) has the potential to be a useful tool for the detection of fundus abnormalities, including papilledema, and may also be useful in grading papilledema. We aim to review the latest advances in the diagnosis of papilledema using AI and explore its potential. This review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A systematic literature search was performed on four databases (PubMed, Cochrane, ProQuest, and Google Scholar) using the keywords "AI" and "papilledema", including their synonyms. The literature search identified 372 articles, of which six met the eligibility criteria. Of the six articles included in this review, three assessed the use of AI for detecting papilledema, one evaluated the use of AI for papilledema grading using the Frisén criteria, and two assessed the use of AI for both detection and grading. The models for both papilledema detection and grading showed good diagnostic value, with high sensitivity (83.1%-99.82%), specificity (82.6%-98.65%), and accuracy (85.89%-99.89%). Even though studies on the use of AI in papilledema are still limited, AI has shown promising potential for papilledema detection and grading. Further studies will help provide more evidence to support the use of AI in clinical practice.
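The three diagnostic figures quoted above (sensitivity, specificity, accuracy) all follow directly from a 2x2 confusion matrix; the counts below are invented for illustration, not drawn from the reviewed studies:

```python
# Sensitivity, specificity, and accuracy from true/false positive and
# negative counts of a binary diagnostic test.

def diag_metrics(tp, fn, tn, fp):
    sens = tp / (tp + fn)              # detected fraction of diseased eyes
    spec = tn / (tn + fp)              # cleared fraction of healthy eyes
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

# hypothetical counts: 100 papilledema fundi, 100 normal fundi
sens, spec, acc = diag_metrics(tp=90, fn=10, tn=85, fp=15)
```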
Affiliation(s)
- Lazuardiah Anandi: Department of Ophthalmology, Faculty of Medicine, Dr. Cipto Mangunkusumo Hospital, University of Indonesia, Jakarta, Indonesia
- Brigitta Marcia Budihardja: Department of Ophthalmology, Faculty of Medicine, Dr. Cipto Mangunkusumo Hospital, University of Indonesia, Jakarta, Indonesia
- Erika Anggraini: Department of Ophthalmology, Faculty of Medicine, Dr. Cipto Mangunkusumo Hospital, University of Indonesia, Jakarta, Indonesia
- Rona Ali Badjrai: Department of Ophthalmology, Faculty of Medicine, Dr. Cipto Mangunkusumo Hospital, University of Indonesia, Jakarta, Indonesia
- Syntia Nusanti: Department of Ophthalmology, Division of Neuro-Ophthalmology, Faculty of Medicine, Dr. Cipto Mangunkusumo Hospital, University of Indonesia, Jakarta, Indonesia
5.
iERM: An Interpretable Deep Learning System to Classify Epiretinal Membrane for Different Optical Coherence Tomography Devices: A Multi-Center Analysis. J Clin Med 2023; 12:jcm12020400. PMID: 36675327; PMCID: PMC9862104; DOI: 10.3390/jcm12020400.
Abstract
Background: Epiretinal membranes (ERM) have been found to be common among individuals >50 years old. However, severity grading of ERM from optical coherence tomography (OCT) images has remained a challenge owing to the lack of reliable and interpretable analysis methods. Thus, this study aimed to develop a two-stage deep learning (DL) system named iERM to provide accurate automatic grading of ERM for clinical practice. Methods: iERM was trained on human segmentations of key features to improve classification performance and simultaneously provide interpretability for the classification results. We developed and tested iERM using a total of 4547 OCT B-scans from four different commercial OCT devices, collected from nine international medical centers. Results: The integrated network improved grading performance by 1-5.9% compared with a traditional classification DL model and achieved high accuracy scores of 82.9%, 87.0%, and 79.4% on the internal test dataset and the two external test datasets, respectively. This is comparable to retinal specialists, whose average accuracy scores are 87.8% and 79.4% on the two external test datasets. Conclusion: This study demonstrates a benchmark method for improving the performance and enhancing the interpretability of a traditional DL model by incorporating segmentation based on prior human knowledge. It may have the potential to provide precise guidance for ERM diagnosis and treatment.
6.
Hsia Y, Lin YY, Wang BS, Su CY, Lai YH, Hsieh YT. Prediction of Visual Impairment in Epiretinal Membrane and Feature Analysis: A Deep Learning Approach Using Optical Coherence Tomography. Asia Pac J Ophthalmol (Phila) 2023; 12:21-28. PMID: 36706331; DOI: 10.1097/apo.0000000000000576.
Abstract
PURPOSE The aim was to develop a deep learning model for predicting the extent of visual impairment in epiretinal membrane (ERM) using optical coherence tomography (OCT) images, and to analyze the associated features. METHODS Six hundred macular OCT images from eyes with ERM and no visually significant media opacity or other retinal diseases were obtained. Those with best-corrected visual acuity ≤20/50 were classified as "profound visual impairment," while those with best-corrected visual acuity >20/50 were classified as "less visual impairment." Ninety percent of images were used as the training data set and 10% were used for testing. Two convolutional neural network models (ResNet-50 and ResNet-18) were adopted for training. The t-distributed stochastic neighbor-embedding approach was used to compare their performances. The Grad-CAM technique was used in the heat map generative phase for feature analysis. RESULTS During the model development, the training accuracy was 100% in both convolutional neural network models, while the testing accuracy was 70% and 80% for ResNet-18 and ResNet-50, respectively. The t-distributed stochastic neighbor-embedding approach found that the deeper structure (ResNet-50) had better discrimination on OCT characteristics for visual impairment than the shallower structure (ResNet-18). The heat maps indicated that the key features for visual impairment were located mostly in the inner retinal layers of the fovea and parafoveal regions. CONCLUSIONS Deep learning algorithms could assess the extent of visual impairment from OCT images in patients with ERM. Changes in inner retinal layers were found to have a greater impact on visual acuity than the outer retinal changes.
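The Grad-CAM heat maps mentioned above weight each convolutional feature map by its spatially averaged gradient and pass the weighted sum through a ReLU. A minimal sketch of that combination step follows; the 2x2 feature maps and gradients are toy values (a real implementation would take them from the trained ResNet's last convolutional layer):

```python
# Grad-CAM combination step: per-channel weights are the global average
# of each channel's gradient; the class activation map is the ReLU of
# the weighted sum of feature maps.

def grad_cam(feature_maps, gradients):
    # one importance weight per channel
    weights = [sum(sum(row) for row in g) / (len(g) * len(g[0]))
               for g in gradients]
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for wt, fmap in zip(weights, feature_maps):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wt * fmap[i][j]
    return [[max(0.0, v) for v in row] for row in cam]   # ReLU

# two toy 2x2 channels with uniform gradients of +0.4 and -0.2
fmaps = [[[1.0, 0.0], [0.0, 2.0]],
         [[0.0, 1.0], [1.0, 0.0]]]
grads = [[[0.4, 0.4], [0.4, 0.4]],
         [[-0.2, -0.2], [-0.2, -0.2]]]
cam = grad_cam(fmaps, grads)
```

Upsampling `cam` to the input resolution and overlaying it on the OCT B-scan yields the heat maps used for the feature analysis.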
Affiliation(s)
- Yun Hsia: National Taiwan University Biomedical Park Hospital, Hsin-Chu; Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
- Yu-Yi Lin: Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Bo-Sin Wang: Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chung-Yen Su: Department of Electrical Engineering, National Taiwan Normal University, Taipei, Taiwan
- Ying-Hui Lai: Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan; Medical Device Innovation & Translation Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yi-Ting Hsieh: Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
7.
Tang Y, Gao X, Wang W, Dan Y, Zhou L, Su S, Wu J, Lv H, He Y. Automated Detection of Epiretinal Membranes in OCT Images Using Deep Learning. Ophthalmic Res 2022; 66:238-246. PMID: 36170844; DOI: 10.1159/000525929.
Abstract
INTRODUCTION Development and validation of a deep learning algorithm to automatically identify and locate epiretinal membrane (ERM) regions in OCT images. METHODS OCT images of 468 eyes were retrospectively collected from a total of 404 ERM patients. One expert manually annotated the ERM regions for all images. A total of 422 images (90%) and the remaining 46 images (10%) were used as the training dataset and validation dataset for deep learning algorithm training and validation, respectively. One senior and one junior clinician read the images, and their diagnostic results were compared. RESULTS The algorithm accurately segmented and located the ERM regions in OCT images. The image-level accuracy was 95.65% and the ERM region-level accuracy was 90.14%. In comparison experiments, the accuracies of the junior clinician improved from 85.00% and 61.29% without the assistance of the algorithm to 100.00% and 90.32% with the assistance of the algorithm. The corresponding results for the senior clinician were 96.15% and 95.00% without the assistance of the algorithm, and 96.15% and 97.50% with it. CONCLUSIONS The developed deep learning algorithm can accurately segment ERM regions in OCT images. This deep learning approach may help clinicians reach clinical diagnoses with better accuracy and efficiency.
Affiliation(s)
- Yong Tang: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Xiaorong Gao: Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Weijia Wang: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yujiao Dan: Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Linjing Zhou: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Song Su: Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Jiali Wu: Department of Anesthesiology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Hongbin Lv: Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Yue He: Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
8.
Liu Y, Li J, Ji H, Zhuang J. Comparisons of Glutamate in the Brains of Alzheimer’s Disease Mice Under Chemical Exchange Saturation Transfer Imaging Based on Machine Learning Analysis. Front Neurosci 2022; 16:838157. PMID: 35592256; PMCID: PMC9112835; DOI: 10.3389/fnins.2022.838157.
Abstract
Chemical exchange saturation transfer (CEST) is a molecular magnetic resonance imaging (MRI) technique that indirectly measures low-concentration metabolite or free protein signals that are difficult to detect with conventional MRI. We applied CEST to Alzheimer’s disease (AD) and analyzed both region-of-interest (ROI) and pixel dimensions. Through the ROI-dimension analysis, we found that the glutamate content in the brains of AD mice was higher than that of normal mice of the same age. In the pixel-dimension analysis, we obtained a map of the distribution of glutamate in the mouse brain. Based on the experimental data of this study, we designed an algorithm framework based on data migration and used a ResNet neural network to classify the glutamate distribution images of AD mice, with an accuracy of 75.6%. We evaluate the possibility of glutamate imaging as a biomarker for AD detection for the first time, with important implications for the detection and treatment of AD.
Affiliation(s)
- Yixuan Liu: Shanghai Yangzhi Rehabilitation Hospital Shanghai Sunshine Rehabilitation Center, College of Electronics and Information Engineering, Tongji University, Shanghai, China
- Jie Li (corresponding author): Shanghai Yangzhi Rehabilitation Hospital Shanghai Sunshine Rehabilitation Center, College of Electronics and Information Engineering, Tongji University, Shanghai, China
- Hongfei Ji: Shanghai Yangzhi Rehabilitation Hospital Shanghai Sunshine Rehabilitation Center, College of Electronics and Information Engineering, Tongji University, Shanghai, China; ORCID 0000-0002-2759-7084
- Jie Zhuang: School of Psychology, Shanghai University of Sport, Shanghai, China; ORCID 0000-0002-3316-5536
9.
End-to-End Multi-Task Learning Approaches for the Joint Epiretinal Membrane Segmentation and Screening in OCT Images. Comput Med Imaging Graph 2022; 98:102068. DOI: 10.1016/j.compmedimag.2022.102068.
10.
Kumar H, Goh KL, Guymer RH, Wu Z. A clinical perspective on the expanding role of artificial intelligence in age-related macular degeneration. Clin Exp Optom 2022; 105:674-679. PMID: 35073498; DOI: 10.1080/08164622.2021.2022961.
Abstract
In recent years, there has been intense development of artificial intelligence (AI) techniques, which have the potential to improve the clinical management of age-related macular degeneration (AMD) and facilitate the prevention of irreversible vision loss from this condition. Such AI techniques could be used as clinical decision support tools to: (i) improve the detection of AMD by community eye health practitioners, (ii) enhance risk stratification to enable personalised monitoring strategies for those with the early stages of AMD, and (iii) enable early detection of signs indicative of possible choroidal neovascularisation allowing triaging of patients requiring urgent review. This review discusses the latest developments in AI techniques that show promise for these tasks, as well as how they may help in the management of patients being treated for choroidal neovascularisation and in accelerating the discovery of new treatments in AMD.
Affiliation(s)
- Himeesh Kumar: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
- Kai Lyn Goh: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
- Robyn H Guymer: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
- Zhichao Wu: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
11.
Labkovich M, Paul M, Kim E, Serafini RA, Lakhtakia S, Valliani AA, Warburton AJ, Patel A, Zhou D, Sklar B, Chelnis J, Elahi E. Portable hardware & software technologies for addressing ophthalmic health disparities: A systematic review. Digit Health 2022; 8:20552076221090042. PMID: 35558637; PMCID: PMC9087242; DOI: 10.1177/20552076221090042.
Abstract
Vision impairment continues to be a major global problem: the WHO estimates that 2.2 billion people live with vision loss or blindness, one billion cases of which could be prevented by expanding diagnostic capabilities. Direct global healthcare costs associated with these conditions totaled $255 billion in 2010, with a rapid upward projection to $294 billion in 2020. Accordingly, the WHO proposed 2030 targets to enhance integrated, patient-centered vision care by expanding worldwide coverage for refractive error and cataract. Given the cost and portability limitations of adapted vision screening models, there is a clear need for new, more accessible vision testing tools. This comparative, systematic review highlights the need for new ophthalmic equipment and approaches while examining existing and emerging technologies that could expand the capacity for disease identification and access to diagnostic tools. Specifically, the review focuses on portable hardware- and software-centered strategies that can be deployed in remote locations for the detection of ophthalmic conditions and refractive error. Advancements in portable hardware, automated software screening tools, and big-data-centric analytics, including machine learning, may provide an avenue for improving ophthalmic healthcare.
Affiliation(s)
- Margarita Labkovich: Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Megan Paul: Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Eliott Kim: Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Randal A. Serafini: Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Nash Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Aly A Valliani: Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Andrew J Warburton: Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Aashay Patel: Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Davis Zhou: Department of Ophthalmology, New York Eye and Ear Infirmary of Mount Sinai, New York, NY, USA
- Bonnie Sklar: Department of Ophthalmology, Wills Eye Hospital, Philadelphia, PA, USA
- James Chelnis: Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ebrahim Elahi: Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
12.
Tabuchi H. Understanding required to consider artificial intelligence applications to the field of ophthalmology. Taiwan J Ophthalmol 2022. PMID: 37484612; PMCID: PMC10361427; DOI: 10.4103/2211-5056.356685.
13.
Tabuchi H, Nagasato D, Masumoto H, Tanabe M, Ishitobi N, Ochi H, Shimizu Y, Kiuchi Y. Developing an iOS application that uses machine learning for the automated diagnosis of blepharoptosis. Graefes Arch Clin Exp Ophthalmol 2021; 260:1329-1335. PMID: 34734349; DOI: 10.1007/s00417-021-05475-8.
Abstract
PURPOSE To assess the performance of artificial intelligence in the automated classification of images, taken with a tablet device, of patients with blepharoptosis and subjects with normal eyelids. METHODS This is a prospective, observational study. A total of 1276 eyelid images (624 images from 347 blepharoptosis cases and 652 images from 367 normal controls) from 606 participants were analyzed. In order to obtain a sufficient number of images for analysis, 1 to 4 eyelid images were obtained from each participant. We developed a model by fully retraining the pre-trained MobileNetV2 convolutional neural network and verified whether automatic diagnosis of blepharoptosis was possible from the images. In addition, we visualized how the model captured the features of the test data with Score-CAM. k-fold cross-validation (k = 5) was adopted for splitting the training and validation data. Sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) for detecting blepharoptosis were examined. RESULTS The model had a sensitivity of 83.0% (95% confidence interval [CI], 79.8-85.9) and a specificity of 82.5% (95% CI, 79.4-85.4). The accuracy on the validation data was 82.8%, and the AUC was 0.900 (95% CI, 0.882-0.917). CONCLUSION Artificial intelligence was able to classify images of blepharoptosis and normal eyelids taken with a tablet device with high accuracy. Thus, the diagnosis of blepharoptosis with a tablet device is possible at a high level of accuracy. TRIAL REGISTRATION Date of registration: 2021-06-25. Trial registration number: UMIN000044660. Registration site: https://upload.umin.ac.jp/cgi-open-bin/ctr/ctr_view.cgi?recptno=R000051004.
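The k-fold cross-validation (k = 5) used above partitions the samples into five train/validation splits, each sample serving as validation exactly once. A minimal index-bookkeeping sketch follows; the sample count is arbitrary (a real pipeline would split participants, not images, to avoid leakage across a participant's multiple photos):

```python
# Plain k-fold split over sample indices: returns k (train, validation)
# index-list pairs; fold sizes differ by at most one.

def k_fold(n_samples, k):
    idx = list(range(n_samples))
    base, extra = divmod(n_samples, k)
    folds, start = [], 0
    for f in range(k):
        size = base + (1 if f < extra else 0)
        val = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        folds.append((train, val))
        start += size
    return folds

folds = k_fold(10, 5)   # five folds of 2 validation samples each
```

`sklearn.model_selection.KFold` provides the same splitting (plus shuffling and grouped variants) for real datasets.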
Affiliation(s)
- Hitoshi Tabuchi: Department of Technology and Design Thinking for Medicine, Hiroshima University, Hiroshima, Japan; Department of Ophthalmology, Saneikai Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Daisuke Nagasato: Department of Technology and Design Thinking for Medicine, Hiroshima University, Hiroshima, Japan; Department of Ophthalmology, Saneikai Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Hiroki Masumoto: Department of Technology and Design Thinking for Medicine, Hiroshima University, Hiroshima, Japan; Department of Ophthalmology, Saneikai Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Mao Tanabe: Department of Ophthalmology, Saneikai Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Naofumi Ishitobi: Department of Technology and Design Thinking for Medicine, Hiroshima University, Hiroshima, Japan
- Hiroki Ochi: Department of Medicine, Hiroshima University, Hiroshima, Japan
- Yoshie Shimizu: Department of Ophthalmology, Saneikai Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Yoshiaki Kiuchi: Department of Ophthalmology and Visual Sciences, Hiroshima University, Hiroshima, Japan
14
Prediction of postoperative visual acuity after vitrectomy for macular hole using deep learning-based artificial intelligence. Graefes Arch Clin Exp Ophthalmol 2021; 260:1113-1123. [PMID: 34636995 DOI: 10.1007/s00417-021-05427-2]
Abstract
PURPOSE To create a model predicting postoperative visual acuity (VA) after vitrectomy for macular hole (MH) from preoperative optical coherence tomography (OCT) images, using deep learning (DL)-based artificial intelligence. METHODS This was a retrospective single-center study. We evaluated 259 eyes that underwent vitrectomy for MHs, divided into four groups by 6-month postoperative Snellen VA: (A) ≥ 20/20; (B) 20/25-20/32; (C) 20/32-20/63; and (D) ≤ 20/100. Training data were randomly selected, comprising 20 eyes per group. Test data were also randomly selected, comprising 52 eyes in the same proportions as the groups in the total database. Preoperative OCT images with corresponding postoperative VA values were used to train the original DL network, and the final prediction of postoperative VA was obtained by regression analysis of the DL network output. For comparison, we created a model predicting postoperative VA from preoperative VA, MH size, and age using multivariate linear regression. Precision values were determined, and correlation coefficients between predicted and actual postoperative VA values were calculated for the two models. RESULTS The DL and multivariate models had precision values of 46% and 40%, respectively. Postoperative VA predicted by DL and predicted from preoperative VA and MH size both correlated with actual postoperative VA at 6 months (P < .0001 for both; r = .62 and r = .55, respectively). CONCLUSION Postoperative VA after MH treatment could be predicted via DL from preoperative OCT images with greater accuracy than with multivariate linear regression using preoperative VA, MH size, and age.
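The regression step, in which a continuous VA value is fitted to a predictor and then correlated with the actual outcome, can be sketched with ordinary least squares. This is a simplified one-predictor illustration, not the paper's multivariate model; all numbers are hypothetical logMAR-style values:

```python
import math

def fit_line(x, y):
    """Ordinary least squares for y ≈ a*x + b with a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def pearson_r(x, y):
    """Pearson correlation coefficient between two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = math.sqrt(sum((xi - mx) ** 2 for xi in x) *
                    sum((yi - my) ** 2 for yi in y))
    return num / den

# Hypothetical preoperative vs. 6-month postoperative VA values
pre = [1.0, 0.8, 0.6, 0.4, 0.2]
post = [0.7, 0.6, 0.4, 0.3, 0.1]
a, b = fit_line(pre, post)
predicted = [a * x + b for x in pre]
r = pearson_r(predicted, post)   # correlation of predicted vs. actual VA
```

The study's r values (.62 and .55) correspond to this `pearson_r` quantity computed on real predictions.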
15
Shao E, Liu C, Wang L, Song D, Guo L, Yao X, Xiong J, Wang B, Hu Y. Artificial intelligence-based detection of epimacular membrane from color fundus photographs. Sci Rep 2021; 11:19291. [PMID: 34588493 PMCID: PMC8481557 DOI: 10.1038/s41598-021-98510-x]
Abstract
Epiretinal membrane (ERM) is a common ophthalmological disorder of high prevalence. Its symptoms include metamorphopsia, blurred vision, and decreased visual acuity. Early diagnosis and timely treatment of ERM are crucial to preventing vision loss. Although optical coherence tomography (OCT) is regarded as a de facto standard for ERM diagnosis owing to its intuitiveness and high sensitivity, ophthalmoscopic examination and fundus photographs retain the advantages of price and accessibility. Artificial intelligence (AI) has been widely applied in health care for its robust performance in detecting various diseases. In this study, we validated a previously trained deep neural network-based AI model for ERM detection on color fundus photographs. An independent test set of fundus photographs was labeled by a group of ophthalmologists according to the corresponding OCT images as the gold standard. The test set was then interpreted by other ophthalmologists and by the AI model, both blinded to the OCT results. Compared with manual diagnosis based on fundus photographs alone, the AI model had comparable accuracy (AI model 77.08% vs. integrated manual diagnosis 75.69%, χ2 = 0.038, P = 0.845, McNemar's test) and higher sensitivity (75.90% vs. 63.86%, χ2 = 4.500, P = 0.034, McNemar's test), at the cost of lower but reasonable specificity (78.69% vs. 91.80%, χ2 = 6.125, P = 0.013, McNemar's test). Thus, our AI model can serve as a possible alternative to manual diagnosis in ERM screening.
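McNemar's test, used above to compare the AI and manual diagnoses on paired readings, depends only on the two discordant counts. A minimal stdlib sketch without continuity correction; the counts 21 and 11 are invented for illustration:

```python
import math

def mcnemar(b, c):
    """McNemar chi-square for paired binary outcomes.
    b, c = discordant-pair counts (method A right / B wrong, and vice versa).
    Returns (chi2, p) for 1 degree of freedom, no continuity correction."""
    chi2 = (b - c) ** 2 / (b + c)
    # Survival function of chi-square with df = 1: P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical discordant counts when comparing AI vs. manual readings
chi2, p = mcnemar(21, 11)
```

With real data, `b` and `c` come from cross-tabulating the two raters' correctness on the same images.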
Affiliation(s)
- Enhua Shao
- Department of Ophthalmology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Congxin Liu
- Beijing Eaglevision Technology Co., Ltd, Beijing, China
- Lei Wang
- Department of Ophthalmology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Dan Song
- Department of Ophthalmology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Libin Guo
- Department of Ophthalmology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Xuan Yao
- Beijing Eaglevision Technology Co., Ltd, Beijing, China
- Jianhao Xiong
- Beijing Eaglevision Technology Co., Ltd, Beijing, China
- Bin Wang
- Beijing Eaglevision Technology Co., Ltd, Beijing, China
- Yuntao Hu
- Department of Ophthalmology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
16
Krittanawong C, Virk HUH, Kumar A, Aydar M, Wang Z, Stewart MP, Halperin JL. Machine learning and deep learning to predict mortality in patients with spontaneous coronary artery dissection. Sci Rep 2021; 11:8992. [PMID: 33903608 PMCID: PMC8076284 DOI: 10.1038/s41598-021-88172-0]
Abstract
Machine learning (ML) and deep learning (DL) can successfully predict high-prevalence events in very large databases (big data), but the value of this methodology for risk prediction in smaller cohorts with uncommon diseases and infrequent events is uncertain. The clinical course of spontaneous coronary artery dissection (SCAD) is variable, and no reliable methods are available to predict mortality. Based on the hypothesis that ML and DL techniques could enhance the identification of patients at risk, we applied a deep neural network to information available in electronic health records (EHR) to predict in-hospital mortality in patients with SCAD. We extracted patient data from the EHR of a large urban health system and applied several ML and DL models using candidate clinical variables potentially associated with mortality. We partitioned the data into training and evaluation sets with cross-validation. We estimated model performance based on the area under the receiver operating characteristic curve (AUC) and balanced accuracy. As sensitivity analyses, we examined results limited to cases with complete clinical information available. We identified 375 SCAD patients, among whom mortality during the index hospitalization was 11.5%. The best-performing DL algorithm identified in-hospital mortality with an AUC of 0.98 (95% CI 0.97-0.99), outperforming the other ML models (P < 0.0001). Among the ML models, the AUC ranged from 0.50 with the random forest method (95% CI 0.41-0.58) to 0.95 with the AdaBoost model (95% CI 0.93-0.96), with intermediate performance using logistic regression, decision tree, support vector machine, K-nearest neighbors, and extreme gradient boosting methods. A deep neural network model was thus associated with higher predictive accuracy and discriminative power than logistic regression or the other ML models for identifying patients with acute coronary syndrome due to SCAD who are prone to early mortality.
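Balanced accuracy, one of the two performance metrics named above, is the unweighted mean of sensitivity and specificity, which makes it informative on rare-event cohorts like this one (11.5% mortality) where plain accuracy is dominated by the majority class. A stdlib sketch with invented labels:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity; robust to the class imbalance
    typical of rare-event cohorts such as in-hospital mortality."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

# Hypothetical predictions on an imbalanced set (2 deaths, 8 survivors)
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]
bacc = balanced_accuracy(y_true, y_pred)
```

Here plain accuracy would be 0.8 despite the model missing half of the deaths; balanced accuracy penalizes that miss.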
Affiliation(s)
- Chayakrit Krittanawong
- Section of Cardiology, Baylor College of Medicine, 1 Baylor Plaza, Houston, TX, 77030, USA
- Icahn School of Medicine at Mount Sinai, The Zena and Michael A. Wiener Cardiovascular Institute, Mount Sinai Heart, New York, NY, USA
- Hafeez Ul Hassan Virk
- Department of Cardiovascular Diseases, Case Western Reserve University, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Anirudh Kumar
- Heart and Vascular Institute, Cleveland Clinic, Cleveland, OH, USA
- Mehmet Aydar
- Department of Computer Science, Kent State University, Kent, OH, USA
- Zhen Wang
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA
- Division of Health Care Policy and Research, Department of Health Sciences Research, Mayo Clinic, Rochester, MN, USA
- Matthew P Stewart
- The Institute of Applied and Computational Sciences, Harvard University, Boston, MA, USA
- School of Engineering and Applied Sciences, Harvard University, Boston, MA, USA
- Jonathan L Halperin
- Icahn School of Medicine at Mount Sinai, The Zena and Michael A. Wiener Cardiovascular Institute, Mount Sinai Heart, New York, NY, USA
17
Fung AT, Galvin J, Tran T. Epiretinal membrane: A review. Clin Exp Ophthalmol 2021; 49:289-308. [PMID: 33656784 DOI: 10.1111/ceo.13914]
Abstract
The prevalence of epiretinal membrane (ERM) is 7% to 11.8%, with increasing age being the most important risk factor. Although most ERM is idiopathic, common secondary causes include cataract surgery, retinal vascular disease, uveitis and retinal tears. The myofibroblastic pre-retinal cells are thought to transdifferentiate from glial and retinal pigment epithelial cells that reach the retinal surface via defects in the internal limiting membrane (ILM) or from the vitreous cavity. Grading schemes have evolved from clinical signs to optical coherence tomography (OCT)-based classification with associated features such as the cotton ball sign. Features predictive of a better prognosis include the absence of ectopic inner foveal layers, cystoid macular oedema, acquired vitelliform lesions, and ellipsoid and cone outer segment termination defects. OCT-angiography shows a reduced size of the foveal avascular zone. Vitrectomy with membrane peeling remains the mainstay of treatment for symptomatic ERMs. Additional ILM peeling reduces recurrence but is associated with anatomical changes including inner retinal dimpling.
Affiliation(s)
- Adrian T Fung
- Westmead Clinical School, Discipline of Ophthalmology and Eye Health, The University of Sydney, Sydney, New South Wales, Australia; Save Sight Institute, Central Clinical School, Discipline of Ophthalmology and Eye Health, The University of Sydney, Sydney, New South Wales, Australia; Department of Ophthalmology, Faculty of Medicine, Health and Human Sciences, Macquarie University Hospital, Sydney, New South Wales, Australia
- Justin Galvin
- St. Vincent's Hospital, Melbourne, Victoria, Australia
- Tuan Tran
- Save Sight Institute, Central Clinical School, Discipline of Ophthalmology and Eye Health, The University of Sydney, Sydney, New South Wales, Australia
18
Imamura H, Tabuchi H, Nagasato D, Masumoto H, Baba H, Furukawa H, Maruoka S. Automatic screening of tear meniscus from lacrimal duct obstructions using anterior segment optical coherence tomography images by deep learning. Graefes Arch Clin Exp Ophthalmol 2021; 259:1569-1577. [PMID: 33576859 DOI: 10.1007/s00417-021-05078-3]
Abstract
PURPOSE We assessed the ability of deep learning (DL) models to distinguish between the tear menisci of lacrimal duct obstruction (LDO) patients and normal subjects using anterior segment optical coherence tomography (ASOCT) images. METHODS The study included 117 ASOCT images (19 men and 98 women; mean age, 66.6 ± 13.6 years) from 101 LDO patients and 113 ASOCT images (29 men and 84 women; mean age, 38.3 ± 19.9 years) from 71 normal subjects. We constructed and trained 9 single and 502 ensemble DL models based on 9 different network structures, and calculated the area under the curve (AUC), sensitivity, and specificity to compare their distinguishing abilities. RESULTS For the best single DL model (DenseNet169), the AUC, sensitivity, and specificity for distinguishing LDO were 0.778, 64.6%, and 72.1%, respectively. For the best ensemble DL model (VGG16, ResNet50, DenseNet121, DenseNet169, InceptionResNetV2, InceptionV3, and Xception), the AUC, sensitivity, and specificity were 0.824, 84.8%, and 58.8%, respectively. Heat maps indicated that these DL models focused on the tear meniscus region of the ASOCT images. CONCLUSION The combination of DL and ASOCT images distinguished between the tear menisci of LDO patients and normal subjects with a high level of accuracy. These results suggest that DL might be useful for automatic screening of patients for LDO.
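One common way to combine several trained networks into an ensemble, as above, is soft voting: average each image's predicted probability across the member models and threshold the mean. Whether this matches the paper's exact combination rule is an assumption; the probabilities below are invented:

```python
def ensemble_prob(model_probs):
    """Soft-voting ensemble: average each image's predicted probability
    across models, then threshold the mean at 0.5."""
    n_models = len(model_probs)
    n_images = len(model_probs[0])
    avg = [sum(m[i] for m in model_probs) / n_models for i in range(n_images)]
    return avg, [1 if p >= 0.5 else 0 for p in avg]

# Hypothetical LDO probabilities from three single models on four images
probs = [
    [0.9, 0.4, 0.6, 0.2],   # e.g. a DenseNet-style model
    [0.7, 0.6, 0.5, 0.1],   # e.g. a VGG-style model
    [0.8, 0.2, 0.4, 0.3],   # e.g. an Inception-style model
]
avg, preds = ensemble_prob(probs)
```

The 502 ensembles in the study presumably correspond to different subsets of the 9 base networks being combined this way; the best subset is then selected on validation AUC.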
Affiliation(s)
- Hitoshi Imamura
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Hitoshi Tabuchi
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Daisuke Nagasato
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Hiroki Masumoto
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Hiroaki Baba
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Hiroki Furukawa
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Sachiko Maruoka
- Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
19
Prediction of age and brachial-ankle pulse-wave velocity using ultra-wide-field pseudo-color images by deep learning. Sci Rep 2020; 10:19369. [PMID: 33168888 PMCID: PMC7652944 DOI: 10.1038/s41598-020-76513-4]
Abstract
This study examined whether age and brachial-ankle pulse-wave velocity (baPWV) can be predicted from ultra-wide-field pseudo-color (UWPC) images using deep learning (DL). We examined 170 UWPC images of both eyes of 85 participants (40 men and 45 women; mean age, 57.5 ± 20.9 years). Three types of images (total, central, and peripheral) were analyzed by k-fold cross-validation (k = 5) using the Visual Geometry Group-16 (VGG-16) network. After bias was eliminated using a generalized linear mixed model, the standard regression coefficients (SRCs) between actual and predicted age and between actual and predicted baPWV were calculated, and the prediction accuracies of the DL model for age and baPWV were examined. The SRC between actual age and the age predicted by the neural network was 0.833 for total images, 0.818 for central images, and 0.649 for peripheral images (all P < 0.001); between actual and predicted baPWV it was 0.390 for total images, 0.419 for central images, and 0.312 for peripheral images (all P < 0.001). These results show the potential of DL to predict age and vascular aging and could be useful for disease prevention and early treatment.
20
Epiretinal Membrane Detection at the Ophthalmologist Level using Deep Learning of Optical Coherence Tomography. Sci Rep 2020; 10:8424. [PMID: 32439844 PMCID: PMC7242423 DOI: 10.1038/s41598-020-65405-2]
Abstract
Purpose: Previous deep learning studies on optical coherence tomography (OCT) have mainly focused on diabetic retinopathy and age-related macular degeneration. We propose a deep learning (DL) model that identifies epiretinal membrane (ERM) in OCT with ophthalmologist-level performance. Design: Cross-sectional study. Participants: A total of 3,618 central fovea cross-section OCT images from 1,475 eyes of 964 patients. Methods: We retrospectively collected 7,652 OCT images from 1,197 patients. Of these, 2,171 were normal and 1,447 were ERM OCT images; 3,141 OCT images were used as the training dataset and 477 as the testing dataset. A DL algorithm was used to train the interpretation model. Diagnoses on the testing dataset by four board-certified, non-retinal-specialized ophthalmologists were compared with those generated by the DL model. Main Outcome Measures: Sensitivity, specificity, F1 score, and area under the receiver operating characteristic (ROC) curve (AUC) of the derived DL model, calculated against the gold standard of parallel diagnoses by a retinal specialist. The performance of the DL model was then compared with that of the non-retinal-specialized ophthalmologists. Results: For the diagnosis of ERM in OCT images, the trained DL model achieved a sensitivity of 98.7%, a specificity of 98.0%, and an F1 score of 0.945. Accuracy was 99.7% (95% CI: 99.4-99.9%) on the training dataset and 98.1% (95% CI: 96.5-99.1%) on the testing dataset, and the AUC of the ROC curve was 0.999. The DL model slightly outperformed the average non-retinal-specialized ophthalmologist. Conclusions: An ophthalmologist-level DL model was built to accurately identify ERM in OCT images. Its performance was slightly better than that of the average non-retinal-specialized ophthalmologist, and the derived model may help clinicians improve the efficiency and safety of health care in the future.
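The F1 score reported above is the harmonic mean of precision and recall, and follows directly from the confusion-matrix counts. A stdlib sketch; the counts below are invented, not the study's:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision (tp / (tp + fp))
    and recall (tp / (tp + fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts for ERM detection on a test set
f1 = f1_score(tp=148, fp=4, fn=2)
```

Equivalently, F1 = 2·tp / (2·tp + fp + fn), which makes clear that true negatives do not enter the score.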
21
Sogawa T, Tabuchi H, Nagasato D, Masumoto H, Ikuno Y, Ohsugi H, Ishitobi N, Mitamura Y. Accuracy of a deep convolutional neural network in the detection of myopic macular diseases using swept-source optical coherence tomography. PLoS One 2020; 15:e0227240. [PMID: 32298265 PMCID: PMC7161961 DOI: 10.1371/journal.pone.0227240]
Abstract
This study examined and compared outcomes of deep learning (DL) in identifying swept-source optical coherence tomography (SS-OCT) images without myopic macular lesions [i.e., no high myopia (nHM) vs. high myopia (HM)] and with myopic macular lesions [e.g., myopic choroidal neovascularization (mCNV) and retinoschisis (RS)]. A total of 910 SS-OCT images were analyzed by k-fold cross-validation (k = 5) using the well-known Visual Geometry Group-16 (VGG-16) network: nHM, 146 images; HM, 531 images; mCNV, 122 images; and RS, 111 images. We examined the binary classification of OCT images with or without myopic macular lesions; the binary classification of HM images versus images with myopic macular lesions (i.e., mCNV and RS images); and the ternary classification of HM, mCNV, and RS images. Sensitivity, specificity, and the area under the curve (AUC) were examined for the binary classifications, and the correct answer rate for the ternary classification. For OCT images with versus without myopic macular lesions, the results were: AUC, 0.970; sensitivity, 90.6%; specificity, 94.2%. For HM images versus images with myopic macular lesions: AUC, 1.000; sensitivity, 100.0%; specificity, 100.0%. The correct answer rates in the ternary classification were: HM images, 96.5%; mCNV images, 77.9%; and RS images, 67.6%; mean, 88.9%. Using noninvasive, easy-to-obtain SS-OCT images, the DL model was able to classify OCT images with and without myopic macular lesions such as mCNV and RS with high accuracy. These results suggest the possibility of highly accurate screening of ocular diseases using artificial intelligence, which may improve the prevention of blindness and reduce workloads for ophthalmologists.
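Per-class correct answer rates like those above (HM, mCNV, RS) are simply accuracies computed within each true class. A stdlib sketch computing the per-class rates plus their unweighted mean; the labels below are invented:

```python
def per_class_accuracy(y_true, y_pred, classes):
    """Correct answer rate per class, plus the unweighted mean across classes."""
    rates = {}
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        rates[c] = sum(y_pred[i] == c for i in idx) / len(idx)
    mean = sum(rates.values()) / len(rates)
    return rates, mean

# Hypothetical ternary labels: HM vs. mCNV vs. RS
y_true = ["HM"] * 4 + ["mCNV"] * 4 + ["RS"] * 4
y_pred = ["HM", "HM", "HM", "HM",
          "mCNV", "mCNV", "mCNV", "RS",
          "RS", "RS", "HM", "mCNV"]
rates, mean = per_class_accuracy(y_true, y_pred, ["HM", "mCNV", "RS"])
```

Note that an overall accuracy weighted by class size (as a mean across all images) will differ from this unweighted mean when the classes are imbalanced, as they are here (531 vs. 122 vs. 111 images).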
Affiliation(s)
- Takahiro Sogawa
- Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan
- Hitoshi Tabuchi
- Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan
- Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Daisuke Nagasato
- Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan
- Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Hiroki Masumoto
- Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan
- Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Yoshinori Mitamura
- Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
22
Deep Neural Network-Based Method for Detecting Obstructive Meibomian Gland Dysfunction With in Vivo Laser Confocal Microscopy. Cornea 2020; 39:720-725. [PMID: 32040007 DOI: 10.1097/ico.0000000000002279]
23
Severity Classification of Conjunctival Hyperaemia by Deep Neural Network Ensembles. J Ophthalmol 2019; 2019:7820971. [PMID: 31275636 PMCID: PMC6589312 DOI: 10.1155/2019/7820971]
Abstract
Conjunctival hyperaemia is a common clinical ophthalmological finding and can be a symptom of various ocular disorders. Although several severity classification criteria have been proposed, none includes objective severity criteria. Neural networks and deep learning have been utilised in ophthalmology, but not for the purpose of objectively classifying the severity of conjunctival hyperaemia. To develop conjunctival hyperaemia grading software, we used 3700 images as training data and 923 images as validation test data. We trained nine neural network models, validated their performance, and finally chose the best combination of networks. DenseNet201 was the best individual model, and the combination of DenseNet201, DenseNet121, VGG19, and ResNet50 was the best ensemble. The correlation between the multimodel responses and the occupied vessel area was 0.737 (p < 0.01). This system could be as accurate and comprehensive as specialists while being significantly faster and providing consistent, objective values.