1. Heidari Z, Hashemi H, Sotude D, Ebrahimi-Besheli K, Khabazkhoob M, Soleimani M, Djalilian AR, Yousefi S. Applications of Artificial Intelligence in Diagnosis of Dry Eye Disease: A Systematic Review and Meta-Analysis. Cornea 2024;43:1310-1318. PMID: 38984532. DOI: 10.1097/ico.0000000000003626.
Abstract
PURPOSE Clinical diagnosis of dry eye disease is based on a subjective Ocular Surface Disease Index questionnaire or various objective tests; however, these diagnostic methods have several limitations. METHODS We conducted a comprehensive review of articles discussing various applications of artificial intelligence (AI) models in the diagnosis of dry eye disease by searching the PubMed, Web of Science, Scopus, and Google Scholar databases up to December 2022. We initially retrieved 2838 articles; after removing duplicates and applying inclusion and exclusion criteria based on title and abstract, we selected 47 eligible full-text articles, and after applying inclusion and exclusion criteria to the full texts, we ultimately included 17 articles in the meta-analysis. We used the Standards for Reporting of Diagnostic Accuracy Studies to evaluate the quality of the methodologies used in the included studies. The performance criteria for measuring the effectiveness of AI models included area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy. We calculated the pooled estimate of accuracy using a random-effects model. RESULTS The pooled estimate of accuracy across all studies was 91.91% (95% confidence interval: 87.46-95.49). The mean (±SD) AUC, sensitivity, and specificity were 94.1 (±5.14), 89.58 (±6.13), and 92.62 (±6.61), respectively. CONCLUSIONS AI models are accurate in diagnosing dry eye disease from several imaging modalities, suggesting that they are promising tools for augmenting dry eye clinics and assisting physicians in diagnosing this ocular surface condition.
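The pooled accuracy above comes from a random-effects meta-analysis. As an illustrative sketch (not the authors' code), DerSimonian-Laird pooling of per-study estimates looks like this; the study values in the usage line are hypothetical:

```python
import math

def dersimonian_laird(estimates, variances):
    """Pool per-study estimates with a DerSimonian-Laird random-effects model.

    estimates: per-study effect sizes (e.g. accuracy as a proportion)
    variances: within-study variances of those estimates
    Returns the pooled estimate and a 95% confidence interval.
    """
    w = [1.0 / v for v in variances]                     # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)      # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]       # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study accuracies and variances, for illustration only:
pooled, ci = dersimonian_laird([0.92, 0.88, 0.95], [0.001, 0.002, 0.0015])
```

The between-study variance tau² widens the confidence interval when the studies disagree more than their within-study variances can explain.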
Affiliation(s)
- Zahra Heidari
- Psychiatry and Behavioral Sciences Research Center, Mazandaran University of Medical Sciences, Sari, Iran
- Functional Neurosurgery Research Center, Shohada Tajrish Comprehensive Neurosurgical Center of Excellence, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Hassan Hashemi
- Noor Ophthalmology Research Center, Noor Eye Hospital, Tehran, Iran
- Danial Sotude
- Psychiatry and Behavioral Sciences Research Center, Mazandaran University of Medical Sciences, Sari, Iran
- Kiana Ebrahimi-Besheli
- Cellular and Molecular Research Center, Tehran University of Medical Sciences, Tehran, Iran
- Mehdi Khabazkhoob
- Department of Medical Surgical Nursing, School of Nursing and Midwifery, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mohammad Soleimani
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL
- Ali R Djalilian
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN
2. Fan J, Yang T, Wang H, Zhang H, Zhang W, Ji M, Miao J. A Self-Supervised Equivariant Refinement Classification Network for Diabetic Retinopathy Classification. Journal of Imaging Informatics in Medicine 2024. PMID: 39299958. DOI: 10.1007/s10278-024-01270-z.
Abstract
Diabetic retinopathy (DR) is a retinal disease caused by diabetes; without intervention it can lead to blindness, so detecting DR is of great significance for preventing blindness in patients. Most existing DR detection methods are supervised and usually require a large number of accurate pixel-level annotations. To address this problem, we propose a self-supervised Equivariant Refinement Classification Network (ERCN) for DR classification. First, we use an unsupervised contrastive pre-training network to learn a more generalized representation. Second, the class activation map (CAM) is refined by self-supervised learning: a spatial masking method suppresses low-confidence predictions, and feature similarity between pixels then encourages fine-grained activation for more accurate lesion localization. We propose a hybrid equivariant regularization loss to alleviate the degradation caused by local minima in the CAM refinement process. To further improve classification accuracy, we propose attention-based multiple-instance learning (MIL), which weights each element of the feature map as an instance and is more effective than the traditional patch-based instance extraction method. We evaluated our method on the EyePACS and DAVIS datasets, achieving 87.4% test accuracy on EyePACS and 88.7% on DAVIS, which shows that the proposed method outperforms other state-of-the-art self-supervised DR detection methods.
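The attention-based MIL idea — score every feature-map element, softmax the scores into attention weights, and pool — can be sketched as follows. This is a generic tanh-attention pooling in the spirit of the abstract, not the ERCN code; V and w are hypothetical learned parameters:

```python
import math

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling: each feature vector is an instance;
    a tanh attention head scores it, softmax turns the scores into
    weights, and the bag embedding is the weighted sum of instances."""
    def matvec(M, x):
        return [sum(m * xi for m, xi in zip(row, x)) for row in M]

    scores = []
    for h in instances:
        hidden = [math.tanh(v) for v in matvec(V, h)]             # tanh(V h)
        scores.append(sum(wi * hi for wi, hi in zip(w, hidden)))  # w . tanh(V h)
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]                      # stable softmax
    attn = [e / sum(exps) for e in exps]
    dim = len(instances[0])
    bag = [sum(a * h[d] for a, h in zip(attn, instances)) for d in range(dim)]
    return bag, attn

# Hypothetical 2-D instances and parameters, for illustration:
V = [[1.0, 0.0], [0.0, 1.0]]
w = [1.0, 1.0]
bag, attn = attention_mil_pool([[1.0, 0.0], [0.0, 0.0], [3.0, 0.0]], V, w)
```

In a real network the instances would be the spatial elements of a CNN feature map and V, w would be learned end to end with the classifier.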
Affiliation(s)
- Jiacheng Fan
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, 450001, China
- Tiejun Yang
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, 450001, China
- Key Laboratory of Grain Information Processing and Control (HAUT), Ministry of Education, Zhengzhou, China
- Henan Key Laboratory of Grain Photoelectric Detection and Control (HAUT), 100 Lianhua Street, High-Tech Zone, Zhengzhou, 450001, Henan, China
- Heng Wang
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, 450001, China
- Huiyao Zhang
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, 450001, China
- Wenjie Zhang
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, 450001, China
- Mingzhu Ji
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, 450001, China
- Jianyu Miao
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, 450001, China
3. Feng X, Xu K, Luo MJ, Chen H, Yang Y, He Q, Song C, Li R, Wu Y, Wang H, Tham YC, Ting DSW, Lin H, Wong TY, Lam DSC. Latest developments of generative artificial intelligence and applications in ophthalmology. Asia Pac J Ophthalmol (Phila) 2024;13:100090. PMID: 39128549. DOI: 10.1016/j.apjo.2024.100090.
Abstract
The emergence of generative artificial intelligence (AI) has revolutionized various fields. In ophthalmology, generative AI has the potential to enhance efficiency, accuracy, personalization, and innovation in clinical practice and medical research by processing data, streamlining medical documentation, facilitating patient-doctor communication, aiding in clinical decision-making, and simulating clinical trials. This review focuses on the development and integration of generative AI models into the clinical workflows and scientific research of ophthalmology. It outlines the need for a standard framework for comprehensive assessments, robust evidence, and exploration of the potential of multimodal capabilities and intelligent agents. Additionally, the review addresses the risks of AI model development and application in ophthalmic clinical service and research, including data privacy, data bias, adaptation friction, overdependence, and job replacement, and summarizes a risk-management framework to mitigate these concerns. This review highlights the transformative potential of generative AI for enhancing patient care and improving operational efficiency in ophthalmic clinical service and research, and it advocates for a balanced approach to adoption.
Affiliation(s)
- Xiaoru Feng
- School of Biomedical Engineering, Tsinghua Medicine, Tsinghua University, Beijing, China; Institute for Hospital Management, Tsinghua Medicine, Tsinghua University, Beijing, China
- Kezheng Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Ming-Jie Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Haichao Chen
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China
- Yangfan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Qi He
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Chenxin Song
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Ruiyao Li
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- You Wu
- Institute for Hospital Management, Tsinghua Medicine, Tsinghua University, Beijing, China; School of Basic Medical Sciences, Tsinghua Medicine, Tsinghua University, Beijing, China; Department of Health Policy and Management, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA
- Haibo Wang
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Yih Chung Tham
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore; Byers Eye Institute, Stanford University, Palo Alto, CA, USA
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, China
- Tien Yin Wong
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China
- Dennis Shun-Chiu Lam
- The International Eye Research Institute, The Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER International Eye Care Group, Hong Kong, Hong Kong, China
4. Tiosano L, Abutbul R, Lender R, Shwartz Y, Chowers I, Hoshen Y, Levy J. Anomaly Detection and Biomarkers Localization in Retinal Images. J Clin Med 2024;13:3093. PMID: 38892804. PMCID: PMC11173078. DOI: 10.3390/jcm13113093.
Abstract
Background: To design a novel anomaly detection and localization approach for retinal diseases using artificial intelligence methods on optical coherence tomography (OCT) scans. Methods: High-resolution OCT scans from the publicly available Kaggle dataset and a local dataset were used by four state-of-the-art self-supervised frameworks. The backbone model of all the frameworks was a pre-trained convolutional neural network (CNN), which enabled the extraction of meaningful features from OCT images. Anomalous images included choroidal neovascularization (CNV), diabetic macular edema (DME), and the presence of drusen. Anomaly detectors were evaluated by commonly accepted performance metrics, including area under the receiver operating characteristic curve, F1 score, and accuracy. Results: A total of 25,315 high-resolution retinal OCT slabs were used for training. The test and validation sets consisted of 968 and 4000 slabs, respectively. The best-performing anomaly detector achieved an area under the receiver operating characteristic curve of 0.99. All frameworks achieved high performance and generalized well across the different retinal diseases. Heat maps were generated to visualize how well the frameworks localized anomalous areas of the image. Conclusions: This study shows that, with the use of pre-trained feature extractors, the frameworks tested can generalize to the domain of retinal OCT scans and achieve high image-level ROC-AUC scores. The localization results of these frameworks are promising and successfully capture areas that indicate the presence of retinal pathology. Moreover, such frameworks have the potential to uncover new biomarkers that are difficult for the human eye to detect. Frameworks for anomaly detection and localization can potentially be integrated into clinical decision support and automatic screening systems that will aid ophthalmologists in patient diagnosis, follow-up, and treatment design. This work establishes a solid basis for further development of automated anomaly detection frameworks for clinical use.
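The common recipe these frameworks share — extract features with a pre-trained backbone, then score a test image by its distance to features of normal training images — can be illustrated with a simple k-nearest-neighbour detector (a stand-in for the idea, not one of the four frameworks evaluated):

```python
import math

def knn_anomaly_score(feat, normal_feats, k=3):
    """Anomaly score = mean Euclidean distance from a test feature vector
    to its k nearest neighbours among features of normal training images."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(dist(feat, f) for f in normal_feats)[:k]
    return sum(nearest) / len(nearest)

# Toy example: features of "normal" images cluster near the origin.
normal = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]]
in_dist = knn_anomaly_score([0.05, 0.05], normal)
anomalous = knn_anomaly_score([5.0, 5.0], normal)
```

A detection threshold would be chosen on a validation set; scoring patch-level features the same way yields the kind of localization heat maps described above.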
Affiliation(s)
- Liran Tiosano
- Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Hadassah School of Medicine, Hebrew University, Jerusalem 9574409, Israel
- Ron Abutbul
- School of Computer Science and Engineering, Hebrew University of Jerusalem, Jerusalem 9574409, Israel
- Rivkah Lender
- Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Hadassah School of Medicine, Hebrew University, Jerusalem 9574409, Israel
- Yahel Shwartz
- Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Hadassah School of Medicine, Hebrew University, Jerusalem 9574409, Israel
- Itay Chowers
- Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Hadassah School of Medicine, Hebrew University, Jerusalem 9574409, Israel
- Yedid Hoshen
- Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Hadassah School of Medicine, Hebrew University, Jerusalem 9574409, Israel
- Jaime Levy
- Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Hadassah School of Medicine, Hebrew University, Jerusalem 9574409, Israel
5. Hojjati H, Ho TKK, Armanfard N. Self-supervised anomaly detection in computer vision and beyond: A survey and outlook. Neural Netw 2024;172:106106. PMID: 38232432. DOI: 10.1016/j.neunet.2024.106106.
Abstract
Anomaly detection (AD) plays a crucial role in various domains, including cybersecurity, finance, and healthcare, by identifying patterns or events that deviate from normal behavior. In recent years, significant progress has been made in this field due to the remarkable growth of deep learning models. Notably, the advent of self-supervised learning has sparked the development of novel AD algorithms that outperform the existing state-of-the-art approaches by a considerable margin. This paper aims to provide a comprehensive review of the current methodologies in self-supervised anomaly detection. We present technical details of the standard methods and discuss their strengths and drawbacks. We also compare the performance of these models against each other and other state-of-the-art anomaly detection models. Finally, the paper concludes with a discussion of future directions for self-supervised anomaly detection, including the development of more effective and efficient algorithms and the integration of these techniques with other related fields, such as multi-modal learning.
Affiliation(s)
- Hadi Hojjati
- Department of Electrical and Computer Engineering, McGill University, Montreal, QC, Canada; Mila - Quebec AI Institute, Montreal, QC, Canada
- Thi Kieu Khanh Ho
- Department of Electrical and Computer Engineering, McGill University, Montreal, QC, Canada; Mila - Quebec AI Institute, Montreal, QC, Canada
- Narges Armanfard
- Department of Electrical and Computer Engineering, McGill University, Montreal, QC, Canada; Mila - Quebec AI Institute, Montreal, QC, Canada
6. Yan Y, Huang X, Jiang X, Gao Z, Liu X, Jin K, Ye J. Clinical evaluation of deep learning systems for assisting in the diagnosis of the epiretinal membrane grade in general ophthalmologists. Eye (Lond) 2024;38:730-736. PMID: 37848677. PMCID: PMC10920879. DOI: 10.1038/s41433-023-02765-9.
Abstract
BACKGROUND Epiretinal membrane (ERM) is a common age-related retinal disease detected by optical coherence tomography (OCT), with a prevalence of 34.1% among people over 60 years old. This study aims to develop artificial intelligence (AI) systems to assist in the diagnosis of ERM grade using OCT images and to clinically evaluate the potential benefits and risks of our AI systems with a comparative experiment. METHODS A segmentation deep learning (DL) model that segments retinal features associated with ERM severity and a classification DL model that grades ERM severity were developed on an OCT dataset obtained from three hospitals. A comparative experiment was conducted to compare the performance of four general ophthalmologists with and without AI assistance in diagnosing ERM severity. RESULTS The segmentation network had a pixel accuracy (PA) of 0.980 and a mean intersection over union (MIoU) of 0.873, while the six-class classification network had a total accuracy of 81.3%. With AI assistance, the diagnostic accuracy scores of the four ophthalmologists increased from 81.7%, 80.7%, 78.0%, and 80.7% to 87.7%, 86.7%, 89.0%, and 91.3%, respectively, while the corresponding time expenditures were reduced. We analysed the specific results of the study as well as the AI systems' misinterpretations. CONCLUSION Through our comparative experiment, the AI systems proved to be valuable references for medical diagnosis and demonstrated the potential to accelerate clinical workflows. Systematic efforts are needed to ensure the safe and rapid integration of AI systems into ophthalmic practice.
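Pixel accuracy (PA) and mean intersection over union (MIoU), the segmentation metrics reported above, can be computed from flattened label maps as follows (an illustrative sketch, not the study's evaluation code):

```python
def pixel_accuracy_and_miou(pred, target, num_classes):
    """pred/target: flattened per-pixel class labels of equal length.
    PA is the fraction of correctly labelled pixels; MIoU averages
    intersection-over-union across the classes present in either map."""
    pa = sum(p == t for p, t in zip(pred, target)) / len(target)
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and t == c for p, t in zip(pred, target))
        union = sum(p == c or t == c for p, t in zip(pred, target))
        if union:                      # skip classes absent from both maps
            ious.append(inter / union)
    return pa, sum(ious) / len(ious)

# Tiny 4-pixel example: 3 of 4 pixels correct.
pa, miou = pixel_accuracy_and_miou([0, 1, 1, 0], [0, 1, 0, 0], num_classes=2)
```

MIoU is typically the harsher metric: a class that is over- or under-segmented drags its IoU down even when overall pixel accuracy stays high.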
Affiliation(s)
- Yan Yan
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Xiaoling Huang
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Xiaoyu Jiang
- College of Control Science and Engineering, Zhejiang University, Hangzhou, 310027, China
- Zhiyuan Gao
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Xindi Liu
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Kai Jin
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Juan Ye
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
7. Dow ER, Khan NC, Chen KM, Mishra K, Perera C, Narala R, Basina M, Dang J, Kim M, Levine M, Phadke A, Tan M, Weng K, Do DV, Moshfeghi DM, Mahajan VB, Mruthyunjaya P, Leng T, Myung D. AI-Human Hybrid Workflow Enhances Teleophthalmology for the Detection of Diabetic Retinopathy. Ophthalmology Science 2023;3:100330. PMID: 37449051. PMCID: PMC10336195. DOI: 10.1016/j.xops.2023.100330.
Abstract
Objective: Detection of diabetic retinopathy (DR) outside of specialized eye care settings is an important means of access to vision-preserving health maintenance. Remote interpretation of fundus photographs acquired in a primary care or other nonophthalmic setting in a store-and-forward manner is a predominant paradigm of teleophthalmology screening programs. Artificial intelligence (AI)-based image interpretation offers an alternative means of DR detection. IDx-DR (Digital Diagnostics Inc) is a Food and Drug Administration-authorized autonomous testing device for DR. We evaluated the diagnostic performance of IDx-DR compared with human-based teleophthalmology over two and a half years. Additionally, we evaluated an AI-human hybrid workflow that combines AI-system evaluation with human expert-based assessment for referable cases. Design: Prospective cohort study and retrospective analysis. Participants: Diabetic patients ≥ 18 years old without a prior DR diagnosis or DR examination in the past year presenting for routine DR screening in a primary care clinic. Methods: Macula-centered and optic nerve-centered fundus photographs were evaluated by an AI algorithm followed by consensus-based overreading by retina specialists at the Stanford Ophthalmic Reading Center. Detection of more-than-mild diabetic retinopathy (MTMDR) was compared with in-person examination by a retina specialist. Main Outcome Measures: Sensitivity, specificity, accuracy, positive predictive value, and gradability achieved by the AI algorithm and retina specialists. Results: The AI algorithm had higher sensitivity (95.5%; 95% confidence interval [CI], 86.7%-100%) but lower specificity (60.3%; 95% CI, 47.7%-72.9%) for detection of MTMDR compared with remote image interpretation by retina specialists (sensitivity 69.5%; 95% CI, 50.7%-88.3%; specificity 96.9%; 95% CI, 93.5%-100%). Gradability of encounters was also lower for the AI algorithm (62.5%) than for retina specialists (93.1%). A two-step AI-human hybrid workflow, in which the AI algorithm rendered an initial assessment and a retina specialist then overread MTMDR-positive encounters, achieved a sensitivity of 95.5% (95% CI, 86.7%-100%) and a specificity of 98.2% (95% CI, 94.6%-100%). Similarly, a two-step overread by retina specialists of AI-ungradable encounters improved gradability from 63.5% to 95.6% of encounters. Conclusions: Implementation of an AI-human hybrid teleophthalmology workflow may both decrease reliance on human specialist effort and improve diagnostic accuracy.
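The two-step workflow acts as a serial combination of tests: an encounter is finally called positive only if the AI flags it and the overreading specialist confirms. Under a simplifying independence assumption the combined operating point can be sketched as below; note that in the study the specialists confirmed essentially all AI true positives, so the observed sensitivity stayed at 95.5% rather than dropping as the independence model would predict:

```python
def serial_positive_confirmation(sens_ai, spec_ai, sens_reader, spec_reader):
    """Operating point of a two-step screen where a reader overreads only
    AI-positive cases, assuming the two err independently (a simplification)."""
    sens = sens_ai * sens_reader                    # both must call it positive
    spec = 1 - (1 - spec_ai) * (1 - spec_reader)    # either negative clears it
    return sens, spec

# Plugging in the standalone operating points reported above:
sens, spec = serial_positive_confirmation(0.955, 0.603, 0.695, 0.969)
```

The specificity side of the model (~0.988) lands close to the reported 98.2%, illustrating why overreading AI positives recovers most of the specificity the AI gives up.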
Affiliation(s)
- Eliot R. Dow
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Veterans Affairs Palo Alto Health Care System, Palo Alto, California
- Nergis C. Khan
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Karen M. Chen
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Kapil Mishra
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Chandrashan Perera
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Ramsudha Narala
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Marina Basina
- Stanford Healthcare, Stanford University, Palo Alto, California
- Jimmy Dang
- Stanford Healthcare, Stanford University, Palo Alto, California
- Michael Kim
- Stanford Healthcare, Stanford University, Palo Alto, California
- Marcie Levine
- Stanford Healthcare, Stanford University, Palo Alto, California
- Anuradha Phadke
- Stanford Healthcare, Stanford University, Palo Alto, California
- Marilyn Tan
- Stanford Healthcare, Stanford University, Palo Alto, California
- Kirsti Weng
- Stanford Healthcare, Stanford University, Palo Alto, California
- Diana V. Do
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Darius M. Moshfeghi
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Vinit B. Mahajan
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Veterans Affairs Palo Alto Health Care System, Palo Alto, California
- Prithvi Mruthyunjaya
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Theodore Leng
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- David Myung
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
8. Wang M, Lin T, Wang L, Lin A, Zou K, Xu X, Zhou Y, Peng Y, Meng Q, Qian Y, Deng G, Wu Z, Chen J, Lin J, Zhang M, Zhu W, Zhang C, Zhang D, Goh RSM, Liu Y, Pang CP, Chen X, Chen H, Fu H. Uncertainty-inspired open set learning for retinal anomaly identification. Nat Commun 2023;14:6757. PMID: 37875484. PMCID: PMC10598011. DOI: 10.1038/s41467-023-42444-7.
Abstract
Failure to recognize samples from classes unseen during training is a major limitation of artificial intelligence in real-world recognition and classification of retinal anomalies. We establish an uncertainty-inspired open set (UIOS) model trained with fundus images of 9 retinal conditions. Besides assessing the probability of each category, UIOS also calculates an uncertainty score to express its confidence. With a thresholding strategy, our UIOS model achieves F1 scores of 99.55%, 97.01% and 91.91% on the internal testing set, the external target categories (TC)-JSIEC dataset and the TC-unseen testing set, respectively, compared with 92.20%, 80.69% and 64.74% for the standard AI model. Furthermore, UIOS correctly predicts high uncertainty scores, which would prompt a manual check, on datasets of non-target-category retinal diseases, low-quality fundus images, and non-fundus images. UIOS provides a robust method for real-world screening of retinal anomalies.
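UIOS's thresholding strategy can be illustrated with a generic uncertainty score; here the normalized entropy of the softmax output stands in for the paper's evidential uncertainty (an assumption for illustration, not the published formulation):

```python
import math

def route_with_uncertainty(probs, threshold):
    """Return the predicted class index, or 'manual check' when the
    normalized entropy of the class probabilities exceeds the threshold."""
    h = -sum(p * math.log(p) for p in probs if p > 0)
    u = h / math.log(len(probs))          # normalized entropy in [0, 1]
    if u > threshold:
        return "manual check", u
    return probs.index(max(probs)), u

confident = route_with_uncertainty([0.97, 0.01, 0.01, 0.01], threshold=0.5)
ambiguous = route_with_uncertainty([0.25, 0.25, 0.25, 0.25], threshold=0.5)
```

The routing behaviour is the point: confident in-distribution predictions pass through automatically, while flat, uncertain distributions (as open-set inputs tend to produce) are deferred to a human grader.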
Affiliation(s)
- Meng Wang
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Tian Lin
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
- Lianyu Wang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, 211100, Nanjing, Jiangsu, China
- Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics, 211106, Nanjing, Jiangsu, China
- Aidi Lin
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
- Ke Zou
- National Key Laboratory of Fundamental Science on Synthetic Vision and the College of Computer Science, Sichuan University, 610065, Chengdu, Sichuan, China
- Xinxing Xu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Yi Zhou
- School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China
- Yuanyuan Peng
- School of Biomedical Engineering, Anhui Medical University, 230032, Hefei, Anhui, China
- Qingquan Meng
- School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China
- Yiming Qian
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Guoyao Deng
- National Key Laboratory of Fundamental Science on Synthetic Vision and the College of Computer Science, Sichuan University, 610065, Chengdu, Sichuan, China
- Zhiqun Wu
- Longchuan People's Hospital, 517300, Heyuan, Guangdong, China
- Junhong Chen
- Puning People's Hospital, 515300, Jieyang, Guangdong, China
- Jianhong Lin
- Haifeng PengPai Memory Hospital, 516400, Shanwei, Guangdong, China
- Mingzhi Zhang
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
- Weifang Zhu
- School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China
- Changqing Zhang
- College of Intelligence and Computing, Tianjin University, 300350, Tianjin, China
- Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, 211100, Nanjing, Jiangsu, China
- Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics, 211106, Nanjing, Jiangsu, China
- Rick Siow Mong Goh
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Yong Liu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Chi Pui Pang
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, 999077, Hong Kong, China
- Xinjian Chen
- School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China
- State Key Laboratory of Radiation Medicine and Protection, Soochow University, 215006, Suzhou, China
- Haoyu Chen
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
- Huazhu Fu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
| |
|
9
|
Zhou Y, Chia MA, Wagner SK, Ayhan MS, Williamson DJ, Struyven RR, Liu T, Xu M, Lozano MG, Woodward-Court P, Kihara Y, Altmann A, Lee AY, Topol EJ, Denniston AK, Alexander DC, Keane PA. A foundation model for generalizable disease detection from retinal images. Nature 2023; 622:156-163. [PMID: 37704728 PMCID: PMC10550819 DOI: 10.1038/s41586-023-06555-x] [Citation(s) in RCA: 102] [Impact Index Per Article: 102.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Accepted: 08/18/2023] [Indexed: 09/15/2023]
Abstract
Medical artificial intelligence (AI) offers great potential for recognizing signs of health conditions in retinal images and expediting the diagnosis of eye diseases and systemic disorders1. However, the development of AI models requires substantial annotation and models are usually task-specific with limited generalizability to different clinical applications2. Here, we present RETFound, a foundation model for retinal images that learns generalizable representations from unlabelled retinal images and provides a basis for label-efficient model adaptation in several applications. Specifically, RETFound is trained on 1.6 million unlabelled retinal images by means of self-supervised learning and then adapted to disease detection tasks with explicit labels. We show that adapted RETFound consistently outperforms several comparison models in the diagnosis and prognosis of sight-threatening eye diseases, as well as incident prediction of complex systemic disorders such as heart failure and myocardial infarction with fewer labelled data. RETFound provides a generalizable solution to improve model performance and alleviate the annotation workload of experts to enable broad clinical AI applications from retinal imaging.
Affiliation(s)
- Yukun Zhou
- Centre for Medical Image Computing, University College London, London, UK.
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK.
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK.
| | - Mark A Chia
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Siegfried K Wagner
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Murat S Ayhan
- Centre for Medical Image Computing, University College London, London, UK
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Dominic J Williamson
- Centre for Medical Image Computing, University College London, London, UK
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Robbert R Struyven
- Centre for Medical Image Computing, University College London, London, UK
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Timing Liu
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - Moucheng Xu
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| | - Mateo G Lozano
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Department of Computer Science, University of Coruña, A Coruña, Spain
| | - Peter Woodward-Court
- Centre for Medical Image Computing, University College London, London, UK
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Health Informatics, University College London, London, UK
| | - Yuka Kihara
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, WA, USA
| | - Andre Altmann
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| | - Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, WA, USA
| | - Eric J Topol
- Department of Molecular Medicine, Scripps Research, La Jolla, CA, USA
| | - Alastair K Denniston
- Academic Unit of Ophthalmology, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
| | - Daniel C Alexander
- Centre for Medical Image Computing, University College London, London, UK
- Department of Computer Science, University College London, London, UK
| | - Pearse A Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK.
- Institute of Ophthalmology, University College London, London, UK.
| |
|
10
|
Paul W, Burlina P, Mocharla R, Joshi N, Li Z, Gu S, Nanegrungsunk O, Lin K, Bressler SB, Cai CX, Kong J, Liu TYA, Moini H, Du W, Amer F, Chu K, Vitti R, Sepehrband F, Bressler NM. Accuracy of Artificial Intelligence in Estimating Best-Corrected Visual Acuity From Fundus Photographs in Eyes With Diabetic Macular Edema. JAMA Ophthalmol 2023; 141:677-685. [PMID: 37289463 PMCID: PMC10251243 DOI: 10.1001/jamaophthalmol.2023.2271] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Accepted: 04/17/2023] [Indexed: 06/09/2023]
Abstract
Importance Best-corrected visual acuity (BCVA) is a measure used to manage diabetic macular edema (DME), sometimes suggesting development of DME or consideration of initiating, repeating, withholding, or resuming treatment with anti-vascular endothelial growth factor. Using artificial intelligence (AI) to estimate BCVA from fundus images could help clinicians manage DME by reducing the personnel needed for refraction, the time presently required for assessing BCVA, or even the number of office visits if imaged remotely. Objective To evaluate the potential application of AI techniques for estimating BCVA from fundus photographs with and without ancillary information. Design, Setting, and Participants Deidentified color fundus images taken after dilation were used post hoc to train AI systems to perform regression from image to BCVA and to evaluate resultant estimation errors. Participants were patients enrolled in the VISTA randomized clinical trial through 148 weeks wherein the study eye was treated with aflibercept or laser. The data from study participants included macular images, clinical information, and BCVA scores by trained examiners following protocol refraction and VA measurement on Early Treatment Diabetic Retinopathy Study (ETDRS) charts. Main Outcomes Primary outcome was regression evaluated by mean absolute error (MAE); the secondary outcome included percentage of predictions within 10 letters, computed over the entire cohort as well as over subsets categorized by baseline BCVA, determined from baseline through the 148-week visit. Results Analysis included 7185 macular color fundus images of the study and fellow eyes from 459 participants. Overall, the mean (SD) age was 62.2 (9.8) years, and 250 (54.5%) were male. The baseline BCVA score for the study eyes ranged from 73 to 24 letters (approximate Snellen equivalent 20/40 to 20/320). 
Using ResNet50 architecture, the MAE for the testing set (n = 641 images) was 9.66 (95% CI, 9.05-10.28); 33% of the values (95% CI, 30%-37%) were within 0 to 5 letters and 28% (95% CI, 25%-32%) within 6 to 10 letters. For BCVA of 100 letters or less but more than 80 letters (20/10 to 20/25, n = 161) and 80 letters or less but more than 55 letters (20/32 to 20/80, n = 309), the MAE was 8.84 letters (95% CI, 7.88-9.81) and 7.91 letters (95% CI, 7.28-8.53), respectively. Conclusions and Relevance This investigation suggests AI can estimate BCVA directly from fundus photographs in patients with DME, without refraction or subjective visual acuity measurements, often within 1 to 2 lines on an ETDRS chart, supporting this AI concept if additional improvements in estimates can be achieved.
Affiliation(s)
- William Paul
- Applied Physics Laboratory, Johns Hopkins University, Laurel, Maryland
| | - Philippe Burlina
- Applied Physics Laboratory, Johns Hopkins University, Laurel, Maryland
- Department of Computer Science and Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Maryland
- Zoox, Foster City, California
| | - Rohita Mocharla
- Applied Physics Laboratory, Johns Hopkins University, Laurel, Maryland
| | - Neil Joshi
- Applied Physics Laboratory, Johns Hopkins University, Laurel, Maryland
| | - Zhuolin Li
- Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland
| | - Sophie Gu
- Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York–Presbyterian Hospital, New York, New York
| | - Onnisa Nanegrungsunk
- Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Department of Ophthalmology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
| | - Kira Lin
- Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Ruiz Department of Ophthalmology and Visual Science at McGovern Medical School at UTHealth Houston, Houston, Texas
| | - Susan B. Bressler
- Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland
| | - Cindy X. Cai
- Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland
| | - Jun Kong
- Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland
| | - T. Y. Alvin Liu
- Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland
| | - Hadi Moini
- Regeneron Pharmaceuticals Inc, Tarrytown, New York
| | - Weiming Du
- Regeneron Pharmaceuticals Inc, Tarrytown, New York
| | - Fouad Amer
- Regeneron Pharmaceuticals Inc, Tarrytown, New York
| | - Karen Chu
- Regeneron Pharmaceuticals Inc, Tarrytown, New York
| | - Robert Vitti
- Regeneron Pharmaceuticals Inc, Tarrytown, New York
| | | | - Neil M. Bressler
- Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Editor, JAMA Ophthalmology
| |
|
11
|
Veturi YA, Woof W, Lazebnik T, Moghul I, Woodward-Court P, Wagner SK, Cabral de Guimarães TA, Daich Varela M, Liefers B, Patel PJ, Beck S, Webster AR, Mahroo O, Keane PA, Michaelides M, Balaskas K, Pontikos N. SynthEye: Investigating the Impact of Synthetic Data on Artificial Intelligence-assisted Gene Diagnosis of Inherited Retinal Disease. Ophthalmology Science 2023; 3:100258. [PMID: 36685715 PMCID: PMC9852957 DOI: 10.1016/j.xops.2022.100258] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/30/2022] [Revised: 11/08/2022] [Accepted: 11/09/2022] [Indexed: 11/23/2022]
Abstract
Purpose Rare disease diagnosis is challenging in medical image-based artificial intelligence due to a natural class imbalance in datasets, leading to biased prediction models. Inherited retinal diseases (IRDs) are a research domain that particularly faces this issue. This study investigates the applicability of synthetic data in improving artificial intelligence-enabled diagnosis of IRDs using generative adversarial networks (GANs). Design Diagnostic study of gene-labeled fundus autofluorescence (FAF) IRD images using deep learning. Participants Moorfields Eye Hospital (MEH) dataset of 15 692 FAF images obtained from 1800 patients with confirmed genetic diagnosis of 1 of 36 IRD genes. Methods A StyleGAN2 model is trained on the IRD dataset to generate 512 × 512 resolution images. Convolutional neural networks are trained for classification using different synthetically augmented datasets, including real IRD images plus 1800 and 3600 synthetic images, and a fully rebalanced dataset. We also perform an experiment with only synthetic data. All models are compared against a baseline convolutional neural network trained only on real data. Main Outcome Measures We evaluated synthetic data quality using a Visual Turing Test conducted with 4 ophthalmologists from MEH. Synthetic and real images were compared using feature space visualization, similarity analysis to detect memorized images, and Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) score for no-reference-based quality evaluation. Convolutional neural network diagnostic performance was determined on a held-out test set using the area under the receiver operating characteristic curve (AUROC) and Cohen's Kappa (κ). Results An average true recognition rate of 63% and fake recognition rate of 47% was obtained from the Visual Turing Test. Thus, a considerable proportion of the synthetic images were classified as real by clinical experts. 
Similarity analysis showed that the synthetic images were not copies of the real images, indicating that the GAN was able to generalize rather than memorize its training data. However, BRISQUE score analysis indicated that synthetic images were of significantly lower quality overall than real images (P < 0.05). Comparing the rebalanced model (RB) with the baseline (R), no significant change in the average AUROC and κ was found (R-AUROC = 0.86 [0.85-0.88], RB-AUROC = 0.88 [0.86-0.89], R-κ = 0.51 [0.49-0.53], and RB-κ = 0.52 [0.50-0.54]). The model trained only on synthetic data (S) achieved performance similar to the baseline (S-AUROC = 0.86 [0.85-0.87], S-κ = 0.48 [0.46-0.50]). Conclusions Synthetic generation of realistic IRD FAF images is feasible. Synthetic data augmentation does not deliver improvements in classification performance. However, synthetic data alone deliver performance similar to real data and hence may be useful as a proxy for real data. Financial Disclosure(s): Proprietary or commercial disclosure may be found after the references.
Key Words
- AUROC, area under the receiver operating characteristic curve
- BRISQUE, Blind/Referenceless Image Spatial Quality Evaluator
- Class imbalance
- Clinical Decision-Support Model
- DL, deep learning
- Deep Learning
- FAF, fundus autofluorescence
- FRR, Fake Recognition Rate
- GAN, generative adversarial network
- Generative Adversarial Networks
- IRD, inherited retinal disease
- Inherited Retinal Diseases
- MEH, Moorfields Eye Hospital
- R, baseline model
- RB, rebalanced model
- S, synthetic data trained model
- Synthetic data
- TRR, True Recognition Rate
- UMAP, Uniform Manifold Approximation and Projection
Affiliation(s)
- Yoga Advaith Veturi
- University College London Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital, London, UK
| | - William Woof
- University College London Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital, London, UK
| | - Teddy Lazebnik
- University College London Cancer Institute, University College London, London, UK
| | | | - Peter Woodward-Court
- University College London Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital, London, UK
| | - Siegfried K. Wagner
- University College London Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital, London, UK
| | | | - Malena Daich Varela
- University College London Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital, London, UK
| | | | | | - Stephan Beck
- University College London Cancer Institute, University College London, London, UK
| | - Andrew R. Webster
- University College London Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital, London, UK
| | - Omar Mahroo
- University College London Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital, London, UK
| | - Pearse A. Keane
- University College London Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital, London, UK
| | - Michel Michaelides
- University College London Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital, London, UK
| | - Konstantinos Balaskas
- University College London Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital, London, UK
| | - Nikolas Pontikos
- University College London Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital, London, UK
| |
|
12
|
Chorev M, Haderlein J, Chandra S, Menon G, Burton BJL, Pearce I, McKibbin M, Thottarath S, Karatsai E, Chandak S, Kotagiri A, Talks J, Grabowska A, Ghanchi F, Gale R, Hamilton R, Antony B, Garnavi R, Mareels I, Giani A, Chong V, Sivaprasad S. A Multi-Modal AI-Driven Cohort Selection Tool to Predict Suboptimal Non-Responders to Aflibercept Loading-Phase for Neovascular Age-Related Macular Degeneration: PRECISE Study Report 1. J Clin Med 2023; 12:jcm12083013. [PMID: 37109349 PMCID: PMC10142969 DOI: 10.3390/jcm12083013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Revised: 04/08/2023] [Accepted: 04/17/2023] [Indexed: 04/29/2023] Open
Abstract
Patients diagnosed with exudative neovascular age-related macular degeneration are commonly treated with anti-vascular endothelial growth factor (anti-VEGF) agents. However, response to treatment is heterogeneous, without a clinical explanation. Predicting suboptimal response at baseline will enable more efficient clinical trial designs for novel, future interventions and facilitate individualised therapies. In this multicentre study, we trained a multi-modal artificial intelligence (AI) system to identify suboptimal responders to the loading-phase of the anti-VEGF agent aflibercept from baseline characteristics. We collected clinical features and optical coherence tomography scans from 1720 eyes of 1612 patients between 2019 and 2021. We evaluated our AI system as a patient selection method by emulating hypothetical clinical trials of different sizes based on our test set. Our method detected up to 57.6% more suboptimal responders than random selection, and up to 24.2% more than any alternative selection criteria tested. Applying this method to the entry process of candidates into randomised controlled trials may contribute to the success of such trials and further inform personalised care.
Affiliation(s)
- Michal Chorev
- Centre for Applied Research, IBM Australia, Southbank, VIC 3006, Australia
| | - Jonas Haderlein
- Centre for Applied Research, IBM Australia, Southbank, VIC 3006, Australia
| | - Shruti Chandra
- National Institute of Health Research, Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London EC1V 2PD, UK
| | - Geeta Menon
- Frimley Health NHS Foundation Trust, Surrey GU16 7UJ, UK
| | - Benjamin J L Burton
- Department of Ophthalmology, James Paget University Hospitals NHS Foundation Trust, Norfolk NR31 6LA, UK
| | - Ian Pearce
- Clinical Eye Research Centre, St. Paul's Eye Unit, The Royal Liverpool and Broadgreen University Hospitals NHS Foundation Trust, Liverpool L7 8YE, UK
| | | | - Sridevi Thottarath
- National Institute of Health Research, Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London EC1V 2PD, UK
| | - Eleni Karatsai
- National Institute of Health Research, Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London EC1V 2PD, UK
| | - Swati Chandak
- National Institute of Health Research, Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London EC1V 2PD, UK
| | - Ajay Kotagiri
- South Tyneside and Sunderland NHS Foundation Trust, Sunderland SR4 7TP, UK
| | - James Talks
- Newcastle Hospitals NHS Foundation Trust, Newcastle upon Tyne NE1 4LP, UK
| | - Anna Grabowska
- King's College Hospital NHS Foundation Trust, London SE5 9RS, UK
| | - Faruque Ghanchi
- Bradford Teaching Hospitals NHS Foundation Trust, Bradford BD9 6RJ, UK
| | - Richard Gale
- York Teaching Hospital NHS Foundation Trust, York YO31 8HE, UK
| | - Robin Hamilton
- National Institute of Health Research, Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London EC1V 2PD, UK
| | - Bhavna Antony
- Centre for Applied Research, IBM Australia, Southbank, VIC 3006, Australia
| | - Rahil Garnavi
- Centre for Applied Research, IBM Australia, Southbank, VIC 3006, Australia
| | - Iven Mareels
- Centre for Applied Research, IBM Australia, Southbank, VIC 3006, Australia
| | - Andrea Giani
- Boehringer Ingelheim, 55218 Ingelheim am Rhein, Germany
| | - Victor Chong
- Institute of Ophthalmology, University College London, London NW3 2PF, UK
| | - Sobha Sivaprasad
- National Institute of Health Research, Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London EC1V 2PD, UK
- Institute of Ophthalmology, University College London, London NW3 2PF, UK
| |
|
13
|
Nespolo RG, Yi D, Cole E, Wang D, Warren A, Leiderman YI. Feature Tracking and Segmentation in Real Time via Deep Learning in Vitreoretinal Surgery: A Platform for Artificial Intelligence-Mediated Surgical Guidance. Ophthalmol Retina 2023; 7:236-242. [PMID: 36241132 DOI: 10.1016/j.oret.2022.10.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 09/28/2022] [Accepted: 10/03/2022] [Indexed: 11/15/2022]
Abstract
PURPOSE This study investigated whether a deep-learning neural network can detect and segment surgical instrumentation and relevant tissue boundaries and landmarks within the retina using imaging acquired from a surgical microscope in real time, with the goal of providing image-guided vitreoretinal (VR) microsurgery. DESIGN Retrospective analysis via a prospective, single-center study. PARTICIPANTS One hundred and one patients undergoing VR surgery, inclusive of core vitrectomy, membrane peeling, and endolaser application, in a university-based ophthalmology department between July 1, 2020, and September 1, 2021. METHODS A dataset composed of 606 surgical image frames was annotated by 3 VR surgeons. Annotation consisted of identifying the location and area of the following features, when present in-frame: vitrector-, forceps-, and endolaser tooltips, optic disc, fovea, retinal tears, retinal detachment, fibrovascular proliferation, endolaser spots, area where endolaser was applied, and macular hole. An instance segmentation fully convolutional neural network (YOLACT++) was adapted and trained, and fivefold cross-validation was employed to generate metrics for accuracy. MAIN OUTCOME MEASURES Area under the precision-recall curve (AUPR) for the detection of elements tracked and segmented in the final test dataset; the frames per second (FPS) for the assessment of suitability for real-time performance of the model. RESULTS The platform detected and classified the vitrector tooltip with a mean AUPR of 0.972 ± 0.009. The segmentation of target tissues, such as the optic disc, fovea, and macular hole reached mean AUPR values of 0.928 ± 0.013, 0.844 ± 0.039, and 0.916 ± 0.021, respectively. The postprocessed image was rendered at a full high-definition resolution of 1920 × 1080 pixels at 38.77 ± 1.52 FPS when attached to a surgical visualization system, reaching up to 87.44 ± 3.8 FPS. 
CONCLUSIONS Neural networks can localize, classify, and segment tissues and instruments during VR procedures in real time. We propose a framework for developing a surgical guidance and assessment platform that may guide surgical decision-making and help in formulating tools for systematic analyses of VR surgery. Potential applications include collision avoidance to prevent unintended instrument-tissue interactions and the extraction of the spatial localization and movement of surgical instruments for surgical data science research. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found after the references.
Affiliation(s)
- Rogerio Garcia Nespolo
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois; Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois
| | - Darvin Yi
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois; Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois
| | - Emily Cole
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Daniel Wang
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Alexis Warren
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Yannek I Leiderman
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois; Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois.
| |
|
14
|
Jin K, Ye J. Artificial intelligence and deep learning in ophthalmology: Current status and future perspectives. Advances in Ophthalmology Practice and Research 2022; 2:100078. [PMID: 37846285 PMCID: PMC10577833 DOI: 10.1016/j.aopr.2022.100078] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Revised: 08/01/2022] [Accepted: 08/18/2022] [Indexed: 10/18/2023]
Abstract
Background The ophthalmology field was among the first to adopt artificial intelligence (AI) in medicine. The availability of digitized ocular images and substantial data has made deep learning (DL) a popular topic. Main text At present, AI in ophthalmology is mostly used to improve disease diagnosis and assist decision-making for ophthalmic diseases such as diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD), cataract, and other anterior segment diseases. However, most of the AI systems developed to date are still in the experimental stage, with only a few having achieved clinical application. There are a number of reasons for this, including concerns about security, privacy, poor pervasiveness, trust, and explainability. Conclusions This review summarizes AI applications in ophthalmology, highlighting significant clinical considerations for adopting AI techniques and discussing potential challenges and future directions.
Affiliation(s)
- Kai Jin
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
| | - Juan Ye
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
| |
|
15
|
Jeon M, Park H, Kim HJ, Morley M, Cho H. k-SALSA: k-anonymous synthetic averaging of retinal images via local style alignment. Computer Vision - ECCV 2022: Proceedings of the European Conference on Computer Vision 2022; 13681:661-678. [PMID: 37525827 PMCID: PMC10388376 DOI: 10.1007/978-3-031-19803-8_39] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/02/2023]
Abstract
The application of modern machine learning to retinal image analyses offers valuable insights into a broad range of human health conditions beyond ophthalmic diseases. Additionally, data sharing is key to fully realizing the potential of machine learning models by providing a rich and diverse collection of training data. However, the personally identifying nature of retinal images, encompassing the unique vascular structure of each individual, often prevents this data from being shared openly. While prior works have explored image de-identification strategies based on synthetic averaging of images in other domains (e.g. facial images), existing techniques face difficulty in preserving both privacy and clinical utility in retinal images, as we demonstrate in our work. We therefore introduce k-SALSA, a generative adversarial network (GAN)-based framework for synthesizing retinal fundus images that summarize a given private dataset while satisfying the privacy notion of k-anonymity. k-SALSA brings together state-of-the-art techniques for training and inverting GANs to achieve practical performance on retinal images. Furthermore, k-SALSA leverages a new technique, called local style alignment, to generate a synthetic average that maximizes the retention of fine-grain visual patterns in the source images, thus improving the clinical utility of the generated images. On two benchmark datasets of diabetic retinopathy (EyePACS and APTOS), we demonstrate our improvement upon existing methods with respect to image fidelity, classification performance, and mitigation of membership inference attacks. Our work represents a step toward broader sharing of retinal images for scientific collaboration. Code is available at https://github.com/hcholab/k-salsa.
Affiliation(s)
- Minkyu Jeon
- Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Korea University, Seoul, Republic of Korea
| | | | | | - Michael Morley
- Harvard Medical School, Boston, MA, USA
- Ophthalmic Consultants of Boston, Boston, MA, USA
| | - Hyunghoon Cho
- Broad Institute of MIT and Harvard, Cambridge, MA, USA
| |
|
16
|
Huang X, Wang H, She C, Feng J, Liu X, Hu X, Chen L, Tao Y. Artificial intelligence promotes the diagnosis and screening of diabetic retinopathy. Front Endocrinol (Lausanne) 2022; 13:946915. [PMID: 36246896 PMCID: PMC9559815 DOI: 10.3389/fendo.2022.946915] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Accepted: 09/12/2022] [Indexed: 11/13/2022] Open
Abstract
Deep learning has evolved into a new form of machine learning technology classified under artificial intelligence (AI), with substantial potential for large-scale healthcare screening and for determining the most appropriate treatment for individual patients. Recent developments in diagnostic technologies have facilitated studies of retinal conditions and ocular disease in metabolism and endocrinology. Globally, diabetic retinopathy (DR) is regarded as a major cause of vision loss. Deep learning systems are effective and accurate in detecting DR from digital fundus photographs or optical coherence tomography. Thus, using AI techniques, systems with high accuracy and efficiency can be developed for diagnosing and screening DR at an early stage, without requiring resources accessible only in specialist clinics. Deep learning enables early diagnosis with high specificity and sensitivity, making decisions based on minimally handcrafted features and paving the way for personalized real-time monitoring of DR progression and timely ophthalmic or endocrine therapies. This review discusses cutting-edge AI algorithms, automated systems for DR stage grading and feature segmentation, the prediction of DR outcomes and therapeutics, and the ophthalmic indications of other systemic diseases revealed by AI.
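The screening performance criteria this abstract emphasizes (sensitivity and specificity) are simple functions of a model's confusion counts. A minimal sketch, assuming a binary referable-DR screening task with scores in [0, 1]; `screening_metrics` and its threshold are illustrative names, not from any of the reviewed systems:

```python
import numpy as np

def screening_metrics(y_true, y_score, threshold=0.5):
    """Sensitivity, specificity, and accuracy for a binary screening
    model (e.g. referable DR vs. no DR), given ground-truth labels
    and predicted scores in [0, 1]."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_score) >= threshold
    tp = np.sum(y_pred & y_true)    # diseased eyes flagged for referral
    tn = np.sum(~y_pred & ~y_true)  # healthy eyes correctly cleared
    fp = np.sum(y_pred & ~y_true)   # healthy eyes flagged (over-referral)
    fn = np.sum(~y_pred & y_true)   # diseased eyes missed
    return {
        "sensitivity": tp / (tp + fn),  # recall on diseased eyes
        "specificity": tn / (tn + fp),  # recall on healthy eyes
        "accuracy": (tp + tn) / len(y_true),
    }
```

Sweeping `threshold` trades sensitivity against specificity, which is the curve the area-under-ROC figures reported in such studies summarize.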
Affiliation(s)
- Xuan Huang
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Medical Research Center, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Hui Wang
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Chongyang She
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Jing Feng
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xuhui Liu
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xiaofeng Hu
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Li Chen
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Yong Tao
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- *Correspondence: Yong Tao,
|