1
Verma PK, Kaur J. Systematic Review of Retinal Blood Vessels Segmentation Based on AI-driven Technique. Journal of Imaging Informatics in Medicine 2024; 37:1783-1799. [PMID: 38438695; PMCID: PMC11300804; DOI: 10.1007/s10278-024-01010-3]
Abstract
Image segmentation is a crucial task in computer vision and image processing, with numerous segmentation algorithms found in the literature. It has important applications in scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among others. In light of this, the widespread popularity of deep learning (DL) and machine learning (ML) has inspired the creation of new methods for segmenting images using DL and ML models. We offer a thorough analysis of this recent literature, encompassing the range of ground-breaking initiatives in semantic and instance segmentation, including convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid-based methods, recurrent networks, visual attention models, and generative models in adversarial settings. We study the connections, benefits, and importance of various DL- and ML-based segmentation models; examine the most popular datasets; and evaluate the reported results.
Affiliation(s)
- Prem Kumari Verma
- Department of Computer Science and Engineering, Dr. B.R. Ambedkar National Institute of Technology, Jalandhar, 144008, Punjab, India.
- Jagdeep Kaur
- Department of Computer Science and Engineering, Dr. B.R. Ambedkar National Institute of Technology, Jalandhar, 144008, Punjab, India
2
Faroog Z, Dirar QSE, Zaidi ARZ, Khan MS, Mahamud G, Ambia SR, Al-Hazzaa S. Knowledge and attitude of medical students towards artificial intelligence in ophthalmology in Riyadh, Saudi Arabia: a cross-sectional study. Ann Med Surg (Lond) 2024; 86:4377-4383. [PMID: 39118699; PMCID: PMC11305754; DOI: 10.1097/ms9.0000000000002238]
Abstract
Background The use of artificial intelligence (AI) in ophthalmology represents a transformative leap in healthcare. AI-powered technologies, such as machine learning and computer vision, enhance the accuracy and efficiency of ophthalmic diagnosis and treatment. Objective This study aimed to determine medical students' awareness of and attitudes towards the use of artificial intelligence in ophthalmology. Methods This cross-sectional, questionnaire-based study was conducted between November 2022 and January 2023 using online questionnaires. Data collection was carried out using convenience sampling among medical students at the University. IBM SPSS version 23 was used to analyze the data. Results Most participants (N=309, 89.6%) had heard of the use of AI in medicine, and N=294 (85.2%) had heard of the use of AI in ophthalmology. 98.6% (n=340) of respondents believed AI would be a helpful tool in ophthalmology. Along this line of questioning, a significant majority of respondents selected screening (332, 96.2%), diagnosis (332, 96.2%), and prevention (293, 84.9%) as uses of AI in ophthalmology. However, the majority (76.5%) of students had little understanding of the development of AI in ophthalmology. In addition, a significant relationship between sex, academic year, cumulative GPA (cGPA), and awareness of AI in ophthalmology (P<0.001) was found in this study. Conclusions Overall, medical students in Saudi Arabia appear to have favorable thoughts about AI and positive perceptions of AI in ophthalmology. However, the findings of this study emphasize the limited understanding and low confidence levels of medical students in Saudi Arabia regarding the use of AI in ophthalmology. As a result, early exposure to AI-related materials in medical curricula is crucial for addressing these challenges through comprehensive AI education and practical exposure to prepare future ophthalmologists.
Affiliation(s)
- Abdul Rehman Zia Zaidi
- Department of Family & Community Medicine, College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Golam Mahamud
- College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Selwa Al-Hazzaa
- King Abdulaziz City for Science & Technology (KACST), Riyadh, Saudi Arabia
3
Yew SME, Chen Y, Goh JHL, Chen DZ, Chun Jin Tan M, Cheng CY, Teck Chang Koh V, Tham YC. Ocular image-based deep learning for predicting refractive error: A systematic review. Advances in Ophthalmology Practice and Research 2024; 4:164-172. [PMID: 39114269; PMCID: PMC11305245; DOI: 10.1016/j.aopr.2024.06.005]
Abstract
Background Uncorrected refractive error is a major cause of vision impairment worldwide, and its increasing prevalence necessitates effective screening and management strategies. Meanwhile, deep learning, a subset of artificial intelligence, has significantly advanced ophthalmological diagnostics by automating tasks that previously required extensive clinical expertise. Although recent studies have investigated the use of deep learning models for refractive power detection through various imaging techniques, a comprehensive systematic review on this topic has yet to be done. This review aims to summarise and evaluate the performance of ocular image-based deep learning models in predicting refractive errors. Main text We searched three databases (PubMed, Scopus, Web of Science) up to June 2023, focusing on deep learning applications in detecting refractive error from ocular images. We included studies that reported refractive error outcomes, regardless of publication year. We systematically extracted and evaluated the continuous outcomes (sphere, SE, cylinder) and categorical outcomes (myopia), ground truth measurements, ocular imaging modalities, deep learning models, and performance metrics, adhering to PRISMA guidelines. Nine studies were identified and categorised into three groups: retinal photo-based (n = 5), OCT-based (n = 1), and external ocular photo-based (n = 3). For high myopia prediction, retinal photo-based models achieved AUC between 0.91 and 0.98, sensitivity between 85.10% and 97.80%, and specificity between 76.40% and 94.50%. For continuous prediction, retinal photo-based models reported MAE ranging from 0.31D to 2.19D, and R² between 0.05 and 0.96. The OCT-based model achieved an AUC of 0.79-0.81, sensitivity of 82.30%-87.20%, and specificity of 61.70%-68.90%. For external ocular photo-based models, the AUC ranged from 0.91 to 0.99, sensitivity from 81.13% to 84.00%, specificity from 74.00% to 86.42%, MAE from 0.07D to 0.18D, and accuracy from 81.60% to 96.70%. The reported papers collectively showed promising performance, in particular the retinal photo-based and external eye photo-based DL models. Conclusions The integration of deep learning models and ocular imaging for refractive error detection appears promising. However, their real-world clinical utility in current screening workflows has yet to be evaluated and would require thoughtful consideration in design and implementation.
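The continuous-outcome metrics quoted in this review, MAE (in dioptres) and R², are simple to reproduce; a minimal plain-Python sketch using made-up spherical-equivalent values, not data from any of the included studies:

```python
# Sketch: mean absolute error (MAE, dioptres) and coefficient of
# determination (R^2) for hypothetical refractive-error predictions.
# All values below are illustrative, not drawn from any study.

def mae(y_true, y_pred):
    """Mean absolute error, here in dioptres (D)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical ground-truth vs. model-predicted spherical equivalents (D)
truth = [-6.5, -3.0, -1.25, 0.5, 2.0]
pred = [-6.1, -3.4, -1.00, 0.3, 1.8]

print(f"MAE = {mae(truth, pred):.2f} D")
print(f"R^2 = {r_squared(truth, pred):.3f}")
```

A low MAE with a low R² (as in some retinal photo-based studies above) can occur when predictions are close in absolute terms but the true values span a narrow range.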
Affiliation(s)
- Samantha Min Er Yew
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Yibing Chen
- School of Chemistry, Chemical Engineering, and Biotechnology, Nanyang Technological University, Singapore
- David Ziyou Chen
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Ophthalmology, National University Hospital, Singapore
- Marcus Chun Jin Tan
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Ophthalmology, National University Hospital, Singapore
- Ching-Yu Cheng
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ophthalmology and Visual Sciences (Eye ACP), Duke-NUS Medical School, Singapore
- Victor Teck Chang Koh
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Ophthalmology, National University Hospital, Singapore
- Yih-Chung Tham
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ophthalmology and Visual Sciences (Eye ACP), Duke-NUS Medical School, Singapore
4
Wang Y, Yang Z, Guo X, Jin W, Lin D, Chen A, Zhou M. Automated early detection of acute retinal necrosis from ultra-widefield color fundus photography using deep learning. Eye and Vision (London) 2024; 11:27. [PMID: 39085922; PMCID: PMC11293155; DOI: 10.1186/s40662-024-00396-z]
Abstract
BACKGROUND Acute retinal necrosis (ARN) is a relatively rare but highly damaging and potentially sight-threatening type of uveitis caused by infection with the human herpesvirus. Without timely diagnosis and appropriate treatment, ARN can lead to severe vision loss. We aimed to develop a deep learning framework to distinguish ARN from other types of intermediate, posterior, and panuveitis using ultra-widefield color fundus photography (UWFCFP). METHODS We conducted a two-center retrospective discovery and validation study to develop and validate a deep learning model called DeepDrARN for automatic uveitis detection and differentiation of ARN from other uveitis types using 11,508 UWFCFPs from 1,112 participants. Model performance was evaluated with the area under the receiver operating characteristic curve (AUROC), the area under the precision and recall curves (AUPR), sensitivity and specificity, and compared with seven ophthalmologists. RESULTS DeepDrARN for uveitis screening achieved an AUROC of 0.996 (95% CI: 0.994-0.999) in the internal validation cohort and demonstrated good generalizability with an AUROC of 0.973 (95% CI: 0.956-0.990) in the external validation cohort. DeepDrARN also demonstrated excellent predictive ability in distinguishing ARN from other types of uveitis with AUROCs of 0.960 (95% CI: 0.943-0.977) and 0.971 (95% CI: 0.956-0.986) in the internal and external validation cohorts. DeepDrARN was also tested in the differentiation of ARN, non-ARN uveitis (NAU) and normal subjects, with sensitivities of 88.9% and 78.7% and specificities of 93.8% and 89.1% in the internal and external validation cohorts, respectively. The performance of DeepDrARN is comparable to that of ophthalmologists and even exceeds the average accuracy of seven ophthalmologists, showing an improvement of 6.57% in uveitis screening and 11.14% in ARN identification. 
CONCLUSIONS Our study demonstrates the feasibility of deep learning algorithms in enabling early detection, reducing treatment delays, and improving outcomes for ARN patients.
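The AUROC figures reported for DeepDrARN can be computed from raw scores without any library via the rank interpretation of the metric: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A sketch with invented labels and scores (not the study's data):

```python
# Sketch: AUROC from scratch via the Mann-Whitney U statistic.
# Ties between a positive and a negative score count as half a win.

def auroc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0]                 # 1 = ARN, 0 = other uveitis (hypothetical)
scores = [0.9, 0.8, 0.6, 0.7, 0.3, 0.2, 0.1]   # hypothetical model probabilities
print(f"AUROC = {auroc(labels, scores):.3f}")
```

Unlike accuracy, this statistic is threshold-free, which is why it is the usual headline metric for screening models such as the one described above.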
Affiliation(s)
- Yuqin Wang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Zijian Yang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Xingneng Guo
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Wang Jin
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Dan Lin
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Anying Chen
- The Affiliated Ningbo Eye Hospital of Wenzhou Medical University, Ningbo, 315042, China
- Meng Zhou
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
5
Benetz BAM, Shivade VS, Joseph NM, Romig NJ, McCormick JC, Chen J, Titus MS, Sawant OB, Clover JM, Yoganathan N, Menegay HJ, O'Brien RC, Wilson DL, Lass JH. Automatic Determination of Endothelial Cell Density From Donor Cornea Endothelial Cell Images. Transl Vis Sci Technol 2024; 13:40. [PMID: 39177992; DOI: 10.1167/tvst.13.8.40]
Abstract
Purpose To determine endothelial cell density (ECD) from real-world donor cornea endothelial cell (EC) images using a self-supervised deep learning segmentation model. Methods Two eye banks (Eversight, VisionGift) provided 15,138 single, unique EC images from 8169 donors along with their demographics, tissue characteristics, and ECD. This dataset was utilized for self-supervised training and deep learning inference. The Cornea Image Analysis Reading Center (CIARC) provided a second dataset of 174 donor EC images based on image and tissue quality. These images were used to train a supervised deep learning cell border segmentation model. Evaluation between manual and automated determination of ECD was restricted to the 1939 test EC images with at least 100 cells counted by both methods. Results The ECD measurements from both methods were in excellent agreement with rc of 0.77 (95% confidence interval [CI], 0.75-0.79; P < 0.001) and bias of 123 cells/mm2 (95% CI, 114-131; P < 0.001); 81% of the automated ECD values were within 10% of the manual ECD values. When the analysis was further restricted to the cropped image, the rc was 0.88 (95% CI, 0.87-0.89; P < 0.001), bias was 46 cells/mm2 (95% CI, 39-53; P < 0.001), and 93% of the automated ECD values were within 10% of the manual ECD values. Conclusions Deep learning analysis provides accurate ECDs of donor images, potentially reducing analysis time and training requirements. Translational Relevance The approach of this study, a robust methodology for automatically evaluating donor cornea EC images, could expand the quantitative determination of endothelial health beyond ECD.
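The rc agreement statistic quoted above is, on the usual reading of method-comparison studies like this one, Lin's concordance correlation coefficient; that interpretation, and the numbers below, are assumptions for illustration, not taken from the paper:

```python
# Sketch: Lin's concordance correlation coefficient, a common agreement
# statistic for comparing two measurement methods. ECD values below are
# hypothetical manual vs. automated counts (cells/mm^2).

def concordance_cc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # Penalises both scatter (like Pearson's r) and systematic bias
    # (the squared mean difference in the denominator).
    return 2 * cov / (vx + vy + (mx - my) ** 2)

manual = [2500, 2700, 2400, 3000, 2600]      # hypothetical manual ECD
automated = [2550, 2650, 2450, 2950, 2700]   # hypothetical automated ECD
print(f"rc = {concordance_cc(manual, automated):.3f}")
```

Unlike Pearson correlation, this coefficient drops when one method is systematically biased relative to the other, which is why it pairs naturally with the bias estimate (123 cells/mm²) reported in the abstract.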
Affiliation(s)
- Beth Ann M Benetz
- Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA
- Cornea Image Analysis Reading Center, University Hospitals Eye Institute, Cleveland, OH, USA
- Ved S Shivade
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Naomi M Joseph
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Nathan J Romig
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- John C McCormick
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Jiawei Chen
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Onkar B Sawant
- Eversight, Ann Arbor, MI, USA
- Center for Vision and Eye Banking Research, Eversight, Cleveland, OH, USA
- Harry J Menegay
- Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA
- Cornea Image Analysis Reading Center, University Hospitals Eye Institute, Cleveland, OH, USA
- Robert C O'Brien
- Bascom Palmer Eye Institute, University of Miami, Miami, FL, USA
- David L Wilson
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Jonathan H Lass
- Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA
- Cornea Image Analysis Reading Center, University Hospitals Eye Institute, Cleveland, OH, USA
6
Mathieu A, Ajana S, Korobelnik JF, Le Goff M, Gontier B, Rougier MB, Delcourt C, Delyfer MN. DeepAlienorNet: A deep learning model to extract clinical features from colour fundus photography in age-related macular degeneration. Acta Ophthalmol 2024; 102:e823-e830. [PMID: 38345159; DOI: 10.1111/aos.16660]
Abstract
OBJECTIVE This study aimed to develop a deep learning (DL) model, named 'DeepAlienorNet', to automatically extract clinical signs of age-related macular degeneration (AMD) from colour fundus photography (CFP). METHODS AND ANALYSIS The ALIENOR Study is a cohort of French individuals 77 years of age or older. A multi-label DL model was developed to grade the presence of 7 clinical signs: large soft drusen (>125 μm), intermediate soft drusen (63-125 μm), large area of soft drusen (total area >500 μm), presence of central soft drusen (large or intermediate), hyperpigmentation, hypopigmentation, and advanced AMD (defined as neovascular or atrophic AMD). Prediction performances were evaluated using cross-validation, with expert human interpretation of the clinical signs as the ground truth. RESULTS A total of 1178 images were included in the study. Averaging the 7 clinical signs' detection performances, DeepAlienorNet achieved an overall sensitivity, specificity, and AUROC of 0.77, 0.83, and 0.87, respectively. The model demonstrated particularly strong performance in predicting advanced AMD and large areas of soft drusen. It can also generate heatmaps, highlighting the relevant image areas for interpretation. CONCLUSION DeepAlienorNet demonstrates promising performance in automatically identifying clinical signs of AMD from CFP, offering several notable advantages. Its high interpretability reduces the black box effect, addressing ethical concerns. Additionally, the model can be easily integrated to automate well-established and validated AMD progression scores, and the user-friendly interface further enhances its usability. The main value of DeepAlienorNet lies in its ability to assist in precise severity scoring for further adapted AMD management, all while preserving interpretability.
Affiliation(s)
- Alexis Mathieu
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- Soufiane Ajana
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Jean-François Korobelnik
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- Mélanie Le Goff
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Brigitte Gontier
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- Cécile Delcourt
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- Marie-Noëlle Delyfer
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- FRCRnet/FCRIN Network, Bordeaux, France
7
Tao BKL, Hua N, Milkovich J, Micieli JA. ChatGPT-3.5 and Bing Chat in ophthalmology: an updated evaluation of performance, readability, and informative sources. Eye (Lond) 2024; 38:1897-1902. [PMID: 38509182; PMCID: PMC11226422; DOI: 10.1038/s41433-024-03037-w]
Abstract
BACKGROUND/OBJECTIVES Experimental investigation. Bing Chat's (Microsoft) integration with ChatGPT-4 (OpenAI) has conferred the capability of accessing online data past 2021. We investigate its performance against ChatGPT-3.5 on a multiple-choice question ophthalmology exam. SUBJECTS/METHODS In August 2023, ChatGPT-3.5 and Bing Chat were evaluated against 913 questions derived from the Academy's Basic and Clinical Science Course (BCSC) collection. For each response, the sub-topic, performance, Simple Measure of Gobbledygook readability score (measuring years of required education to understand a given passage), and cited resources were collected. The primary outcomes were the comparative scores between models and, qualitatively, the resources referenced by Bing Chat. Secondary outcomes included performance stratified by response readability, question type (explicit or situational), and BCSC sub-topic. RESULTS Across 913 questions, ChatGPT-3.5 scored 59.69% [95% CI 56.45, 62.94] while Bing Chat scored 73.60% [95% CI 70.69, 76.52]. Both models performed significantly better on explicit than on clinical reasoning questions, and both performed better on general medicine questions than on ophthalmology subsections. Bing Chat referenced 927 online entities and provided at least one citation for 836 of the 913 questions. The use of more reliable (peer-reviewed) sources was associated with a higher likelihood of a correct response. The most-cited resources were eyewiki.aao.org, aao.org, wikipedia.org, and ncbi.nlm.nih.gov. Bing Chat showed significantly better readability than ChatGPT-3.5, averaging a reading level of grade 11.4 [95% CI 7.14, 15.7] versus 12.4 [95% CI 8.77, 16.1], respectively (p-value < 0.0001, ρ = 0.25). CONCLUSIONS The online access, improved readability, and citation feature of Bing Chat confer additional utility for ophthalmology learners. We recommend critical appraisal of cited sources during response interpretation.
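The Simple Measure of Gobbledygook (SMOG) score used in this study follows a published formula, grade = 1.0430 · sqrt(polysyllables × 30 / sentences) + 3.1291. The sketch below implements that formula with a crude vowel-group syllable counter; the counter and the sample text are assumptions for illustration, not the authors' tooling:

```python
# Sketch: SMOG readability grade (years of education needed to
# understand a passage), as reported for the chatbot responses.
import math
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels (y included).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    # Polysyllabic words have three or more syllables.
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291

sample = ("Ophthalmology examinations evaluate anatomical knowledge. "
          "Artificial intelligence summarizes complicated explanations. "
          "Readability matters.")
print(f"SMOG grade = {smog_grade(sample):.1f}")
```

Production readability tools use dictionary-based syllable counts, so results from this heuristic will differ slightly from those of the study's instrument.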
Affiliation(s)
- Brendan Ka-Lok Tao
- Faculty of Medicine, The University of British Columbia, 317-2194 Health Sciences Mall, Vancouver, BC, V6T 1Z3, Canada
- Nicholas Hua
- Temerty Faculty of Medicine, University of Toronto, 1 King's College Circle, Toronto, ON, M5S 1A8, Canada
- John Milkovich
- Temerty Faculty of Medicine, University of Toronto, 1 King's College Circle, Toronto, ON, M5S 1A8, Canada
- Jonathan Andrew Micieli
- Temerty Faculty of Medicine, University of Toronto, 1 King's College Circle, Toronto, ON, M5S 1A8, Canada.
- Department of Ophthalmology and Vision Sciences, University of Toronto, 340 College Street, Toronto, ON, M5T 3A9, Canada.
- Division of Neurology, Department of Medicine, University of Toronto, 6 Queen's Park Crescent West, Toronto, ON, M5S 3H2, Canada.
- Kensington Vision and Research Center, 340 College Street, Toronto, ON, M5T 3A9, Canada.
- St. Michael's Hospital, 36 Queen Street East, Toronto, ON, M5B 1W8, Canada.
- Toronto Western Hospital, 399 Bathurst Street, Toronto, ON, M5T 2S8, Canada.
- University Health Network, 190 Elizabeth Street, Toronto, ON, M5G 2C4, Canada.
8
Kang D, Wu H, Yuan L, Shi Y, Jin K, Grzybowski A. A Beginner's Guide to Artificial Intelligence for Ophthalmologists. Ophthalmol Ther 2024; 13:1841-1855. [PMID: 38734807; PMCID: PMC11178755; DOI: 10.1007/s40123-024-00958-3]
Abstract
The integration of artificial intelligence (AI) in ophthalmology has promoted the development of the discipline, offering opportunities for enhancing diagnostic accuracy, patient care, and treatment outcomes. This paper aims to provide a foundational understanding of AI applications in ophthalmology, with a focus on interpreting studies related to AI-driven diagnostics. The core of our discussion is to explore various AI methods, including deep learning (DL) frameworks for detecting and quantifying ophthalmic features in imaging data, as well as using transfer learning for effective model training in limited datasets. The paper highlights the importance of high-quality, diverse datasets for training AI models and the need for transparent reporting of methodologies to ensure reproducibility and reliability in AI studies. Furthermore, we address the clinical implications of AI diagnostics, emphasizing the balance between minimizing false negatives to avoid missed diagnoses and reducing false positives to prevent unnecessary interventions. The paper also discusses the ethical considerations and potential biases in AI models, underscoring the importance of continuous monitoring and improvement of AI systems in clinical settings. In conclusion, this paper serves as a primer for ophthalmologists seeking to understand the basics of AI in their field, guiding them through the critical aspects of interpreting AI studies and the practical considerations for integrating AI into clinical practice.
Affiliation(s)
- Daohuan Kang
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Hongkang Wu
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Lu Yuan
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Yu Shi
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Zhejiang University School of Medicine, Hangzhou, China
- Kai Jin
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
9
Ahn J, Choi M. Advancements and turning point of artificial intelligence in ophthalmology: A comprehensive analysis of research trends and collaborative networks. Ophthalmic Physiol Opt 2024; 44:1031-1040. [PMID: 38581209; DOI: 10.1111/opo.13315]
Abstract
Artificial intelligence (AI) has emerged as a transformative force with great potential in various fields, including healthcare. In recent years, AI has garnered significant attention due to its potential to revolutionise ophthalmology, leading to advancements in patient care such as disease detection, diagnosis, treatment and monitoring of disease progression. This study presents a comprehensive analysis of the research trends and collaborative networks at the intersection of AI and ophthalmology. In this study, we conducted an extensive search of the Web of Science Core Collection to identify articles related to 'artificial intelligence' in ophthalmology published from 1968 to 2023. We performed co-occurrence keywords and co-authorship network analyses using VOSviewer software to explore the relationships between keywords and country collaboration. We found a remarkable surge in articles applying AI in ophthalmology after 2017, marking a turning point in the integration of AI within the medical field. The primary application of AI shifted towards the diagnosis of ocular disease, which was particularly evident through keywords such as glaucoma, diabetic retinopathy and age-related macular degeneration. Analysis of the collaboration networks of countries revealed a global expansion of ophthalmology-related AI research. This study provides valuable insights into the evolving landscape of AI integration in ophthalmology, indicating its growing potential for enhancing disease detection, diagnosis, treatment planning and monitoring of disease progression. In order to translate AI technologies into clinical practice effectively, it is imperative to comprehend the evolving research trends and advancements at the intersection of AI and ophthalmology.
Affiliation(s)
- Jihye Ahn
- Department of Optometry, College of Energy and Biotechnology, Seoul National University of Science and Technology, Seoul, Republic of Korea
- Moonsung Choi
- Department of Optometry, College of Energy and Biotechnology, Seoul National University of Science and Technology, Seoul, Republic of Korea
- Convergence Institute of Biomedical Engineering and Biomaterials, Seoul National University of Science and Technology, Seoul, Republic of Korea
10
Poh SSJ, Sia JT, Yip MYT, Tsai ASH, Lee SY, Tan GSW, Weng CY, Kadonosono K, Kim M, Yonekawa Y, Ho AC, Toth CA, Ting DSW. Artificial Intelligence, Digital Imaging, and Robotics Technologies for Surgical Vitreoretinal Diseases. Ophthalmol Retina 2024; 8:633-645. [PMID: 38280425; DOI: 10.1016/j.oret.2024.01.018]
Abstract
OBJECTIVE To review recent technological advancements in imaging, surgical visualization, robotics technology, and the use of artificial intelligence in surgical vitreoretinal (VR) diseases. BACKGROUND Technological advancements in imaging enhance both preoperative and intraoperative management of surgical VR diseases. Widefield imaging in fundal photography and OCT can improve assessment of peripheral retinal disorders such as retinal detachments, degeneration, and tumors. OCT angiography provides rapid and noninvasive imaging of the retinal and choroidal vasculature. Surgical visualization has also improved, with intraoperative OCT providing a detailed real-time assessment of retinal layers to guide surgical decisions. Heads-up displays and head-mounted displays utilize 3-dimensional technology to provide surgeons with enhanced visual guidance and improved ergonomics during surgery. Intraocular robotics technology allows for greater surgical precision and has been shown to be useful in retinal vein cannulation and subretinal drug delivery. In addition, deep learning techniques leverage diverse data, including widefield retinal photography and OCT, for better predictive accuracy in classification, segmentation, and prognostication of many surgical VR diseases. CONCLUSION This review article summarizes the latest updates in these areas and highlights the importance of continuous innovation and improvement in technology within the field. These advancements have the potential to reshape management of surgical VR diseases in the very near future and to ultimately improve patient care. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Collapse
Affiliation(s)
- Stanley S J Poh
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Josh T Sia
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
| | - Michelle Y T Yip
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
| | - Andrew S H Tsai
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Shu Yen Lee
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Gavin S W Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Christina Y Weng
- Department of Ophthalmology, Baylor College of Medicine, Houston, Texas
| | | | - Min Kim
- Department of Ophthalmology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Yoshihiro Yonekawa
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
| | - Allen C Ho
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
| | - Cynthia A Toth
- Departments of Ophthalmology and Biomedical Engineering, Duke University, Durham, North Carolina
| | - Daniel S W Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore; Byers Eye Institute, Stanford University, Palo Alto, California.
| |
Collapse
|
11
|
Sorrentino FS, Gardini L, Fontana L, Musa M, Gabai A, Maniaci A, Lavalle S, D’Esposito F, Russo A, Longo A, Surico PL, Gagliano C, Zeppieri M. Novel Approaches for Early Detection of Retinal Diseases Using Artificial Intelligence. J Pers Med 2024; 14:690. [PMID: 39063944 PMCID: PMC11278069 DOI: 10.3390/jpm14070690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2024] [Revised: 06/24/2024] [Accepted: 06/25/2024] [Indexed: 07/28/2024] Open
Abstract
BACKGROUND An increasing number of people worldwide are affected by retinal diseases related to diabetes, vascular occlusions, maculopathy, alterations of systemic circulation, and metabolic syndrome. AIM This review discusses novel technologies and potential approaches for the detection and diagnosis of retinal diseases with the support of cutting-edge devices and artificial intelligence (AI). METHODS The demand for retinal diagnostic imaging has increased, but the number of eye physicians and technicians is too small to meet it. AI-based algorithms therefore provide valid support for early detection, helping doctors reach diagnoses and make differential diagnoses. AI also allows patients living far from hub centers to obtain testing and a quick initial diagnosis, sparing them travel and long waits for a medical reply. RESULTS Highly automated systems for screening, early diagnosis, grading, and tailored therapy will facilitate care, even in remote regions. CONCLUSION Massive, extensive use of AI might optimize the automated detection of subtle retinal alterations, allowing eye doctors to provide their best clinical care and to choose the best treatment options for retinal diseases.
Collapse
Affiliation(s)
| | - Lorenzo Gardini
- Unit of Ophthalmology, Department of Surgical Sciences, Ospedale Maggiore, 40100 Bologna, Italy; (F.S.S.)
| | - Luigi Fontana
- Ophthalmology Unit, Department of Surgical Sciences, Alma Mater Studiorum University of Bologna, IRCCS Azienda Ospedaliero-Universitaria Bologna, 40100 Bologna, Italy
| | - Mutali Musa
- Department of Optometry, University of Benin, Benin City 300238, Edo State, Nigeria
| | - Andrea Gabai
- Department of Ophthalmology, Humanitas-San Pio X, 20159 Milan, Italy
| | - Antonino Maniaci
- Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
| | - Salvatore Lavalle
- Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
| | - Fabiana D’Esposito
- Imperial College Ophthalmic Research Group (ICORG) Unit, Imperial College, 153-173 Marylebone Rd, London NW15QH, UK
- Department of Neurosciences, Reproductive Sciences and Dentistry, University of Naples Federico II, Via Pansini 5, 80131 Napoli, Italy
| | - Andrea Russo
- Department of Ophthalmology, University of Catania, 95123 Catania, Italy
| | - Antonio Longo
- Department of Ophthalmology, University of Catania, 95123 Catania, Italy
| | - Pier Luigi Surico
- Schepens Eye Research Institute of Mass Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
| | - Caterina Gagliano
- Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Eye Clinic, Catania University, San Marco Hospital, Viale Carlo Azeglio Ciampi, 95121 Catania, Italy
| | - Marco Zeppieri
- Department of Ophthalmology, University Hospital of Udine, 33100 Udine, Italy
| |
Collapse
|
12
|
Qiu C, Su K, Luo Z, Tian Q, Zhao L, Wu L, Deng H, Shen H. Developing and comparing deep learning and machine learning algorithms for osteoporosis risk prediction. Front Artif Intell 2024; 7:1355287. [PMID: 38919268 PMCID: PMC11196804 DOI: 10.3389/frai.2024.1355287] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2023] [Accepted: 05/31/2024] [Indexed: 06/27/2024] Open
Abstract
Introduction Osteoporosis, characterized by low bone mineral density (BMD), is an increasingly serious public health issue. Several traditional regression models and machine learning (ML) algorithms have been proposed for predicting osteoporosis risk, but these models have shown relatively low accuracy in clinical implementation. Recently proposed deep learning (DL) approaches, such as the deep neural network (DNN), can discover knowledge from complex hidden interactions and offer a new opportunity to improve predictive performance. In this study, we aimed to assess whether a DNN can achieve better performance in osteoporosis risk prediction. Methods Using hip BMD and extensive demographic and routine clinical data from 8,134 subjects aged over 40 in the Louisiana Osteoporosis Study (LOS), we developed a novel DNN framework for predicting osteoporosis risk and compared its performance with four conventional ML models, namely random forest (RF), artificial neural network (ANN), k-nearest neighbor (KNN), and support vector machine (SVM), as well as a traditional regression model, the osteoporosis self-assessment tool (OST). Model performance was assessed by the area under the receiver operating characteristic curve (AUC) and accuracy. Results Using 16 discriminative variables, the DNN approach achieved the best predictive performance (AUC = 0.848) among the tested approaches in classifying osteoporosis (hip BMD T-score ≤ -1.0) and non-osteoporosis (hip BMD T-score > -1.0) subjects. Feature importance analysis showed that the top 10 most important variables identified by the DNN model were weight, age, gender, grip strength, height, beer drinking, diastolic pressure, alcohol drinking, smoking years, and economic level. Furthermore, we performed a subsampling analysis to assess the effects of varying sample sizes and numbers of variables on the predictive performance of the tested models. Notably, the DNN model performed equally well (AUC = 0.846) using only the top 10 most important variables, and it still achieved high predictive performance (AUC = 0.826) when the sample size was reduced to 50% of the original dataset. Conclusion We developed a novel DNN model that can serve as an effective algorithm for early diagnosis and intervention of osteoporosis in the aging population.
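As a concrete illustration of the evaluation metrics used in the study above, the AUC can be computed via the pairwise (Mann-Whitney) formulation, alongside a simple accuracy calculation. This is a minimal sketch in plain Python; the labels and scores below are hypothetical, not the study's data.

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney pairwise formulation:
    the fraction of (positive, negative) pairs ranked correctly; ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(labels, scores, threshold=0.5):
    """Fraction of thresholded predictions that match the labels."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

labels = [1, 1, 0, 0, 0]            # 1 = osteoporosis (hip BMD T-score <= -1.0)
scores = [0.9, 0.6, 0.3, 0.7, 0.4]  # hypothetical model outputs
print(auc_score(labels, scores))    # 5 of 6 pairs ranked correctly -> 0.8333...
print(accuracy(labels, scores))     # preds [1,1,0,1,0] -> 4/5 = 0.8
```

The same rank-based AUC underlies library implementations such as scikit-learn's `roc_auc_score`; the pure-Python version here just makes the pairwise definition explicit.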
Collapse
Affiliation(s)
| | | | | | | | | | | | - Hongwen Deng
- Tulane Center for Biomedical Informatics and Genomics, Deming Department of Medicine, School of Medicine, Tulane University, New Orleans, LA, United States
| | - Hui Shen
- Tulane Center for Biomedical Informatics and Genomics, Deming Department of Medicine, School of Medicine, Tulane University, New Orleans, LA, United States
| |
Collapse
|
13
|
Lim ZW, Li J, Wong D, Chung J, Toh A, Lee JL, Lam C, Balakrishnan M, Chia A, Chua J, Girard M, Hoang QV, Chong R, Wong CW, Saw SM, Schmetterer L, Brennan N, Ang M. Comparison of manual and artificial intelligence-automated choroidal thickness segmentation of optical coherence tomography imaging in myopic adults. EYE AND VISION (LONDON, ENGLAND) 2024; 11:21. [PMID: 38831465 PMCID: PMC11145894 DOI: 10.1186/s40662-024-00385-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/18/2023] [Accepted: 04/17/2024] [Indexed: 06/05/2024]
Abstract
BACKGROUND Myopia affects 1.4 billion individuals worldwide. Notably, there is increasing evidence that choroidal thickness plays an important role in myopia and the risk of developing myopia-related conditions. With advancements in artificial intelligence (AI), choroidal thickness segmentation can now be automated, offering inherent advantages such as better repeatability, reduced grader variability, and less reliance on manpower. Hence, we aimed to evaluate the agreement between AI-automated and manually segmented measurements of subfoveal choroidal thickness (SFCT) using two swept-source optical coherence tomography (OCT) systems. METHODS Subjects aged ≥ 16 years, with myopia of ≥ 0.50 diopters in both eyes, were recruited from the Prospective Myopia Cohort Study in Singapore (PROMYSE). OCT scans were acquired using the Triton DRI-OCT and PLEX Elite 9000. OCT images were segmented both automatically, with an established SA-Net architecture, and manually, using a standard technique with adjudication by two independent graders. SFCT was subsequently determined from the segmentation. Bland-Altman plots and the intraclass correlation coefficient (ICC) were used to evaluate agreement. RESULTS A total of 229 subjects (456 eyes) with a mean ± standard deviation (SD) age of 34.1 ± 10.4 years were included. The overall SFCT (mean ± SD) based on manual segmentation was 216.9 ± 82.7 µm with the Triton DRI-OCT and 239.3 ± 84.3 µm with the PLEX Elite 9000. ICC values demonstrated excellent agreement between AI-automated and manually segmented SFCT measurements (PLEX Elite 9000: ICC = 0.937, 95% CI: 0.922 to 0.949, P < 0.001; Triton DRI-OCT: ICC = 0.887, 95% CI: 0.608 to 0.950, P < 0.001). For the PLEX Elite 9000, manually segmented measurements were generally thicker than AI-automated ones, with a fixed bias of 6.3 µm (95% CI: 3.8 to 8.9, P < 0.001) and a proportional bias of 0.120 (P < 0.001). Conversely, manually segmented measurements were thinner than AI-automated ones for the Triton DRI-OCT, with a fixed bias of -26.7 µm (95% CI: -29.7 to -23.7, P < 0.001) and a proportional bias of -0.090 (P < 0.001). CONCLUSION We observed excellent agreement between manual and AI-automated choroidal segmentation measurements using images from two SS-OCT systems. Given its edge over manual segmentation, automated segmentation may emerge as the primary method of choroidal thickness measurement in the future.
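The fixed bias reported in the study above is the mean difference between the two measurement methods in a Bland-Altman analysis, with 95% limits of agreement at bias ± 1.96 SD of the differences. A minimal sketch, using hypothetical SFCT values rather than the study's raw measurements:

```python
import statistics

def bland_altman(manual, automated):
    """Fixed bias (mean difference) and 95% limits of agreement
    between two measurement methods, as in a Bland-Altman analysis."""
    diffs = [m - a for m, a in zip(manual, automated)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired SFCT measurements in micrometers (not the study's data)
manual    = [220.0, 215.0, 240.0, 230.0, 210.0]
automated = [214.0, 210.0, 232.0, 225.0, 203.0]
bias, loa = bland_altman(manual, automated)
print(round(bias, 1))  # 6.2 -> manual reads thicker on average
```

A positive bias, as here, corresponds to the PLEX Elite 9000 finding in the abstract (manual thicker than automated); a negative bias corresponds to the Triton DRI-OCT finding.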
Collapse
Affiliation(s)
- Zhi Wei Lim
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Jonathan Li
- Department of Ophthalmology, University of California, San Francisco, CA, USA
| | - Damon Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Department, Duke-NUS Medical School, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE), Singapore Eye Research Institute and Nanyang Technological University, Singapore, Singapore
- Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria
| | - Joey Chung
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Angeline Toh
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Jia Ling Lee
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Crystal Lam
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Maithily Balakrishnan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Audrey Chia
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
| | - Jacqueline Chua
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Department, Duke-NUS Medical School, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE), Singapore Eye Research Institute and Nanyang Technological University, Singapore, Singapore
| | - Michael Girard
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
| | - Quan V Hoang
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA
| | - Rachel Chong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
| | - Chee Wai Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Seang Mei Saw
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore, Singapore
| | - Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE), Singapore Eye Research Institute and Nanyang Technological University, Singapore, Singapore
- Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore, Singapore
- Department of Clinical Pharmacology, Medical University Vienna, Vienna, Austria
| | | | - Marcus Ang
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore.
- Ophthalmology and Visual Sciences Department, Duke-NUS Medical School, Singapore, Singapore.
| |
Collapse
|
14
|
Roubelat FP, Soler V, Varenne F, Gualino V. Real-world artificial intelligence-based interpretation of fundus imaging as part of an eyewear prescription renewal protocol. J Fr Ophtalmol 2024; 47:104130. [PMID: 38461084 DOI: 10.1016/j.jfo.2024.104130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2023] [Revised: 11/17/2023] [Accepted: 11/23/2023] [Indexed: 03/11/2024]
Abstract
OBJECTIVE A real-world evaluation of the diagnostic accuracy of the Opthai® software for artificial intelligence-based detection of fundus image abnormalities in the context of the French eyewear prescription renewal protocol (RNO). METHODS A single-center, retrospective review of the sensitivity and specificity of the software in detecting fundus abnormalities among consecutive patients seen in our ophthalmology center under the RNO protocol from July 28 through October 22, 2021. We compared abnormalities detected by the software operated by ophthalmic technicians (index test) with diagnoses confirmed by the ophthalmologist following additional examinations and/or consultation (reference test). RESULTS The study included 2056 eyes/fundus images of 1028 patients aged 6-50 years. The software detected fundus abnormalities in 149 (7.2%) eyes, or 107 (10.4%) patients. After examining the same fundus images, the ophthalmologist detected abnormalities in 35 (1.7%) eyes, or 20 (1.9%) patients. The ophthalmologist did not detect abnormalities in fundus images deemed normal by the software. The most frequent diagnoses made by the ophthalmologist were glaucoma suspect (0.5% of eyes), peripapillary atrophy (0.44% of eyes), and drusen (0.39% of eyes). The software showed an overall sensitivity of 100% (95% CI 0.879-1.00) and an overall specificity of 94.4% (95% CI 0.933-0.953). Most false-positive software detections (5.6%) were glaucoma suspect, with the differential diagnosis of large physiological optic cups. Immediate OCT imaging by the technician allowed diagnosis by the ophthalmologist without a separate consultation for 43/53 (81%) patients. CONCLUSION Ophthalmic technicians can use this software for highly sensitive screening for fundus abnormalities that require evaluation by an ophthalmologist.
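The reported sensitivity and specificity follow directly from confusion-matrix counts. The counts below are an approximate reconstruction from the figures given in the abstract (2056 eyes, 35 confirmed abnormal, 149 software-positive, no false negatives), not the paper's tabulated data:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts reconstructed from the abstract's figures (approximate):
# 35 confirmed abnormal eyes, all flagged by the software -> FN = 0;
# 149 software-positive eyes -> 114 false positives among 2021 normal eyes.
tp, fn = 35, 0
fp = 149 - 35
tn = 2056 - 35 - fp
sens, spec = sens_spec(tp, fn, tn, fp)
print(sens)            # 1.0   (matches the reported 100% sensitivity)
print(round(spec, 3))  # 0.944 (matches the reported 94.4% specificity)
```

That the reconstructed counts reproduce both reported figures suggests the abstract's percentages were computed per eye over all 2056 images.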
Collapse
Affiliation(s)
- F-P Roubelat
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
| | - V Soler
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
| | - F Varenne
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
| | - V Gualino
- Ophthalmology Department, Clinique Honoré-Cave, Montauban, France.
| |
Collapse
|
15
|
Wang Y, Wei R, Yang D, Song K, Shen Y, Niu L, Li M, Zhou X. Development and validation of a deep learning model to predict axial length from ultra-wide field images. Eye (Lond) 2024; 38:1296-1300. [PMID: 38102471 PMCID: PMC11076502 DOI: 10.1038/s41433-023-02885-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Revised: 11/22/2023] [Accepted: 11/30/2023] [Indexed: 12/17/2023] Open
Abstract
BACKGROUND To validate the feasibility of building a deep learning model to predict axial length (AL) from ultra-widefield (UWF) images for moderately to highly myopic patients. METHODS This study included 6174 UWF images from 3134 myopic patients seen from 2014 to 2020 at the Eye and ENT Hospital of Fudan University. Of the 6174 images, 4939 were used for training, 617 for validation, and 618 for testing. The coefficient of determination (R²), mean absolute error (MAE), and mean squared error (MSE) were used to evaluate model performance. RESULTS The model predicted AL with high accuracy: R², MSE, and MAE were 0.579, 1.419, and 0.9043, respectively. The prediction error was under 1 mm in 64.88% of tests, within 5% in 76.90%, and within 10% in 97.57%. The prediction bias had a strong negative correlation with true AL values and differed significantly between males and females (P < 0.001). Generated heatmaps demonstrated that the model focused on posterior atrophic changes in pathological fundi and on the peri-optic zone in normal fundi. In the sex-specific models, the R², MSE, and MAE of the female AL model were 0.411, 1.357, and 0.911 on the female dataset and 0.343, 2.428, and 1.264 on the male dataset. The corresponding metrics of the male AL model were 0.216, 2.900, and 1.352 on the male dataset and 0.083, 2.112, and 1.154 on the female dataset. CONCLUSIONS It is feasible to use deep learning models to predict AL from UWF images for moderately to highly myopic patients.
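The three evaluation metrics named above (R², MSE, MAE) can be computed as follows. This is a sketch with hypothetical axial-length values, not the study's data:

```python
def regression_metrics(y_true, y_pred):
    """R^2, MSE, and MAE as used to evaluate a regression model."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)  # total variance around the mean
    ss_res = sum(e * e for e in errors)              # residual sum of squares
    r2 = 1 - ss_res / ss_tot
    return r2, mse, mae

# Hypothetical axial lengths in mm (for illustration only)
y_true = [23.0, 24.0, 26.0, 28.0]
y_pred = [23.5, 24.0, 25.0, 29.0]
r2, mse, mae = regression_metrics(y_true, y_pred)
print(round(r2, 3), mse, mae)  # 0.847 0.5625 0.625
```

Note that with these definitions an R² of 0.579, as in the abstract, means the model explains roughly 58% of the variance in AL, even though most absolute errors stay under 1 mm.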
Collapse
Affiliation(s)
- Yunzhe Wang
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
| | - Ruoyan Wei
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Shanghai Medical College and Zhongshan Hospital Immunotherapy Translational Research Center, Shanghai, China
| | - Danjuan Yang
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
| | - Kaimin Song
- Beijing Airdoc Technology Co., Ltd, Beijing, China
| | - Yang Shen
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
| | - Lingling Niu
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
| | - Meiyan Li
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China.
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China.
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China.
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China.
| | - Xingtao Zhou
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China.
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China.
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China.
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China.
| |
Collapse
|
16
|
Wang X, Li H, Zheng H, Sun G, Wang W, Yi Z, Xu A, He L, Wang H, Jia W, Li Z, Li C, Ye M, Du B, Chen C. Automatic Detection of 30 Fundus Diseases Using Ultra-Widefield Fluorescein Angiography with Deep Experts Aggregation. Ophthalmol Ther 2024; 13:1125-1144. [PMID: 38416330 DOI: 10.1007/s40123-024-00900-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2023] [Accepted: 01/26/2024] [Indexed: 02/29/2024] Open
Abstract
INTRODUCTION Inaccurate and untimely diagnoses of fundus diseases lead to vision-threatening complications and even blindness. We built a deep learning platform (DLP) for automatic detection of 30 fundus diseases using ultra-widefield fluorescein angiography (UWFFA) with deep experts aggregation. METHODS This retrospective, cross-sectional database study included a total of 61,609 UWFFA images dating from 2016 to 2021, involving more than 3364 subjects at multiple centers across China. All subjects were divided into 30 groups. The state-of-the-art convolutional neural network architecture ConvNeXt was chosen as the backbone, and the receiver operating characteristic (ROC) curves of the proposed system were evaluated on the test data and on external test data. We compared the classification performance of the proposed system with that of ophthalmologists, including two retinal specialists. RESULTS We built a DLP to analyze UWFFA that can detect up to 30 fundus diseases, with a frequency-weighted average area under the receiver operating characteristic curve (AUC) of 0.940 on the primary test dataset and 0.954 on the external multi-hospital test dataset. The tool shows accuracy comparable to that of retina specialists in diagnosis and evaluation. CONCLUSIONS This is the first study of a large-scale UWFFA dataset for multi-retinal-disease classification. We believe that our UWFFA DLP advances AI-based diagnosis of various retinal diseases and will contribute to labor saving and precision medicine, especially in remote areas.
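A frequency-weighted average AUC, as reported above, weights each per-class AUC by that class's prevalence in the test set, so common diseases contribute proportionally more. A minimal sketch with a hypothetical three-class example (the paper covers 30 classes):

```python
def frequency_weighted_auc(class_counts, class_aucs):
    """Average of per-class AUCs weighted by class frequency."""
    total = sum(class_counts)
    return sum(n * a for n, a in zip(class_counts, class_aucs)) / total

# Hypothetical per-class test counts and one-vs-rest AUCs
counts = [50, 30, 20]
aucs = [0.95, 0.90, 0.92]
print(round(frequency_weighted_auc(counts, aucs), 3))  # 0.929
```

This is the same scheme as scikit-learn's `average='weighted'` option for multiclass `roc_auc_score`, though the paper does not state which implementation it used.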
Collapse
Affiliation(s)
- Xiaoling Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
| | - He Li
- National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, 430072, Hubei, China
| | - Hongmei Zheng
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
| | - Gongpeng Sun
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
| | - Wenyu Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
| | - Zuohuizi Yi
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
| | - A'min Xu
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
| | - Lu He
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
| | - Haiyan Wang
- Shaanxi Eye Hospital, Xi'an People's Hospital (Xi'an Fourth Hospital), No. 21, Jiefang Road, Xi'an, 710004, Shaanxi, China
| | - Wei Jia
- Shaanxi Eye Hospital, Xi'an People's Hospital (Xi'an Fourth Hospital), No. 21, Jiefang Road, Xi'an, 710004, Shaanxi, China
| | - Zhiqing Li
- Tianjin Medical University Eye Hospital, No. 251, Fukang Road, Nankai District, Tianjin, 300384, China
| | - Chang Li
- Tianjin Medical University Eye Hospital, No. 251, Fukang Road, Nankai District, Tianjin, 300384, China
| | - Mang Ye
- National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, 430072, Hubei, China.
| | - Bo Du
- National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, 430072, Hubei, China.
| | - Changzheng Chen
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China.
| |
Collapse
|
17
|
Yang CN, Chen WL, Yeh HH, Chu HS, Wu JH, Hsieh YT. Convolutional Neural Network-Based Prediction of Axial Length Using Color Fundus Photography. Transl Vis Sci Technol 2024; 13:23. [PMID: 38809531 DOI: 10.1167/tvst.13.5.23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/30/2024] Open
Abstract
Purpose To develop convolutional neural network (CNN)-based models for predicting axial length (AL) from color fundus photography (CFP) and to explore associated clinical and structural characteristics. Methods This study enrolled 1105 fundus images from 467 participants with ALs ranging from 19.91 to 32.59 mm, obtained at National Taiwan University Hospital between 2020 and 2021. AL measurements obtained from a scanning laser interferometer served as the gold standard. Prediction accuracy was compared among CNN-based models with different inputs, including CFP, age, and/or sex. Heatmaps were interpreted using integrated gradients. Results Using age, sex, and CFP as input, the model's mean absolute error (MAE, mean ± standard deviation) for AL prediction was 0.771 ± 0.128 mm, outperforming the models that used age and sex alone (1.263 ± 0.115 mm; P < 0.001) and CFP alone (0.831 ± 0.216 mm; P = 0.016) by 39.0% and 7.31%, respectively. Removing relatively poor-quality CFPs slightly reduced the MAE to 0.759 ± 0.120 mm, without statistical significance (P = 0.24). Including age alongside CFP improved prediction accuracy by 5.59% (P = 0.043), while adding sex yielded no significant improvement (P = 0.41). The optic disc and temporal peripapillary area were highlighted as the focused areas on the heatmaps. Conclusions Deep learning-based prediction of AL from CFP was fairly accurate and was enhanced by including age. The optic disc and temporal peripapillary area may contain crucial structural information for AL prediction in CFP. Translational Relevance This study may aid AL assessment and the understanding of fundus morphologic characteristics related to AL.
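The reported 39.0% figure is the relative reduction in MAE versus the age-and-sex-only baseline; a quick arithmetic check using the MAEs from the abstract:

```python
def relative_improvement(baseline_mae, model_mae):
    """Percent reduction in MAE relative to a baseline model."""
    return 100 * (baseline_mae - model_mae) / baseline_mae

# Reported MAEs in mm: full model (age + sex + CFP) vs. age-and-sex-only baseline
print(round(relative_improvement(1.263, 0.771), 1))  # 39.0, matching the abstract
```

The same formula applied to the CFP-only baseline (0.831 mm) gives about 7.2%, close to but not exactly the abstract's 7.31%, presumably due to rounding of the underlying values.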
Collapse
Affiliation(s)
- Che-Ning Yang
- School of Medicine, National Taiwan University, Taipei, Taiwan
| | - Wei-Li Chen
- School of Medicine, National Taiwan University, Taipei, Taiwan
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
| | - Hsu-Hang Yeh
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
| | - Hsiao-Sang Chu
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
- Graduate Institute of Clinical Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan
| | - Jo-Hsuan Wu
- Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, USA
| | - Yi-Ting Hsieh
- School of Medicine, National Taiwan University, Taipei, Taiwan
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
| |
Collapse
|
18
|
Driban M, Yan A, Selvam A, Ong J, Vupparaboina KK, Chhablani J. Artificial intelligence in chorioretinal pathology through fundoscopy: a comprehensive review. Int J Retina Vitreous 2024; 10:36. [PMID: 38654344 PMCID: PMC11036694 DOI: 10.1186/s40942-024-00554-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2024] [Accepted: 04/02/2024] [Indexed: 04/25/2024] Open
Abstract
BACKGROUND Applications for artificial intelligence (AI) in ophthalmology are continually evolving. Fundoscopy is one of the oldest ocular imaging techniques but remains a mainstay in posterior segment imaging due to its prevalence, ease of use, and ongoing technological advancement. AI has been leveraged for fundoscopy to accomplish core tasks including segmentation, classification, and prediction. MAIN BODY In this article we provide a review of AI in fundoscopy applied to representative chorioretinal pathologies, including diabetic retinopathy and age-related macular degeneration, among others. We conclude with a discussion of future directions and current limitations. SHORT CONCLUSION As AI evolves, it will become increasingly essential for the modern ophthalmologist to understand its applications and limitations to improve patient outcomes and continue to innovate.
Affiliation(s)
- Matthew Driban
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Audrey Yan
  - Department of Medicine, West Virginia School of Osteopathic Medicine, Lewisburg, WV, USA
- Amrish Selvam
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Joshua Ong
  - Michigan Medicine, University of Michigan, Ann Arbor, USA
- Jay Chhablani
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA

19
Chang C, Shi W, Wang Y, Zhang Z, Huang X, Jiao Y. The path from task-specific to general purpose artificial intelligence for medical diagnostics: A bibliometric analysis. Comput Biol Med 2024; 172:108258. [PMID: 38467093 DOI: 10.1016/j.compbiomed.2024.108258] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2023] [Revised: 02/08/2024] [Accepted: 03/06/2024] [Indexed: 03/13/2024]
Abstract
Artificial intelligence (AI) has revolutionized many fields, and its potential in healthcare has been increasingly recognized. Based on diverse data sources such as imaging, laboratory tests, medical records, and electrophysiological data, diagnostic AI has witnessed rapid development in recent years. A comprehensive understanding of the development status, contributing factors, and their relationships in the application of AI to medical diagnostics is essential to further promote its use in clinical practice. In this study, we conducted a bibliometric analysis to explore the evolution of task-specific to general-purpose AI for medical diagnostics. We used the Web of Science database to search for relevant articles published between 2010 and 2023, and applied VOSviewer, the R package Bibliometrix, and CiteSpace to analyze collaborative networks and keywords. Our analysis revealed that the field of AI in medical diagnostics has experienced rapid growth in recent years, with a focus on tasks such as image analysis, disease prediction, and decision support. Collaborative networks were observed among researchers and institutions, indicating a trend of global cooperation in this field. Additionally, we identified several key factors contributing to the development of AI in medical diagnostics, including data quality, algorithm design, and computational power. Challenges to progress in the field include model explainability, robustness, and equality, which will require multi-stakeholder, interdisciplinary collaboration to tackle. Our study provides a holistic understanding of the path from task-specific, mono-modal AI toward general-purpose, multimodal AI for medical diagnostics. With the continuous improvement of AI technology and the accumulation of medical data, we believe that AI will play a greater role in medical diagnostics in the future.
Affiliation(s)
- Chuheng Chang
  - Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
  - 4+4 Medical Doctor Program, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Wen Shi
  - Department of Gastroenterology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Youyang Wang
  - Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Zhan Zhang
  - Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Xiaoming Huang
  - Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Yang Jiao
  - Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China

20
Kim S, Park D, Shin Y, Kim MK, Jeon HS, Kim YG, Yoon CH. Deep learning-based fully automated grading system for dry eye disease severity. PLoS One 2024; 19:e0299776. [PMID: 38483911 PMCID: PMC10939279 DOI: 10.1371/journal.pone.0299776] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2023] [Accepted: 02/14/2024] [Indexed: 03/17/2024] Open
Abstract
There is an increasing need for an objective grading system to evaluate the severity of dry eye disease (DED). In this study, a fully automated deep learning-based system for the assessment of DED severity was developed. Corneal fluorescein staining (CFS) images of DED patients from one hospital for system development (n = 1400) and from another hospital for external validation (n = 94) were collected. Three experts graded the CFS images using the NEI scale, and the median value was used as ground truth. The system was developed in three steps: (1) corneal segmentation, (2) CFS candidate region classification, and (3) estimation of NEI grades by CFS density map generation. In addition, two images taken on different days in 50 eyes (100 images) were compared to evaluate the probability of improvement or deterioration. The Dice coefficient of the segmentation model was 0.962. The correlation between the system and the ground truth data was 0.868 (p<0.001) and 0.863 (p<0.001) for the internal and external validation datasets, respectively. The agreement rate for improvement or deterioration was 88% (44/50). The fully automated deep learning-based grading system for DED severity can evaluate the CFS score with high accuracy and thus may have potential for clinical application.
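The Dice coefficient used above to score the corneal segmentation step has a compact definition. A sketch with hypothetical pixel sets, not the study's masks:

```python
# Dice coefficient between two binary segmentation masks, represented here
# as sets of foreground pixel indices. The example sets are hypothetical.
def dice_coefficient(predicted, reference):
    if not predicted and not reference:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2 * len(predicted & reference) / (len(predicted) + len(reference))

predicted_mask = {1, 2, 3, 4}
reference_mask = {2, 3, 4, 5}
# overlap of 3 pixels out of 4 + 4 labeled pixels
print(dice_coefficient(predicted_mask, reference_mask))  # 0.75
```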
Affiliation(s)
- Seonghwan Kim
  - Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
  - Department of Ophthalmology, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Korea
  - Laboratory of Ocular Regenerative Medicine and Immunology, Biomedical Research Institute, Seoul National University Hospital, Seoul, Korea
- Daseul Park
  - Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
  - Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, Korea
- Youmin Shin
  - Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
  - Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, Korea
- Mee Kum Kim
  - Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
  - Laboratory of Ocular Regenerative Medicine and Immunology, Biomedical Research Institute, Seoul National University Hospital, Seoul, Korea
  - Department of Ophthalmology, Seoul National University Hospital, Seoul, Korea
- Hyun Sun Jeon
  - Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
  - Department of Ophthalmology, Seoul National University Bundang Hospital, Seongnam-si, Gyeonggi-do, Korea
- Young-Gon Kim
  - Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
- Chang Ho Yoon
  - Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
  - Laboratory of Ocular Regenerative Medicine and Immunology, Biomedical Research Institute, Seoul National University Hospital, Seoul, Korea
  - Department of Ophthalmology, Seoul National University Hospital, Seoul, Korea

21
Chen R, Zhang W, Song F, Yu H, Cao D, Zheng Y, He M, Shi D. Translating color fundus photography to indocyanine green angiography using deep-learning for age-related macular degeneration screening. NPJ Digit Med 2024; 7:34. [PMID: 38347098 PMCID: PMC10861476 DOI: 10.1038/s41746-024-01018-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2023] [Accepted: 01/18/2024] [Indexed: 02/15/2024] Open
Abstract
Age-related macular degeneration (AMD) is the leading cause of central vision impairment among the elderly. Effective and accurate AMD screening tools are urgently needed. Indocyanine green angiography (ICGA) is a well-established technique for detecting chorioretinal diseases, but its invasive nature and potential risks impede its routine clinical application. Here, we innovatively developed a deep-learning model capable of generating realistic ICGA images from color fundus photography (CF) using generative adversarial networks (GANs) and evaluated its performance in AMD classification. The model was developed with 99,002 CF-ICGA pairs from a tertiary center. The quality of the generated ICGA images underwent objective evaluation using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity measures (SSIM), etc., and subjective evaluation by two experienced ophthalmologists. The model generated realistic early, mid and late-phase ICGA images, with SSIM spanning from 0.57 to 0.65. The subjective quality scores ranged from 1.46 to 2.74 on the five-point scale (1 denotes real-ICGA image quality; kappa 0.79-0.84). Moreover, we assessed the application of translated ICGA images in AMD screening on an external dataset (n = 13,887) by calculating the area under the ROC curve (AUC) in classifying AMD. Combining generated ICGA with real CF images improved the accuracy of AMD classification, with the AUC increasing from 0.93 to 0.97 (P < 0.001). These results suggested that CF-to-ICGA translation can serve as a cross-modal data augmentation method to address the data hunger often encountered in deep-learning research, and as a promising add-on for population-based AMD screening. Real-world validation is warranted before clinical usage.
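Of the objective image-quality measures listed above, PSNR is the simplest to reproduce from its definition. A sketch for flattened 8-bit images (pixel values are hypothetical; SSIM is omitted because it requires windowed local statistics):

```python
import math

# Peak signal-to-noise ratio (PSNR) between two images, given here as flat
# lists of 8-bit pixel intensities. All pixel values are hypothetical.
def psnr(image_a, image_b, max_val=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(image_a, image_b)) / len(image_a)
    if mse == 0:
        return float("inf")  # identical images
    return 20 * math.log10(max_val) - 10 * math.log10(mse)

real_icga = [0, 128, 255, 64]        # hypothetical reference pixels
generated_icga = [2, 126, 250, 60]   # hypothetical GAN-output pixels
print(round(psnr(real_icga, generated_icga), 2))  # 37.25
```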
Affiliation(s)
- Ruoyu Chen
  - Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  - Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Weiyi Zhang
  - Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  - Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Fan Song
  - Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  - Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Honghua Yu
  - Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, China
- Dan Cao
  - Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, China
- Yingfeng Zheng
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Mingguang He
  - Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  - Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  - Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Hong Kong SAR, China
- Danli Shi
  - Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  - Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China

22
Vilela MAP, Arrigo A, Parodi MB, da Silva Mengue C. Smartphone Eye Examination: Artificial Intelligence and Telemedicine. Telemed J E Health 2024; 30:341-353. [PMID: 37585566 DOI: 10.1089/tmj.2023.0041] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/18/2023] Open
Abstract
Background: The current medical scenario is closely linked to recent progress in telecommunications, photodocumentation, and artificial intelligence (AI). Smartphone eye examination may represent a promising tool in the technological spectrum, with special interest for primary health care services. Obtaining fundus imaging with this technique has improved and democratized the teaching of fundoscopy, but in particular, it contributes greatly to screening diseases with high rates of blindness. Eye examination using smartphones essentially represents a cheap and safe method, thus contributing to public policies on population screening. This review aims to provide an update on the use of this resource and its future prospects, especially as a screening and ophthalmic diagnostic tool. Methods: In this review, we surveyed major published advances in retinal and anterior segment analysis using AI. We performed an electronic search on the Medical Literature Analysis and Retrieval System Online (MEDLINE), EMBASE, and Cochrane Library for published literature without a deadline. We included studies that compared the diagnostic accuracy of smartphone ophthalmoscopy for detecting prevalent diseases with an accurate or commonly employed reference standard. Results: There are few databases with complete metadata, providing demographic data, and few databases with sufficient images involving current or new therapies. It should be taken into consideration that these are databases containing images captured using different systems and formats, with information often being excluded without essential detailing of the reasons for exclusion, which further distances them from real-life conditions. The safety, portability, low cost, and reproducibility of smartphone eye images are discussed in several studies, with encouraging results. 
Conclusions: The high level of agreement between the conventional and smartphone methods points to a powerful arsenal for screening and early diagnosis of the main causes of blindness, such as cataract, glaucoma, diabetic retinopathy, and age-related macular degeneration. In addition to streamlining the medical workflow and bringing benefits for public health policies, smartphone eye examination can make safe, high-quality assessment available to the population.
Affiliation(s)
- Alessandro Arrigo
  - Department of Ophthalmology, Scientific Institute San Raffaele, Milan, Italy
  - University Vita-Salute, Milan, Italy
- Maurizio Battaglia Parodi
  - Department of Ophthalmology, Scientific Institute San Raffaele, Milan, Italy
  - University Vita-Salute, Milan, Italy
- Carolina da Silva Mengue
  - Post-Graduation Ophthalmological School, Ivo Corrêa-Meyer/Cardiology Institute, Porto Alegre, Brazil

23
Ong KTI, Kwon T, Jang H, Kim M, Lee CS, Byeon SH, Kim SS, Yeo J, Choi EY. Multitask Deep Learning for Joint Detection of Necrotizing Viral and Noninfectious Retinitis From Common Blood and Serology Test Data. Invest Ophthalmol Vis Sci 2024; 65:5. [PMID: 38306107 PMCID: PMC10851173 DOI: 10.1167/iovs.65.2.5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Accepted: 01/09/2024] [Indexed: 02/03/2024] Open
Abstract
Purpose Necrotizing viral retinitis is a serious eye infection that requires immediate treatment to prevent permanent vision loss. Uncertain clinical suspicion can result in delayed diagnosis, inappropriate administration of corticosteroids, or repeated intraocular sampling. To quickly and accurately distinguish between viral and noninfectious retinitis, we aimed to develop deep learning (DL) models solely using noninvasive blood test data. Methods This cross-sectional study trained DL models using common blood and serology test data from 3080 patients (noninfectious uveitis of the posterior segment [NIU-PS] = 2858, acute retinal necrosis [ARN] = 66, cytomegalovirus [CMV] retinitis = 156). Following the development of separate base DL models for ARN and CMV retinitis, multitask learning (MTL) was employed to enable simultaneous discrimination. Advanced MTL models incorporating adversarial training were used to enhance DL feature extraction from the small, imbalanced data. We evaluated model performance, disease-specific important features, and the causal relationship between DL features and detection results. Results The presented models all achieved excellent detection performances, with the adversarial MTL model achieving the highest areas under the receiver operating characteristic curve (0.932 for ARN and 0.982 for CMV retinitis). Significant features for ARN detection included varicella-zoster virus (VZV) immunoglobulin M (IgM), herpes simplex virus immunoglobulin G, and neutrophil count, while for CMV retinitis, they encompassed VZV IgM, CMV IgM, and lymphocyte count. The adversarial MTL model exhibited substantial changes in detection outcomes when the key features were contaminated, indicating stronger causality between DL features and detection results. Conclusions The adversarial MTL model, using blood test data, may serve as a reliable adjunct for the expedited diagnosis of ARN, CMV retinitis, and NIU-PS simultaneously in real clinical settings.
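The AUC figures quoted above have a direct probabilistic reading: the chance that a randomly chosen positive case outscores a randomly chosen negative one. A sketch of that rank-based computation, with hypothetical scores rather than the study's model outputs:

```python
# Area under the ROC curve via the Mann-Whitney interpretation: the
# probability that a randomly chosen positive case receives a higher
# score than a randomly chosen negative case (ties count half).
# The score lists below are hypothetical.
def roc_auc(positive_scores, negative_scores):
    wins = 0.0
    for p in positive_scores:
        for n in negative_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positive_scores) * len(negative_scores))

arn_scores = [0.9, 0.8, 0.4]      # hypothetical scores for ARN cases
control_scores = [0.5, 0.3, 0.2]  # hypothetical scores for NIU-PS cases
print(round(roc_auc(arn_scores, control_scores), 3))  # 0.889
```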
Affiliation(s)
- Kai Tzu-iunn Ong
  - Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
- Taeyoon Kwon
  - Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
- Harok Jang
  - Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
- Min Kim
  - Department of Ophthalmology, Institute of Vision Research, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Christopher Seungkyu Lee
  - Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Suk Ho Byeon
  - Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Sung Soo Kim
  - Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jinyoung Yeo
  - Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
- Eun Young Choi
  - Department of Ophthalmology, Institute of Vision Research, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea

24
Du F, Zhao L, Luo H, Xing Q, Wu J, Zhu Y, Xu W, He W, Wu J. Recognition of eye diseases based on deep neural networks for transfer learning and improved D-S evidence theory. BMC Med Imaging 2024; 24:19. [PMID: 38238662 PMCID: PMC10797809 DOI: 10.1186/s12880-023-01176-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Accepted: 12/06/2023] [Indexed: 01/22/2024] Open
Abstract
BACKGROUND Human vision has inspired significant advancements in computer vision, yet the human eye is prone to various silent eye diseases. With the advent of deep learning, computer vision for detecting human eye diseases has gained prominence, but most studies have focused only on a limited number of eye diseases. RESULTS Our model demonstrated a reduction in inherent bias and enhanced robustness. The fused network achieved an Accuracy of 0.9237, Kappa of 0.878, F1 Score of 0.914 (95% CI [0.875-0.954]), Precision of 0.945 (95% CI [0.928-0.963]), Recall of 0.89 (95% CI [0.821-0.958]), and an AUC value of ROC at 0.987. These metrics are notably higher than those of comparable studies. CONCLUSIONS Our deep neural network-based model exhibited improvements in eye disease recognition metrics over models from peer research, highlighting its potential application in this field. METHODS In deep learning-based eye disease recognition, we train and fine-tune the network via transfer learning to improve learning efficiency. To eliminate the decision bias of the models and improve the credibility of their decisions, we propose a model decision fusion method based on D-S evidence theory. However, classical D-S theory can yield paradoxical results under conflicting evidence; we therefore eliminate the existing paradoxes, propose an improved D-S evidence theory (ID-SET), and apply it to the decision fusion of eye disease recognition models.
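The decision fusion described above builds on Dempster's rule of combination. A minimal sketch of the classical rule for two mass functions over a binary frame of discernment (the paper's ID-SET modifications are not reproduced here, and the masses are hypothetical):

```python
# Dempster's rule of combination for two mass functions over the same
# frame of discernment. Hypotheses are frozensets of labels; the mass
# values below are hypothetical, not taken from the paper.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, mass_a in m1.items():
        for b, mass_b in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mass_a * mass_b
            else:
                conflict += mass_a * mass_b  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    # Normalize by 1 - K, redistributing the conflicting mass.
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

diseased = frozenset({"diseased"})
healthy = frozenset({"healthy"})
theta = frozenset({"diseased", "healthy"})  # ignorance: "either"
model_a = {diseased: 0.7, theta: 0.3}                 # hypothetical classifier A
model_b = {diseased: 0.6, healthy: 0.2, theta: 0.2}   # hypothetical classifier B
fused = dempster_combine(model_a, model_b)
print(round(fused[diseased], 3))  # 0.86
```

With these masses the pairwise conflict is K = 0.7 × 0.2 = 0.14, so the fused belief in "diseased" is (0.42 + 0.14 + 0.18) / 0.86 ≈ 0.860.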
Affiliation(s)
- Fanyu Du
  - School of Medical Imaging, North Sichuan Medical College, Nanchong, 637000, China
  - Faculty of Data Science, City University of Macau, Macau, 999078, China
  - Guangdong Provincial Key Laboratory of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518000, China
- Lishuai Zhao
  - School of Medical Imaging, North Sichuan Medical College, Nanchong, 637000, China
- Hui Luo
  - Faculty of Data Science, City University of Macau, Macau, 999078, China
  - School of Information and Management, Guangxi Medical University, Nanning, 530021, China
- Qijia Xing
  - Affiliated Hospital of North Sichuan Medical College, Nanchong, 637000, China
- Jun Wu
  - School of Medical Imaging, North Sichuan Medical College, Nanchong, 637000, China
- Yuanzhong Zhu
  - School of Medical Imaging, North Sichuan Medical College, Nanchong, 637000, China
- Wansong Xu
  - School of Medical Imaging, North Sichuan Medical College, Nanchong, 637000, China
- Wenjing He
  - School of Medical Imaging, North Sichuan Medical College, Nanchong, 637000, China
- Jianfang Wu
  - Faculty of Data Science, City University of Macau, Macau, 999078, China

25
Li B, Chen H, Yu W, Zhang M, Lu F, Ma J, Hao Y, Li X, Hu B, Shen L, Mao J, He X, Wang H, Ding D, Li X, Chen Y. The performance of a deep learning system in assisting junior ophthalmologists in diagnosing 13 major fundus diseases: a prospective multi-center clinical trial. NPJ Digit Med 2024; 7:8. [PMID: 38212607 PMCID: PMC10784504 DOI: 10.1038/s41746-023-00991-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Accepted: 12/11/2023] [Indexed: 01/13/2024] Open
Abstract
Artificial intelligence (AI)-based diagnostic systems have been reported to improve fundus disease screening in previous studies. This multicenter prospective self-controlled clinical trial aims to evaluate the diagnostic performance of a deep learning system (DLS) in assisting junior ophthalmologists in detecting 13 major fundus diseases. A total of 1493 fundus images from 748 patients were prospectively collected from five tertiary hospitals in China. Nine junior ophthalmologists were trained and annotated the images with or without the suggestions proposed by the DLS. The diagnostic performance was evaluated among three groups: DLS-assisted junior ophthalmologist group (test group), junior ophthalmologist group (control group) and DLS group. The diagnostic consistency was 84.9% (95%CI, 83.0% ~ 86.9%), 72.9% (95%CI, 70.3% ~ 75.6%) and 85.5% (95%CI, 83.5% ~ 87.4%) in the test group, control group and DLS group, respectively. With the help of the proposed DLS, the diagnostic consistency of junior ophthalmologists improved by approximately 12% (95% CI, 9.1% ~ 14.9%) with statistical significance (P < 0.001). For the detection of 13 diseases, the test group achieved significantly higher sensitivities (72.2% ~ 100.0%) and comparable specificities (90.8% ~ 98.7%) compared with the control group (sensitivities, 50% ~ 100%; specificities, 96.7% ~ 99.8%). The DLS group presented similar performance to the test group in the detection of any fundus abnormality (sensitivity, 95.7%; specificity, 87.2%) and each of the 13 diseases (sensitivity, 83.3% ~ 100.0%; specificity, 89.0% ~ 98.0%). The proposed DLS provided a novel approach for the automatic detection of 13 major fundus diseases with high diagnostic consistency and helped improve the performance of junior ophthalmologists, in particular by reducing the risk of missed diagnoses. ClinicalTrials.gov NCT04723160.
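Sensitivity and specificity, the per-disease metrics the trial reports, come straight from confusion-matrix counts. A sketch with hypothetical counts:

```python
# Sensitivity (true-positive rate) and specificity (true-negative rate)
# from confusion-matrix counts. All counts below are hypothetical.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# Hypothetical reading for one fundus disease:
tp, fn = 43, 7    # diseased eyes flagged / missed
tn, fp = 90, 10   # healthy eyes cleared / falsely flagged
print(sensitivity(tp, fn), specificity(tn, fp))  # 0.86 0.9
```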
Affiliation(s)
- Bing Li
  - Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
  - Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Huan Chen
  - Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
  - Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Weihong Yu
  - Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
  - Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Ming Zhang
  - Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Fang Lu
  - Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Jingxue Ma
  - Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
- Yuhua Hao
  - Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
- Xiaorong Li
  - Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
- Bojie Hu
  - Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
- Lijun Shen
  - Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
- Jianbo Mao
  - Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
- Xixi He
  - School of Information Science and Technology, North China University of Technology, Beijing, China
  - Beijing Key Laboratory on Integration and Analysis of Large-scale Stream Data, Beijing, China
- Hao Wang
  - Visionary Intelligence Ltd., Beijing, China
- Xirong Li
  - MoE Key Lab of DEKE, Renmin University of China, Beijing, China
- Youxin Chen
  - Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
  - Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China

26
Li W, Bian L, Ma B, Sun T, Liu Y, Sun Z, Zhao L, Feng K, Yang F, Wang X, Chan S, Dou H, Qi H. Interpretable Detection of Diabetic Retinopathy, Retinal Vein Occlusion, Age-Related Macular Degeneration, and Other Fundus Conditions. Diagnostics (Basel) 2024; 14:121. [PMID: 38247998 DOI: 10.3390/diagnostics14020121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2023] [Revised: 12/23/2023] [Accepted: 12/27/2023] [Indexed: 01/23/2024] Open
Abstract
Diabetic retinopathy (DR), retinal vein occlusion (RVO), and age-related macular degeneration (AMD) pose significant global health challenges, often resulting in vision impairment and blindness. Automatic detection of these conditions is crucial, particularly in underserved rural areas with limited access to ophthalmic services. Despite remarkable advancements in artificial intelligence, especially convolutional neural networks (CNNs), their complexity can make interpretation difficult. In this study, we curated a dataset consisting of 15,089 color fundus photographs (CFPs) obtained from 8110 patients who underwent fundus fluorescein angiography (FFA) examination. The primary objective was to construct integrated models that merge CNNs with an attention mechanism. These models were designed for a hierarchical multilabel classification task, focusing on the detection of DR, RVO, AMD, and other fundus conditions. Furthermore, our approach extended to the detailed classification of DR, RVO, and AMD according to their respective subclasses. We employed a methodology that entails the translation of diagnostic information obtained from FFA results into CFPs. Our investigation focused on evaluating the models' ability to achieve precise diagnoses solely based on CFPs. Remarkably, our models showcased improvements across diverse fundus conditions, with the ConvNeXt-base + attention model standing out for its exceptional performance. The ConvNeXt-base + attention model achieved remarkable metrics, including an area under the receiver operating characteristic curve (AUC) of 0.943, a referable F1 score of 0.870, and a Cohen's kappa of 0.778 for DR detection. For RVO, it attained an AUC of 0.960, a referable F1 score of 0.854, and a Cohen's kappa of 0.819. Furthermore, in AMD detection, the model achieved an AUC of 0.959, an F1 score of 0.727, and a Cohen's kappa of 0.686. 
Impressively, the model demonstrated proficiency in subclassifying RVO and AMD, showcasing commendable sensitivity and specificity. Moreover, our models enhanced interpretability by visualizing attention weights on fundus images, aiding in the identification of disease findings. These outcomes underscore the substantial impact of our models in advancing the detection of DR, RVO, and AMD, offering the potential for improved patient outcomes and positively influencing the healthcare landscape.
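Cohen's kappa, reported above for each detection task, corrects raw agreement for the agreement expected by chance. A sketch computing it from a confusion matrix, with hypothetical counts:

```python
# Cohen's kappa from a square confusion matrix: observed agreement
# corrected by the agreement expected under independent marginals.
# The matrix below is hypothetical, not data from the study.
def cohens_kappa(confusion):
    n = sum(sum(row) for row in confusion)
    k = len(confusion)
    observed = sum(confusion[i][i] for i in range(k)) / n
    row_marg = [sum(row) / n for row in confusion]
    col_marg = [sum(confusion[i][j] for i in range(k)) / n for j in range(k)]
    expected = sum(r * c for r, c in zip(row_marg, col_marg))
    return (observed - expected) / (1 - expected)

# Rows: model decision; columns: reference grading (hypothetical counts).
confusion = [[40, 10],
             [5, 45]]
print(round(cohens_kappa(confusion), 3))  # 0.7
```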
Affiliation(s)
- Wenlong Li, Linbo Bian, Baikai Ma, Tong Sun, Yiyun Liu, Zhengze Sun, Lin Zhao, Kang Feng, Fan Yang, Xiaona Wang, Szyyann Chan, Hongliang Dou, Hong Qi
  - Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
  - Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China

27
He W, Han X, Ong JS, Wu Y, Hewitt AW, Mackey DA, Gharahkhani P, MacGregor S. Genome-Wide Meta-analysis Identifies Risk Loci and Improves Disease Prediction of Age-Related Macular Degeneration. Ophthalmology 2024; 131:16-29. [PMID: 37634759 DOI: 10.1016/j.ophtha.2023.08.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2023] [Revised: 07/22/2023] [Accepted: 08/15/2023] [Indexed: 08/29/2023] Open
Abstract
PURPOSE To identify age-related macular degeneration (AMD) risk loci and to establish a polygenic prediction model. DESIGN Genome-wide association study (GWAS) and polygenic risk score (PRS) construction. PARTICIPANTS We included 64 885 European patients with AMD and 568 740 control participants (with overlapping samples) in the UK Biobank, Genetic Epidemiology Research on Aging (GERA), International AMD Consortium, FinnGen, and published early AMD GWASs in meta-analyses, as well as 733 European patients with AMD and 20 487 control participants from the Canadian Longitudinal Study on Aging (CLSA) and non-Europeans from the UK Biobank and GERA for polygenic risk score validation. METHODS A multitrait meta-analysis of GWASs comprised 64 885 patients with AMD and 568 740 control participants; the multitrait approach accounted for sample overlap. We constructed a PRS for AMD based on both previously reported and unreported AMD loci. We applied the PRS to nonoverlapping data from the CLSA. MAIN OUTCOME MEASURES We identified several single nucleotide polymorphisms associated with AMD and established a PRS for AMD risk prediction. RESULTS We identified 63 AMD risk loci alongside the well-established AMD loci CFH and ARMS2, including 9 loci that were not reported in previous GWASs, some of which previously were linked to other eye diseases such as glaucoma (e.g., HIC1). We applied our PRS to nonoverlapping data from the CLSA. A new PRS constructed with the PRS-CS method significantly improved the prediction accuracy of AMD risk compared with PRSs from previously published datasets. We further showed that even people who carry all the well-known AMD risk alleles at CFH and ARMS2 vary considerably in their AMD risk (ranging from close to 0 in individuals with low PRS to > 50% in individuals with high PRS).
Although our PRS was derived in individuals of European ancestry, the PRS shows potential for predicting risk in people of East Asian, South Asian, and Latino ancestry. CONCLUSIONS Our findings improve the knowledge of the genetic architecture of AMD and help achieve better accuracy in AMD prediction. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
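A polygenic risk score of this kind is, at scoring time, a weighted sum of per-SNP risk-allele dosages, with weights taken from GWAS effect sizes (PRS-CS additionally shrinks those weights with a Bayesian continuous-shrinkage prior before scoring). A minimal sketch of the scoring step, with made-up dosages and effect sizes:

```python
import numpy as np

# Genotype dosages (0, 1, or 2 copies of the risk allele):
# 3 hypothetical individuals x 4 hypothetical SNPs.
dosages = np.array([
    [0, 1, 2, 0],
    [2, 2, 1, 1],
    [0, 0, 0, 1],
])
# Per-SNP effect sizes (e.g. log odds ratios) from a GWAS meta-analysis.
betas = np.array([0.30, 0.12, 0.45, -0.08])

# One score per individual; a higher score means higher predicted risk.
prs = dosages @ betas
```

In practice the weighted sum runs over thousands to millions of variants, but the scoring arithmetic is the same.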
Collapse
Affiliation(s)
- Weixiong He
- QIMR Berghofer Medical Research Institute, Brisbane, Queensland, Australia; Faculty of Medicine, University of Queensland, Brisbane, Queensland, Australia.
| | - Xikun Han
- QIMR Berghofer Medical Research Institute, Brisbane, Queensland, Australia; Faculty of Medicine, University of Queensland, Brisbane, Queensland, Australia
| | - Jue-Sheng Ong
- QIMR Berghofer Medical Research Institute, Brisbane, Queensland, Australia
| | - Yeda Wu
- QIMR Berghofer Medical Research Institute, Brisbane, Queensland, Australia; Faculty of Medicine, University of Queensland, Brisbane, Queensland, Australia
| | - Alex W Hewitt
- Centre for Eye Research Australia, University of Melbourne, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia; School of Medicine, Menzies Institute for Medical Research, University of Tasmania, Hobart, Tasmania, Australia
| | - David A Mackey
- Lions Eye Institute, Centre for Ophthalmology and Visual Science, University of Western Australia, Perth, Western Australia, Australia
| | - Puya Gharahkhani
- QIMR Berghofer Medical Research Institute, Brisbane, Queensland, Australia
| | - Stuart MacGregor
- QIMR Berghofer Medical Research Institute, Brisbane, Queensland, Australia; Faculty of Medicine, University of Queensland, Brisbane, Queensland, Australia
| |
Collapse
|
28
|
Talcott KE, Valentim CCS, Perkins SW, Ren H, Manivannan N, Zhang Q, Bagherinia H, Lee G, Yu S, D'Souza N, Jarugula H, Patel K, Singh RP. Automated Detection of Abnormal Optical Coherence Tomography B-scans Using a Deep Learning Artificial Intelligence Neural Network Platform. Int Ophthalmol Clin 2024; 64:115-127. [PMID: 38146885 DOI: 10.1097/iio.0000000000000519] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2023]
|
29
|
Shimizu E, Tanji M, Nakayama S, Ishikawa T, Agata N, Yokoiwa R, Nishimura H, Khemlani RJ, Sato S, Hanyuda A, Sato Y. AI-based diagnosis of nuclear cataract from slit-lamp videos. Sci Rep 2023; 13:22046. [PMID: 38086904 PMCID: PMC10716159 DOI: 10.1038/s41598-023-49563-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2023] [Accepted: 12/09/2023] [Indexed: 12/18/2023] Open
Abstract
In ophthalmology, the availability of many fundus photographs and optical coherence tomography images has spurred consideration of using artificial intelligence (AI) for diagnosing retinal and optic nerve disorders. However, AI application for diagnosing anterior segment eye conditions remains unfeasible due to limited standardized images and analysis models. We addressed this limitation by augmenting the quantity of standardized optical images using a video-recordable slit-lamp device. We then investigated whether our proposed machine learning (ML) AI algorithm could accurately diagnose cataracts from videos recorded with this device. We collected 206,574 cataract frames from 1812 cataract eye videos. Ophthalmologists graded the nuclear cataracts (NUCs) using the cataract grading scale of the World Health Organization. These gradings were used to train and validate an ML algorithm. A validation dataset was used to compare NUC diagnosis and grading between the AI and ophthalmologists. The results for the individual cataract grades were: NUC 0: area under the curve (AUC) = 0.967; NUC 1: AUC = 0.928; NUC 2: AUC = 0.923; and NUC 3: AUC = 0.949. Our ML-based cataract diagnostic model achieved performance comparable to a conventional device, presenting a promising and accurate automated diagnostic AI tool.
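Per-grade AUCs like the NUC 0–3 values above are typically computed one-vs-rest: each grade in turn is treated as the positive class against all others. A sketch of that evaluation pattern with toy gradings and scores (not the study's data; the one-vs-rest scheme is an assumption, since the abstract does not state how the per-grade AUCs were derived):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical WHO NUC gradings for six frames.
grades = np.array([0, 1, 2, 0, 1, 2])
# Per-frame score for each grade; one-hot here, so each grade is
# perfectly separable and every one-vs-rest AUC comes out as 1.0.
scores = np.eye(3)[grades]

aucs = {g: roc_auc_score((grades == g).astype(int), scores[:, g])
        for g in range(3)}
```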
Collapse
Affiliation(s)
- Eisuke Shimizu
- OUI Inc., Tokyo, Japan.
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan.
- Yokohama Keiai Eye Clinic, Yokohama, Japan.
| | - Makoto Tanji
- OUI Inc., Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
| | - Shintato Nakayama
- OUI Inc., Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
| | - Toshiki Ishikawa
- OUI Inc., Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
| | | | | | - Hiroki Nishimura
- OUI Inc., Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Yokohama Keiai Eye Clinic, Yokohama, Japan
| | | | - Shinri Sato
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Yokohama Keiai Eye Clinic, Yokohama, Japan
| | - Akiko Hanyuda
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
| | - Yasunori Sato
- Department of Preventive Medicine and Public Health, School of Medicine, Keio University, Tokyo, Japan
| |
Collapse
|
30
|
Vandevenne MM, Favuzza E, Veta M, Lucenteforte E, Berendschot TT, Mencucci R, Nuijts RM, Virgili G, Dickman MM. Artificial intelligence for detecting keratoconus. Cochrane Database Syst Rev 2023; 11:CD014911. [PMID: 37965960 PMCID: PMC10646985 DOI: 10.1002/14651858.cd014911.pub2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/16/2023]
Abstract
BACKGROUND Keratoconus remains difficult to diagnose, especially in the early stages. It is a progressive disorder of the cornea that starts at a young age. Diagnosis is based on clinical examination and corneal imaging; though in the early stages, when there are no clinical signs, diagnosis depends on the interpretation of corneal imaging (e.g. topography and tomography) by trained cornea specialists. Using artificial intelligence (AI) to analyse the corneal images and detect cases of keratoconus could help prevent visual acuity loss and even corneal transplantation. However, a missed diagnosis in people seeking refractive surgery could lead to weakening of the cornea and keratoconus-like ectasia. There is a need for a reliable overview of the accuracy of AI for detecting keratoconus and the applicability of this automated method to the clinical setting. OBJECTIVES To assess the diagnostic accuracy of artificial intelligence (AI) algorithms for detecting keratoconus in people presenting with refractive errors, especially those whose vision can no longer be fully corrected with glasses, those seeking corneal refractive surgery, and those suspected of having keratoconus. AI could help ophthalmologists, optometrists, and other eye care professionals to make decisions on referral to cornea specialists. Secondary objectives To assess the following potential causes of heterogeneity in diagnostic performance across studies:
• Different AI algorithms (e.g. neural networks, decision trees, support vector machines)
• Index test methodology (preprocessing techniques, core AI method, and postprocessing techniques)
• Sources of input to train algorithms (topography and tomography images from Placido disc system, Scheimpflug system, slit-scanning system, or optical coherence tomography (OCT); number of training and testing cases/images; label/endpoint variable used for training)
• Study setting
• Study design
• Ethnicity, or geographic area as its proxy
• Different index test positivity criteria provided by the topography or tomography device
• Reference standard, topography or tomography, one or two cornea specialists
• Definition of keratoconus
• Mean age of participants
• Recruitment of participants
• Severity of keratoconus (clinically manifest or subclinical)
SEARCH METHODS We searched CENTRAL (which contains the Cochrane Eyes and Vision Trials Register), Ovid MEDLINE, Ovid Embase, OpenGrey, the ISRCTN registry, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP). There were no date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 29 November 2022. SELECTION CRITERIA We included cross-sectional and diagnostic case-control studies that investigated AI for the diagnosis of keratoconus using topography, tomography, or both. We included studies that diagnosed manifest keratoconus, subclinical keratoconus, or both. The reference standard was the interpretation of topography or tomography images by at least two cornea specialists. DATA COLLECTION AND ANALYSIS Two review authors independently extracted the study data and assessed the quality of studies using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. When an article contained multiple AI algorithms, we selected the algorithm with the highest Youden's index. We assessed the certainty of evidence using the GRADE approach.
MAIN RESULTS We included 63 studies, published between 1994 and 2022, that developed and investigated the accuracy of AI for the diagnosis of keratoconus. There were three different units of analysis in the studies: eyes, participants, and images. Forty-four studies analysed 23,771 eyes, four studies analysed 3843 participants, and 15 studies analysed 38,832 images. Fifty-four articles evaluated the detection of manifest keratoconus, defined as a cornea that showed any clinical sign of keratoconus. The accuracy of AI seems almost perfect, with a summary sensitivity of 98.6% (95% confidence interval (CI) 97.6% to 99.1%) and a summary specificity of 98.3% (95% CI 97.4% to 98.9%). However, accuracy varied across studies and the certainty of the evidence was low. Twenty-eight articles evaluated the detection of subclinical keratoconus, although the definition of subclinical varied. We grouped subclinical keratoconus, forme fruste, and very asymmetrical eyes together. The tests showed good accuracy, with a summary sensitivity of 90.0% (95% CI 84.5% to 93.8%) and a summary specificity of 95.5% (95% CI 91.9% to 97.5%). However, the certainty of the evidence was very low for sensitivity and low for specificity. In both groups, we graded most studies at high risk of bias, with high applicability concerns, in the domain of patient selection, since most were case-control studies. Moreover, we graded the certainty of evidence as low to very low due to selection bias, inconsistency, and imprecision. We could not explain the heterogeneity between the studies. The sensitivity analyses based on study design, AI algorithm, imaging technique (topography versus tomography), and data source (parameters versus images) showed no differences in the results. AUTHORS' CONCLUSIONS AI appears to be a promising triage tool in ophthalmologic practice for diagnosing keratoconus. 
Test accuracy was very high for manifest keratoconus and slightly lower for subclinical keratoconus, indicating a higher chance of missing a diagnosis in people without clinical signs. This could lead to progression of keratoconus or an erroneous indication for refractive surgery, which would worsen the disease. We are unable to draw clear and reliable conclusions due to the high risk of bias, the unexplained heterogeneity of the results, and high applicability concerns, all of which reduced our confidence in the evidence. Greater standardization in future research would increase the quality of studies and improve comparability between studies.
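The review's rule of selecting, within each article, the algorithm with the highest Youden's index (J = sensitivity + specificity − 1) can be sketched directly; the candidate algorithms and their operating points below are illustrative, not values from the review:

```python
# (sensitivity, specificity) per candidate algorithm -- illustrative only.
algorithms = {
    "neural_network": (0.986, 0.983),
    "decision_tree": (0.91, 0.95),
    "svm": (0.95, 0.92),
}

def youden_j(sensitivity, specificity):
    """Youden's index: 0 for a chance-level test, 1 for a perfect one."""
    return sensitivity + specificity - 1

best = max(algorithms, key=lambda name: youden_j(*algorithms[name]))
```

J weights sensitivity and specificity equally, which is why it is a common tie-breaker when several algorithms are reported at different operating points.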
Collapse
Affiliation(s)
- Magali Ms Vandevenne
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
| | - Eleonora Favuzza
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
| | - Mitko Veta
- Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
| | - Ersilia Lucenteforte
- Department of Statistics, Computer Science and Applications «G. Parenti», University of Florence, Florence, Italy
| | - Tos Tjm Berendschot
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
| | - Rita Mencucci
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
| | - Rudy Mma Nuijts
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
| | - Gianni Virgili
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Queen's University Belfast, Belfast, UK
| | - Mor M Dickman
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
| |
Collapse
|
31
|
Karlin J, Gai L, LaPierre N, Danesh K, Farajzadeh J, Palileo B, Taraszka K, Zheng J, Wang W, Eskin E, Rootman D. Ensemble neural network model for detecting thyroid eye disease using external photographs. Br J Ophthalmol 2023; 107:1722-1729. [PMID: 36126104 DOI: 10.1136/bjo-2022-321833] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 08/22/2022] [Indexed: 11/03/2022]
Abstract
PURPOSE To describe an artificial intelligence platform that detects thyroid eye disease (TED). DESIGN Development of a deep learning model. METHODS 1944 photographs from a clinical database were used to train a deep learning model. 344 additional images ('test set') were used to calculate performance metrics. Receiver operating characteristic, precision-recall curves and heatmaps were generated. From the test set, 50 images were randomly selected ('survey set') and used to compare model performance with ophthalmologist performance. 222 images obtained from a separate clinical database were used to assess model recall and to quantitate model performance with respect to disease stage and grade. RESULTS The model achieved test set accuracy of 89.2%, specificity 86.9%, recall 93.4%, precision 79.7% and an F1 score of 86.0%. Heatmaps demonstrated that the model identified pixels corresponding to clinical features of TED. On the survey set, the ensemble model achieved accuracy, specificity, recall, precision and F1 score of 86%, 84%, 89%, 77% and 82%, respectively. 27 ophthalmologists achieved mean performance of 75%, 82%, 63%, 72% and 66%, respectively. On the second test set, the model achieved recall of 91.9%, with higher recall for moderate to severe (98.2%, n=55) and active disease (98.3%, n=60), as compared with mild (86.8%, n=68) or stable disease (85.7%, n=63). CONCLUSIONS The deep learning classifier is a novel approach to identify TED and is a first step in the development of tools to improve diagnostic accuracy and lower barriers to specialist evaluation.
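All five reported metrics (accuracy, specificity, recall, precision, F1) derive from the test-set confusion matrix; a sketch with illustrative counts (not the study's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, specificity, recall, precision, F1 from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    specificity = tn / (tn + fp)
    recall = tp / (tp + fn)        # a.k.a. sensitivity
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, specificity, recall, precision, f1

# Illustrative confusion counts for a balanced test set of 200 images.
metrics = classification_metrics(tp=90, fp=10, fn=10, tn=90)
```

The recall/precision gap reported for the model (93.4% vs. 79.7%) is the usual signature of a classifier tuned to miss few positives at the cost of more false alarms.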
Collapse
Affiliation(s)
- Justin Karlin
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
| | - Lisa Gai
- Department of Computer Science, University of California, Los Angeles, California, USA
| | - Nathan LaPierre
- Department of Computer Science, University of California, Los Angeles, California, USA
| | - Kayla Danesh
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
| | - Justin Farajzadeh
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
| | - Bea Palileo
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
| | - Kodi Taraszka
- Department of Computer Science, University of California, Los Angeles, California, USA
| | - Jie Zheng
- Department of Computer Science, University of California, Los Angeles, California, USA
| | - Wei Wang
- Department of Computer Science, University of California, Los Angeles, California, USA
| | - Eleazar Eskin
- Department of Computer Science, University of California, Los Angeles, California, USA
- Department of Human Genetics, University of California, Los Angeles, California, USA
| | - Daniel Rootman
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
| |
Collapse
|
32
|
Zhao X, Lin Z, Yu S, Xiao J, Xie L, Xu Y, Tsui CK, Cui K, Zhao L, Zhang G, Zhang S, Lu Y, Lin H, Liang X, Lin D. An artificial intelligence system for the whole process from diagnosis to treatment suggestion of ischemic retinal diseases. Cell Rep Med 2023; 4:101197. [PMID: 37734379 PMCID: PMC10591037 DOI: 10.1016/j.xcrm.2023.101197] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2023] [Revised: 05/29/2023] [Accepted: 08/23/2023] [Indexed: 09/23/2023]
Abstract
Ischemic retinal diseases (IRDs) are a series of common blinding diseases that depend on accurate fundus fluorescein angiography (FFA) image interpretation for diagnosis and treatment. An artificial intelligence system (Ai-Doctor) was developed to interpret FFA images. Ai-Doctor performed well in image phase identification (area under the curve [AUC], 0.991-0.999, range), diabetic retinopathy (DR) and branch retinal vein occlusion (BRVO) diagnosis (AUC, 0.979-0.992), and non-perfusion area segmentation (Dice similarity coefficient [DSC], 89.7%-90.1%) and quantification. The segmentation model was expanded to unencountered IRDs (central RVO and retinal vasculitis), with DSCs of 89.2% and 83.6%, respectively. A clinically applicable ischemia index (CAII) was proposed to evaluate ischemic degree; patients with CAII values exceeding 0.17 in BRVO and 0.08 in DR may be more likely to require laser therapy. Ai-Doctor is expected to achieve accurate FFA image interpretation for IRDs, potentially reducing the reliance on retinal specialists.
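The Dice similarity coefficient used to score the non-perfusion segmentations is 2|A∩B| / (|A| + |B|) over binary masks; a minimal numpy sketch with toy masks (not the study's data):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

pred = np.array([[1, 1], [0, 0]])   # predicted non-perfusion pixels
truth = np.array([[1, 0], [0, 0]])  # annotated non-perfusion pixels
score = dice(pred, truth)           # 2*1 / (2+1)
```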
Collapse
Affiliation(s)
- Xinyu Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China; Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen 518040, China
| | - Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Shanshan Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Jun Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Liqiong Xie
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Yue Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Ching-Kit Tsui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Kaixuan Cui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Guoming Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen 518040, China
| | - Shaochong Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen 518040, China
| | - Yan Lu
- Foshan Second People's Hospital, Foshan 528001, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou 570311, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510080, China.
| | - Xiaoling Liang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
| |
Collapse
|
33
|
Leandro I, Lorenzo B, Aleksandar M, Dario M, Rosa G, Agostino A, Daniele T. OCT-based deep-learning models for the identification of retinal key signs. Sci Rep 2023; 13:14628. [PMID: 37670066 PMCID: PMC10480174 DOI: 10.1038/s41598-023-41362-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Accepted: 08/25/2023] [Indexed: 09/07/2023] Open
Abstract
A new system based on binary Deep Learning (DL) convolutional neural networks has been developed to recognize specific retinal abnormality signs on Optical Coherence Tomography (OCT) images useful for clinical practice. Images from the local hospital database were retrospectively selected from 2017 to 2022. Images were labeled by two retinal specialists and included central fovea cross-section OCTs. Nine models were developed using the Visual Geometry Group 16 (VGG16) architecture to distinguish healthy versus abnormal retinas and to identify eight different retinal abnormality signs. A total of 21,500 OCT images were screened, and 10,770 central fovea cross-section OCTs were included in the study. The system achieved high accuracy in identifying healthy retinas and specific pathological signs, ranging from 93 to 99%. Accurately detecting abnormal retinal signs from OCT images is crucial for patient care. This study aimed to identify specific signs related to retinal pathologies, aiding ophthalmologists in diagnosis. The high-accuracy system identified healthy retinas and pathological signs, making it a useful diagnostic aid. Obtaining labelled OCT images remains a challenge, but our approach reduces dataset creation time and shows the potential of DL models to improve ocular pathology diagnosis and clinical decision-making.
Collapse
Affiliation(s)
- Inferrera Leandro
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy.
| | - Borsatti Lorenzo
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
| | | | - Marangoni Dario
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
| | - Giglio Rosa
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
| | - Accardo Agostino
- Department of Engineering and Architecture, University of Trieste, Trieste, Italy
| | - Tognetto Daniele
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
| |
Collapse
|
34
|
He S, Bulloch G, Zhang L, Xie Y, Wu W, He Y, Meng W, Shi D, He M. Cross-camera Performance of Deep Learning Algorithms to Diagnose Common Ophthalmic Diseases: A Comparative Study Highlighting Feasibility to Portable Fundus Camera Use. Curr Eye Res 2023; 48:857-863. [PMID: 37246918 DOI: 10.1080/02713683.2023.2215984] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2023] [Revised: 04/19/2023] [Accepted: 05/14/2023] [Indexed: 05/30/2023]
Abstract
PURPOSE To compare the inter-camera performance and consistency of various deep learning (DL) diagnostic algorithms applied to fundus images taken from desktop Topcon and portable Optain cameras. METHODS Participants over 18 years of age were enrolled between November 2021 and April 2022. Pair-wise fundus photographs from each patient were collected in a single visit: once by Topcon (used as the reference camera) and once by a portable Optain camera (the new target camera). These were analyzed by three previously validated DL models for the detection of diabetic retinopathy (DR), age-related macular degeneration (AMD), and glaucomatous optic neuropathy (GON). Ophthalmologists manually analyzed all fundus photos for the presence of DR, and these gradings served as the ground truth. Sensitivity, specificity, the area under the curve (AUC), and agreement between cameras (estimated by Cohen's weighted kappa, K) were the primary outcomes of this study. RESULTS A total of 504 patients were recruited. After excluding 12 photographs with matching errors and 59 photographs with low quality, 906 pairs of Topcon-Optain fundus photos were available for algorithm assessment. Topcon and Optain cameras had excellent consistency (Κ=0.80) when applied to the referable DR algorithm, while AMD had moderate consistency (Κ=0.41) and GON had poor consistency (Κ=0.32). For the DR model, Topcon and Optain achieved a sensitivity of 97.70% and 97.67% and a specificity of 97.92% and 97.93%, respectively. There was no significant difference between the two camera models (McNemar's test: χ²=0.08, p = .78). CONCLUSION Topcon and Optain cameras had excellent consistency for detecting referable DR, although performance for the AMD and GON models was unsatisfactory. This study highlights the methods of using pair-wise images to evaluate DL models between reference and new fundus cameras.
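The two paired-sample statistics used here can be sketched directly: Cohen's weighted kappa for inter-camera agreement, and McNemar's statistic χ² = (b − c)² / (b + c) computed from the discordant pair counts. Toy binary calls below, not the study's data; the quadratic weighting is an assumption, as the abstract does not state the weighting scheme (for binary labels the choice does not change the result):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

topcon = np.array([0, 1, 1, 0, 1, 0, 1, 0])  # referable-DR calls, camera A
optain = np.array([0, 1, 0, 0, 1, 0, 1, 1])  # referable-DR calls, camera B

# Chance-corrected agreement between the paired calls.
kappa = cohen_kappa_score(topcon, optain, weights="quadratic")

# McNemar's chi-square from discordant pairs (b: A+/B-, c: A-/B+).
b = int(np.sum((topcon == 1) & (optain == 0)))
c = int(np.sum((topcon == 0) & (optain == 1)))
chi2 = (b - c) ** 2 / (b + c) if (b + c) else 0.0
```

McNemar's test ignores concordant pairs entirely, which is what makes it suitable for comparing two classifiers (or cameras) evaluated on the same images.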
Collapse
Affiliation(s)
- Shuang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Gabriella Bulloch
- University of Melbourne, Melbourne, Victoria, Australia
- Centre for Eye Research Australia, Melbourne, Victoria, Australia
| | - Liangxin Zhang
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
| | - Yiyu Xie
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| | - Weiyu Wu
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, China
| | - Yahong He
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, China
| | - Wei Meng
- Eyetelligence Ltd, Melbourne, Victoria, Australia
| | - Danli Shi
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- University of Melbourne, Melbourne, Victoria, Australia
- Centre for Eye Research Australia, Melbourne, Victoria, Australia
- Eyetelligence Ltd, Melbourne, Victoria, Australia
| |
Collapse
|
35
|
Chou YB, Kale AU, Lanzetta P, Aslam T, Barratt J, Danese C, Eldem B, Eter N, Gale R, Korobelnik JF, Kozak I, Li X, Li X, Loewenstein A, Ruamviboonsuk P, Sakamoto T, Ting DS, van Wijngaarden P, Waldstein SM, Wong D, Wu L, Zapata MA, Zarranz-Ventura J. Current status and practical considerations of artificial intelligence use in screening and diagnosing retinal diseases: Vision Academy retinal expert consensus. Curr Opin Ophthalmol 2023; 34:403-413. [PMID: 37326222 PMCID: PMC10399944 DOI: 10.1097/icu.0000000000000979] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
PURPOSE OF REVIEW The application of artificial intelligence (AI) technologies in screening and diagnosing retinal diseases may play an important role in telemedicine and has potential to shape modern healthcare ecosystems, including within ophthalmology. RECENT FINDINGS In this article, we examine the latest publications relevant to AI in retinal disease and discuss the currently available algorithms. We summarize four key requirements underpinning the successful application of AI algorithms in real-world practice: processing massive data; practicability of an AI model in ophthalmology; policy compliance and the regulatory environment; and balancing profit and cost when developing and maintaining AI models. SUMMARY The Vision Academy recognizes the advantages and disadvantages of AI-based technologies and gives insightful recommendations for future directions.
Collapse
Affiliation(s)
- Yu-Bai Chou
- Department of Ophthalmology, Taipei Veterans General Hospital
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Aditya U. Kale
- Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
| | - Paolo Lanzetta
- Department of Medicine – Ophthalmology, University of Udine
- Istituto Europeo di Microchirurgia Oculare, Udine, Italy
| | - Tariq Aslam
- Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, University of Manchester School of Health Sciences, Manchester, UK
| | - Jane Barratt
- International Federation on Ageing, Toronto, Canada
| | - Carla Danese
- Department of Medicine – Ophthalmology, University of Udine
- Department of Ophthalmology, AP-HP Hôpital Lariboisière, Université Paris Cité, Paris, France
| | - Bora Eldem
- Department of Ophthalmology, Hacettepe University, Ankara, Turkey
| | - Nicole Eter
- Department of Ophthalmology, University of Münster Medical Center, Münster, Germany
| | - Richard Gale
- Department of Ophthalmology, York Teaching Hospital NHS Foundation Trust, York, UK
| | - Jean-François Korobelnik
- Service d’ophtalmologie, CHU Bordeaux
- University of Bordeaux, INSERM, BPH, UMR1219, F-33000 Bordeaux, France
| | - Igor Kozak
- Moorfields Eye Hospital Centre, Abu Dhabi, UAE
| | - Xiaorong Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin
| | - Xiaoxin Li
- Xiamen Eye Center, Xiamen University, Xiamen, China
| | - Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Paisan Ruamviboonsuk
- Department of Ophthalmology, College of Medicine, Rangsit University, Rajavithi Hospital, Bangkok, Thailand
| | - Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University, Kagoshima, Japan
| | - Daniel S.W. Ting
- Singapore National Eye Center, Duke-NUS Medical School, Singapore
| | - Peter van Wijngaarden
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
| | | | - David Wong
- Unity Health Toronto – St. Michael's Hospital, University of Toronto, Toronto, Canada
| | - Lihteh Wu
- Macula, Vitreous and Retina Associates of Costa Rica, San José, Costa Rica
| | | | | |
Collapse
|
36
|
Bryan JM, Bryar PJ, Mirza RG. Convolutional Neural Networks Accurately Identify Ungradable Images in a Diabetic Retinopathy Telemedicine Screening Program. Telemed J E Health 2023; 29:1349-1355. [PMID: 36730708 DOI: 10.1089/tmj.2022.0357] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023] Open
Abstract
Purpose: Diabetic retinopathy (DR) is a microvascular complication of diabetes mellitus (DM). Standard of care for patients with DM is an annual eye examination or retinal imaging to assess for DR, the latter of which may be completed through telemedicine approaches. One significant issue is poor-quality images that prevent adequate screening and are thus ungradable. We used artificial intelligence to enable point-of-care (at time of imaging) identification of ungradable images in a DR screening program. Methods: Nonmydriatic retinal images were gathered from patients with DM imaged during a primary care or endocrinology visit from September 1, 2017, to June 1, 2021. The Topcon TRC-NW400 retinal camera (Topcon Corp., Tokyo, Japan) was used. Images were interpreted by 5 ophthalmologists for gradeability, presence and stage of DR, and presence of non-DR pathologies. A convolutional neural network with the Inception V3 architecture was trained to assess image gradeability. Images were divided into training and test sets, and 10-fold cross-validation was performed. Results: A total of 1,377 images from 537 patients (56.1% female, median age 58) were analyzed. Ophthalmologists classified 25.9% of images as ungradable. Of gradable images, 18.6% had DR of varying degrees and 26.5% had non-DR pathology. 10-fold cross-validation produced an average area under the receiver operating characteristic curve (AUC) of 0.922 (standard deviation: 0.027, range: 0.882 to 0.961). The final model exhibited similar test set performance, with an AUC of 0.924. Conclusions: This model accurately assesses the gradeability of nonmydriatic retinal images. It could be used to increase the efficiency of DR screening programs by enabling point-of-care identification of poor-quality images.
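The 10-fold cross-validation reported above follows a standard pattern: shuffle once, split into k disjoint folds, train on k-1 folds, and compute the AUC on the held-out fold, then summarize the per-fold AUCs by their mean and standard deviation. A minimal stdlib-only sketch of that bookkeeping (our function names; the study's actual Inception V3 model is represented here by a placeholder scoring function):

```python
import random

def auc(labels, scores):
    """Rank-based AUC, equivalent to the Mann-Whitney U statistic; ties get half credit."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def kfold_splits(n, k, seed=0):
    """Shuffle indices once, then deal them round-robin into k disjoint test folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validated_auc(labels, score_fn, k=10, seed=0):
    """score_fn(train_idx, test_idx) stands in for fitting the CNN on the training
    folds and returning one gradeability score per held-out image."""
    folds = kfold_splits(len(labels), k, seed)
    aucs = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f, fold in enumerate(folds) if f != i for j in fold]
        scores = score_fn(train_idx, test_idx)
        aucs.append(auc([labels[j] for j in test_idx], scores))
    mean = sum(aucs) / k
    sd = (sum((a - mean) ** 2 for a in aucs) / (k - 1)) ** 0.5
    return mean, sd
```

The per-fold mean and sample standard deviation returned here are the kind of summary the abstract quotes as 0.922 (SD 0.027).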
Collapse
Affiliation(s)
- John M Bryan
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
| | - Paul J Bryar
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
| | - Rukhsana G Mirza
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
| |
Collapse
|
37
|
Brandl C, Finger RP, Heid IM, Mauschitz MM. Age-Related Macular Degeneration in an Ageing Society - Current Epidemiological Research. Klin Monbl Augenheilkd 2023; 240:1052-1059. [PMID: 37666251 DOI: 10.1055/a-2105-1064] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/06/2023]
Abstract
Epidemiological studies on age-related macular degeneration (AMD) provide crucial data on the frequency of early and late forms as well as associated risk factors. The increasing number of population-based cross-sectional and longitudinal cohort studies in Germany and Europe with published data is making prevalence and incidence estimates for AMD more robust, although they show mostly method-related fluctuations. This review article brings together the latest published epidemiological measures for AMD from Germany and Central as well as Western Europe. Based on this data and population figures for Germany and Europe, prevalence is projected, and future trends are forecasted. The epidemiological evidence for AMD-associated risk factors is also improving, especially through meta-analyses within large consortia with correspondingly high case numbers. This review article summarizes the latest findings and resulting recommendations for prevention approaches. Additionally, it discusses treatment options and future challenges.
Collapse
Affiliation(s)
- Caroline Brandl
- Universitäts-Augenklinik Regensburg, Universität Regensburg, Fakultät für Medizin, Deutschland
- Lehrstuhl für Genetische Epidemiologie, Universität Regensburg, Fakultät für Medizin, Deutschland
| | - Robert Patrick Finger
- Universitäts-Augenklinik, Universitätsmedizin Mannheim, Medizinische Fakultät Mannheim, Ruprecht-Karls-Universität Heidelberg, Mannheim, Deutschland
- Universitäts-Augenklinik Bonn, Universität Bonn, Deutschland
| | - Iris Maria Heid
- Lehrstuhl für Genetische Epidemiologie, Universität Regensburg, Fakultät für Medizin, Deutschland
| | | |
Collapse
|
38
|
Tan TF, Dai P, Zhang X, Jin L, Poh S, Hong D, Lim J, Lim G, Teo ZL, Liu N, Ting DSW. Explainable artificial intelligence in ophthalmology. Curr Opin Ophthalmol 2023; 34:422-430. [PMID: 37527200 DOI: 10.1097/icu.0000000000000983] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/03/2023]
Abstract
PURPOSE OF REVIEW Despite the growing scope of artificial intelligence (AI) and deep learning (DL) applications in the field of ophthalmology, most have yet to reach clinical adoption. Beyond model performance metrics, there has been an increasing emphasis on the need for explainability of proposed DL models. RECENT FINDINGS Several explainable AI (XAI) methods have been proposed, and increasingly applied in ophthalmological DL applications, predominantly in medical imaging analysis tasks. SUMMARY We provide an overview of the key concepts and categorize some examples of commonly employed XAI methods. Specific to ophthalmology, we explore XAI from a clinical perspective: enhancing end-user trust, assisting clinical management, and uncovering new insights. We finally discuss its limitations and the future directions needed to strengthen XAI for application to clinical practice.
Collapse
Affiliation(s)
- Ting Fang Tan
- Artificial Intelligence and Digital Innovation Research Group
- Singapore National Eye Centre, Singapore General Hospital
| | - Peilun Dai
- Institute of High Performance Computing, A∗STAR
| | - Xiaoman Zhang
- Duke-National University of Singapore Medical School, Singapore
| | - Liyuan Jin
- Artificial Intelligence and Digital Innovation Research Group
- Duke-National University of Singapore Medical School, Singapore
| | - Stanley Poh
- Singapore National Eye Centre, Singapore General Hospital
| | - Dylan Hong
- Artificial Intelligence and Digital Innovation Research Group
| | - Joshua Lim
- Singapore National Eye Centre, Singapore General Hospital
| | - Gilbert Lim
- Artificial Intelligence and Digital Innovation Research Group
| | - Zhen Ling Teo
- Artificial Intelligence and Digital Innovation Research Group
- Singapore National Eye Centre, Singapore General Hospital
| | - Nan Liu
- Artificial Intelligence and Digital Innovation Research Group
- Duke-National University of Singapore Medical School, Singapore
| | - Daniel Shu Wei Ting
- Artificial Intelligence and Digital Innovation Research Group
- Singapore National Eye Centre, Singapore General Hospital
- Duke-National University of Singapore Medical School, Singapore
- Byers Eye Institute, Stanford University, Stanford, California, USA
| |
Collapse
|
39
|
Linde G, Chalakkal R, Zhou L, Huang JL, O’Keeffe B, Shah D, Davidson S, Hong SC. Automatic Refractive Error Estimation Using Deep Learning-Based Analysis of Red Reflex Images. Diagnostics (Basel) 2023; 13:2810. [PMID: 37685347 PMCID: PMC10486607 DOI: 10.3390/diagnostics13172810] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Revised: 08/23/2023] [Accepted: 08/26/2023] [Indexed: 09/10/2023] Open
Abstract
Purpose/Background: We evaluate how a deep learning model can be applied to extract refractive error metrics from pupillary red reflex images taken by a low-cost handheld fundus camera. This could potentially provide a rapid and economical vision-screening method, allowing for early intervention to prevent myopic progression and reduce the socioeconomic burden associated with vision impairment in the later stages of life. Methods: Infrared and color images of pupillary crescents were extracted from eccentric photorefraction images of participants from Choithram Hospital in India and Dargaville Medical Center in New Zealand. The pre-processed images were then used to train different convolutional neural networks to predict refractive error in terms of spherical power and cylindrical power metrics. Results: The best-performing trained model achieved an overall accuracy of 75% for predicting spherical power using infrared images and a multiclass classifier. Conclusions: Although the model's performance is modest, the proposed method demonstrated the feasibility of using red reflex images to estimate refractive error. Such an approach has not been attempted before and can help guide researchers, especially as the future of eye care moves toward highly portable and smartphone-based devices.
Collapse
Affiliation(s)
| | | | - Lydia Zhou
- University of Sydney, Sydney, NSW 2050, Australia
| | | | | | | | | | - Sheng Chiong Hong
- Public Health Unit, Dunedin Hospital, Te Whatu Ora Southern, Dunedin 9016, New Zealand
| |
Collapse
|
40
|
Azzopardi M, Chong YJ, Ng B, Recchioni A, Logeswaran A, Ting DSJ. Diagnosis of Acanthamoeba Keratitis: Past, Present and Future. Diagnostics (Basel) 2023; 13:2655. [PMID: 37627913 PMCID: PMC10453105 DOI: 10.3390/diagnostics13162655] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2023] [Revised: 08/04/2023] [Accepted: 08/09/2023] [Indexed: 08/27/2023] Open
Abstract
Acanthamoeba keratitis (AK) is a painful and sight-threatening parasitic corneal infection. In recent years, the incidence of AK has increased. Timely and accurate diagnosis is crucial during the management of AK, as delayed diagnosis often results in poor clinical outcomes. Currently, AK diagnosis is primarily achieved through a combination of clinical suspicion, microbiological investigations and corneal imaging. Historically, corneal scraping for microbiological culture has been considered to be the gold standard. Despite its technical ease, accessibility and cost-effectiveness, the long diagnostic turnaround time and variably low sensitivity of microbiological culture limit its use as a sole diagnostic test for AK in clinical practice. In this review, we aim to provide a comprehensive overview of the diagnostic modalities that are currently used to diagnose AK, including microscopy with staining, culture, corneal biopsy, in vivo confocal microscopy, polymerase chain reaction and anterior segment optical coherence tomography. We also highlight emerging techniques, such as next-generation sequencing and artificial intelligence-assisted models, which have the potential to transform the diagnostic landscape of AK.
Collapse
Affiliation(s)
- Matthew Azzopardi
- Department of Ophthalmology, Royal London Hospital, London E1 1BB, UK;
| | - Yu Jeat Chong
- Birmingham and Midland Eye Centre, Birmingham B18 7QH, UK; (B.N.); (A.R.)
| | - Benjamin Ng
- Birmingham and Midland Eye Centre, Birmingham B18 7QH, UK; (B.N.); (A.R.)
| | - Alberto Recchioni
- Birmingham and Midland Eye Centre, Birmingham B18 7QH, UK; (B.N.); (A.R.)
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham B15 2TT, UK
| | | | - Darren S. J. Ting
- Birmingham and Midland Eye Centre, Birmingham B18 7QH, UK; (B.N.); (A.R.)
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham B15 2TT, UK
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham NG7 2RD, UK
| |
Collapse
|
41
|
Li Z, Wang L, Wu X, Jiang J, Qiang W, Xie H, Zhou H, Wu S, Shao Y, Chen W. Artificial intelligence in ophthalmology: The path to the real-world clinic. Cell Rep Med 2023:101095. [PMID: 37385253 PMCID: PMC10394169 DOI: 10.1016/j.xcrm.2023.101095] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 04/17/2023] [Accepted: 06/07/2023] [Indexed: 07/01/2023]
Abstract
Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In the field of ophthalmology, AI systems have shown performance comparable with or even better than experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. However, despite these promising results, very few AI systems have been deployed in real-world clinical settings, which calls the practical value of these systems into question. This review provides an overview of the current main AI applications in ophthalmology, describes the challenges that need to be overcome prior to clinical implementation of the AI systems, and discusses the strategies that may pave the way to the clinical translation of these systems.
Collapse
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China.
| | - Lei Wang
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Xuefang Wu
- Guizhou Provincial People's Hospital, Guizhou University, Guiyang 550002, China
| | - Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
| | - Wei Qiang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
| | - He Xie
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Hongjian Zhou
- Department of Computer Science, University of Oxford, Oxford, Oxfordshire OX1 2JD, UK
| | - Shanjun Wu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
| | - Yi Shao
- Department of Ophthalmology, the First Affiliated Hospital of Nanchang University, Nanchang 330006, China.
| | - Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China.
| |
Collapse
|
42
|
Wang Z, Lim G, Ng WY, Tan TE, Lim J, Lim SH, Foo V, Lim J, Sinisterra LG, Zheng F, Liu N, Tan GSW, Cheng CY, Cheung GCM, Wong TY, Ting DSW. Synthetic artificial intelligence using generative adversarial network for retinal imaging in detection of age-related macular degeneration. Front Med (Lausanne) 2023; 10:1184892. [PMID: 37425325 PMCID: PMC10324667 DOI: 10.3389/fmed.2023.1184892] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2023] [Accepted: 05/30/2023] [Indexed: 07/11/2023] Open
Abstract
Introduction Age-related macular degeneration (AMD) is one of the leading causes of vision impairment globally, and early detection is crucial to prevent vision loss. However, AMD screening is resource dependent and demands experienced healthcare providers. Recently, deep learning (DL) systems have shown potential for the effective detection of various eye diseases from retinal fundus images, but the development of such robust systems requires large datasets, which can be limited by disease prevalence and patient privacy. In the case of AMD, images of the advanced phenotype are often too scarce for DL analysis, which may be tackled by generating synthetic images using generative adversarial networks (GANs). This study aims to develop GAN-synthesized fundus photos with AMD lesions and to assess the realness of these images with an objective scale. Methods To build our GAN models, a total of 125,012 fundus photos from a real-world non-AMD phenotypical dataset were used. StyleGAN2 and a human-in-the-loop (HITL) method were then applied to synthesize fundus images with AMD features. To objectively assess the quality of the synthesized images, we proposed a novel realness scale based on the frequency of broken vessels observed in the fundus photos. Four residents conducted two rounds of grading on 300 images to distinguish real from synthetic images, based on their subjective impression and the objective scale, respectively. Results and discussion The introduction of HITL training increased the percentage of synthetic images with AMD lesions, despite the limited number of AMD images in the initial training dataset. The synthesized images proved convincing in that our residents had limited ability to distinguish real from synthetic ones, as evidenced by an overall accuracy of 0.66 (95% CI: 0.61-0.66) and a Cohen's kappa of 0.320. For the non-referable AMD classes (no or early AMD), the accuracy was only 0.51. With the objective scale, the overall accuracy improved to 0.72. In conclusion, GAN models built with HITL training are capable of producing realistic-looking fundus images that can fool human experts, while our objective realness scale based on broken vessels can help identify synthetic fundus photos.
Collapse
Affiliation(s)
- Zhaoran Wang
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
| | - Gilbert Lim
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
| | - Wei Yan Ng
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Tien-En Tan
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Jane Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Sing Hui Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Valencia Foo
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Joshua Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | | | - Feihui Zheng
- Singapore Eye Research Institute, Singapore, Singapore
| | - Nan Liu
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
| | - Gavin Siew Wei Tan
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Ching-Yu Cheng
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Gemmy Chui Ming Cheung
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Tien Yin Wong
- Singapore National Eye Centre, Singapore, Singapore
- School of Medicine, Tsinghua University, Beijing, China
| | - Daniel Shu Wei Ting
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| |
Collapse
|
43
|
El-Den NN, Naglah A, Elsharkawy M, Ghazal M, Alghamdi NS, Sandhu H, Mahdi H, El-Baz A. Scale-adaptive model for detection and grading of age-related macular degeneration from color retinal fundus images. Sci Rep 2023; 13:9590. [PMID: 37311794 PMCID: PMC10264426 DOI: 10.1038/s41598-023-35197-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Accepted: 05/14/2023] [Indexed: 06/15/2023] Open
Abstract
Age-related macular degeneration (AMD), a retinal disease that affects the macula, can be caused by aging abnormalities in a number of different cells and tissues in the retina, retinal pigment epithelium, and choroid, leading to vision loss. An advanced form of AMD, called exudative or wet AMD, is characterized by the ingrowth of abnormal blood vessels beneath or into the macula itself. The diagnosis is confirmed by either fundus autofluorescence imaging or optical coherence tomography (OCT), supplemented by fluorescein angiography or OCT angiography without dye. Fluorescein angiography, the gold standard diagnostic procedure for AMD, involves invasive injections of fluorescent dye to highlight the retinal vasculature, which can expose patients to life-threatening allergic reactions and other risks. This study proposes a scale-adaptive auto-encoder-based model integrated with a deep learning model that can detect AMD early by automatically analyzing the texture patterns in color fundus imaging and correlating them to vasculature activity in the retina. Moreover, the proposed model can automatically distinguish between AMD grades, assisting in early diagnosis and thus allowing for earlier treatment of the patient's condition, slowing the disease and minimizing its severity. Our model features two main blocks: the first is an auto-encoder-based network for scale adaptation, and the second is a convolutional neural network (CNN) classification network. In a conducted set of experiments, the proposed model achieved higher diagnostic accuracy than other models, with accuracy, sensitivity, and specificity reaching 96.2%, 96.2%, and 99%, respectively.
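The accuracy, sensitivity, and specificity figures quoted above are all derived from the binary confusion matrix. A small sketch of how they are computed (illustrative only; the function and label names are ours, not the authors'):

```python
def binary_metrics(y_true, y_pred, positive="AMD"):
    """Accuracy, sensitivity (recall on the positive class), and specificity
    (recall on the negative class) from paired label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

Note that accuracy alone can be misleading when classes are imbalanced, which is why abstracts such as this one report sensitivity and specificity alongside it.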
Collapse
Affiliation(s)
- Niveen Nasr El-Den
- Department of Computer and System Engineering, Faculty of Engineering, Ain Shams University, Cairo, Egypt
| | - Ahmed Naglah
- Department of Bioengineering, University of Louisville, Louisville, KY, USA
| | - Mohamed Elsharkawy
- Department of Bioengineering, University of Louisville, Louisville, KY, USA
| | - Mohammed Ghazal
- Electrical, Computer and Biomedical Engineering Department, College of Engineering, Abu Dhabi University, Abu Dhabi, United Arab Emirates
| | - Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
| | - Harpal Sandhu
- Department of Bioengineering, University of Louisville, Louisville, KY, USA
| | - Hani Mahdi
- Department of Computer and System Engineering, Faculty of Engineering, Ain Shams University, Cairo, Egypt
| | - Ayman El-Baz
- Department of Bioengineering, University of Louisville, Louisville, KY, USA.
| |
Collapse
|
44
|
Essalat M, Abolhosseini M, Le TH, Moshtaghion SM, Kanavi MR. Interpretable deep learning for diagnosis of fungal and acanthamoeba keratitis using in vivo confocal microscopy images. Sci Rep 2023; 13:8953. [PMID: 37268665 DOI: 10.1038/s41598-023-35085-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Accepted: 05/12/2023] [Indexed: 06/04/2023] Open
Abstract
Infectious keratitis refers to a group of corneal disorders in which corneal tissues suffer inflammation and damage caused by pathogenic infections. Among these disorders, fungal keratitis (FK) and acanthamoeba keratitis (AK) are particularly severe and can cause permanent blindness if not diagnosed early and accurately. In vivo confocal microscopy (IVCM) allows for imaging of different corneal layers and provides an important tool for an early and accurate diagnosis. In this paper, we introduce the IVCM-Keratitis dataset, which comprises a total of 4001 sample images of the AK and FK classes, as well as non-specific keratitis (NSK) and healthy cornea classes. We use this dataset to develop multiple deep learning models based on convolutional neural networks (CNNs) to provide automated assistance in enhancing the diagnostic accuracy of confocal microscopy in infectious keratitis. DenseNet161 had the best performance among these models, with an accuracy, precision, recall, and F1 score of 93.55%, 92.52%, 94.77%, and 96.93%, respectively. Our study highlights the potential of deep learning models to provide automated diagnostic assistance for infectious keratitis via confocal microscopy images, particularly in the early detection of AK and FK. The proposed model can provide valuable support to both experienced and inexperienced eye-care practitioners in confocal microscopy image analysis by suggesting the most likely diagnosis. We further demonstrate that these models can highlight the areas of infection in IVCM images and explain the reasons behind their diagnoses by utilizing saliency maps, a technique used in explainable artificial intelligence (XAI) to interpret these models.
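For a four-class setting like the one above (AK, FK, NSK, healthy), precision, recall, and F1 are typically computed one-vs-rest for each class and then macro-averaged. A hedged sketch of that convention (our function names; the abstract does not state the exact averaging scheme used):

```python
def per_class_prf(y_true, y_pred, classes):
    """One-vs-rest precision, recall, and F1 for each class label."""
    out = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        out[c] = (prec, rec, f1)
    return out

def macro_average(per_class):
    """Unweighted mean of (precision, recall, F1) across classes."""
    k = len(per_class)
    return tuple(sum(v[i] for v in per_class.values()) / k for i in range(3))
```

Macro averaging weights every class equally, which matters here because the four keratitis classes are unlikely to be balanced in a clinical dataset.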
Collapse
Affiliation(s)
- Mahmoud Essalat
- Department of Electrical and Computer Engineering, University of California, Los Angeles, 56-125B Engineering IV Building, UCLA, 420 Westwood Plaza, Los Angeles, CA, 90095-1594, USA.
| | - Mohammad Abolhosseini
- Ocular Tissue Engineering Research Center, Research Institute for Ophthalmology and Vision Science, Shahid Beheshti University of Medical Sciences, No.23, Paidarfard St., Boostan 9 St., Pasdaran Ave., Tehran, 1666673111, Iran
- Department of Confocal Scan, Central Eye Bank of Iran, Tehran, Iran
| | - Thanh Huy Le
- Department of Computer Science, University of California, San Diego, CA, USA
| | - Seyed Mohamadmehdi Moshtaghion
- Ocular Tissue Engineering Research Center, Research Institute for Ophthalmology and Vision Science, Shahid Beheshti University of Medical Sciences, No.23, Paidarfard St., Boostan 9 St., Pasdaran Ave., Tehran, 1666673111, Iran
- Department of Confocal Scan, Central Eye Bank of Iran, Tehran, Iran
| | - Mozhgan Rezaei Kanavi
- Ocular Tissue Engineering Research Center, Research Institute for Ophthalmology and Vision Science, Shahid Beheshti University of Medical Sciences, No.23, Paidarfard St., Boostan 9 St., Pasdaran Ave., Tehran, 1666673111, Iran.
- Department of Confocal Scan, Central Eye Bank of Iran, Tehran, Iran.
| |
Collapse
|
45
|
Eslami Y, Mousavi Kouzahkanan Z, Farzinvash Z, Safizadeh M, Zarei R, Fakhraie G, Vahedian Z, Mahmoudi T, Fadakar K, Beikmarzehei A, Tabatabaei SM. Deep Learning-Based Classification of Subtypes of Primary Angle-Closure Disease With Anterior Segment Optical Coherence Tomography. J Glaucoma 2023; 32:540-547. [PMID: 36897658 DOI: 10.1097/ijg.0000000000002194] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2022] [Accepted: 02/08/2023] [Indexed: 03/11/2023]
Abstract
PRECIS We developed a deep learning-based classifier that can discriminate primary angle closure suspects (PACS), primary angle closure (PAC)/primary angle closure glaucoma (PACG), and control eyes with open angles with acceptable accuracy. PURPOSE To develop a deep learning-based classifier for differentiating subtypes of primary angle closure disease, including PACS and PAC/PACG, as well as normal control eyes. MATERIALS AND METHODS Anterior segment optical coherence tomography images were analyzed with 5 different networks: MnasNet, MobileNet, ResNet18, ResNet50, and EfficientNet. The data set was split, with randomization performed at the patient level, into a training-plus-validation set (85%) and a test set (15%). Four-fold cross-validation was then used to train the model. In each architecture, the networks were trained with both original and cropped images. Analyses were carried out both for single images and for images grouped at the patient level (case-based), with majority voting applied to determine the final prediction. RESULTS A total of 1616 images of normal eyes (87 eyes), 1055 images of PACS eyes (66 eyes), and 1076 images of PAC/PACG eyes (66 eyes) were included in the analysis. The mean ± SD age was 51.76 ± 15.15 years, and 48.3% of participants were male. MobileNet had the best performance when both original and cropped images were used. The accuracy of MobileNet for detecting normal, PACS, and PAC/PACG eyes was 0.99 ± 0.00, 0.77 ± 0.02, and 0.77 ± 0.03, respectively. With case-based classification, MobileNet reached accuracies of 0.95 ± 0.03, 0.83 ± 0.06, and 0.81 ± 0.05, respectively. For detecting open-angle, PACS, and PAC/PACG eyes, the MobileNet classifier achieved areas under the curve of 1, 0.906, and 0.872, respectively, on the test data set.
CONCLUSION The MobileNet-based classifier can detect normal, PACS, and PAC/PACG eyes with acceptable accuracy based on anterior segment optical coherence tomography images.
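The case-based majority voting described in this abstract reduces to a simple aggregation: each image of a patient's eye gets its own class prediction, and the most frequent class wins. A minimal sketch (labels and the tie-breaking rule are assumptions, not taken from the paper):

```python
from collections import Counter

def case_level_prediction(image_predictions):
    """Majority vote over per-image class predictions for one patient/eye.
    Ties are broken by first-seen order (Counter.most_common behavior)."""
    votes = Counter(image_predictions)
    winner, _count = votes.most_common(1)[0]
    return winner

# Hypothetical per-image labels for one eye: 0 = normal, 1 = PACS, 2 = PAC/PACG
print(case_level_prediction([1, 1, 2, 1, 0]))  # → 1
```

Aggregating at the case level smooths out occasional per-image misclassifications, which is consistent with the accuracy gains the abstract reports for PACS and PAC/PACG eyes.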
Collapse
Affiliation(s)
- Yadollah Eslami
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | | | - Zahra Farzinvash
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Mona Safizadeh
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Reza Zarei
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Ghasem Fakhraie
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Zakieh Vahedian
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Tahereh Mahmoudi
- Department of Medical Physics and Biomedical Engineering, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
| | - Kaveh Fadakar
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | | | - Seyed Mehdi Tabatabaei
- Glaucoma Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| |
Collapse
|
46
|
Tong Y, Jie B, Wang X, Xu Z, Ding P, He Y. Is Convolutional Neural Network Accurate for Automatic Detection of Zygomatic Fractures on Computed Tomography? J Oral Maxillofac Surg 2023:S0278-2391(23)00393-2. [PMID: 37217163 DOI: 10.1016/j.joms.2023.04.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Revised: 03/29/2023] [Accepted: 04/23/2023] [Indexed: 05/24/2023]
Abstract
PURPOSE Zygomatic fractures involve complex anatomical structures of the mid-face, and their diagnosis can be challenging and labor-intensive. This research aimed to evaluate the performance of an automatic algorithm for the detection of zygomatic fractures, based on a convolutional neural network (CNN), on spiral computed tomography (CT). MATERIALS AND METHODS We designed a cross-sectional retrospective diagnostic trial study. Clinical records and CT scans of patients with zygomatic fractures were reviewed. The sample consisted of patients with positive or negative zygomatic fracture status treated at Peking University School of Stomatology from 2013 to 2019. All CT samples were randomly divided into training, validation, and test sets at a ratio of 6:2:2. All CT scans were viewed and annotated by three experienced maxillofacial surgeons, serving as the gold standard. The algorithm consisted of two modules: (1) segmentation of the zygomatic region of the CT based on U-Net, a type of CNN model; and (2) detection of fractures based on Deep Residual Network 34 (ResNet34). The segmentation model was used first to detect and extract the zygomatic region; the detection model was then used to determine the fracture status. The Dice coefficient was used to evaluate the performance of the segmentation algorithm, and sensitivity and specificity were used to assess the performance of the detection model. Covariates included age, gender, duration of injury, and the etiology of fractures. RESULTS A total of 379 patients with an average age of 35.43 ± 12.74 years were included in the study: 203 non-fracture patients and 176 fracture patients with 220 sites of zygomatic fractures (44 patients had bilateral fractures). The Dice coefficients between the zygomatic region segmentation model and the manually labeled gold standard were 0.9337 (coronal plane) and 0.9269 (sagittal plane), respectively.
The sensitivity and specificity of the fracture detection model were both 100% (p>.05). CONCLUSION The performance of the CNN-based algorithm was not statistically different from that of the gold standard (manual diagnosis) for zygomatic fracture detection, supporting its potential for clinical application.
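The Dice coefficient used above to score the segmentation module measures overlap between a predicted binary mask and the ground-truth mask: Dice = 2|A∩B| / (|A| + |B|). A minimal sketch (the toy 2x3 masks are illustrative, not the paper's data):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity for two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0  # both empty → perfect

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt   = np.array([[1, 0, 0],
                 [0, 1, 1]])
# intersection = 2, |pred| = 3, |gt| = 3 → Dice = 4/6 ≈ 0.667
print(dice_coefficient(pred, gt))
```

A Dice of 1.0 means perfect overlap, so the reported values of 0.9337 and 0.9269 indicate the U-Net segmentation agreed closely with the surgeons' manual labels.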
Collapse
Affiliation(s)
- Yanhang Tong
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China
| | - Bimeng Jie
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China
| | - Xuebing Wang
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China
| | | | | | - Yang He
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China.
| |
Collapse
|
47
|
Bao H, Cao J, Chen M, Chen M, Chen W, Chen X, Chen Y, Chen Y, Chen Y, Chen Z, Chhetri JK, Ding Y, Feng J, Guo J, Guo M, He C, Jia Y, Jiang H, Jing Y, Li D, Li J, Li J, Liang Q, Liang R, Liu F, Liu X, Liu Z, Luo OJ, Lv J, Ma J, Mao K, Nie J, Qiao X, Sun X, Tang X, Wang J, Wang Q, Wang S, Wang X, Wang Y, Wang Y, Wu R, Xia K, Xiao FH, Xu L, Xu Y, Yan H, Yang L, Yang R, Yang Y, Ying Y, Zhang L, Zhang W, Zhang W, Zhang X, Zhang Z, Zhou M, Zhou R, Zhu Q, Zhu Z, Cao F, Cao Z, Chan P, Chen C, Chen G, Chen HZ, Chen J, Ci W, Ding BS, Ding Q, Gao F, Han JDJ, Huang K, Ju Z, Kong QP, Li J, Li J, Li X, Liu B, Liu F, Liu L, Liu Q, Liu Q, Liu X, Liu Y, Luo X, Ma S, Ma X, Mao Z, Nie J, Peng Y, Qu J, Ren J, Ren R, Song M, Songyang Z, Sun YE, Sun Y, Tian M, Wang S, Wang S, Wang X, Wang X, Wang YJ, Wang Y, Wong CCL, Xiang AP, Xiao Y, Xie Z, Xu D, Ye J, Yue R, Zhang C, Zhang H, Zhang L, Zhang W, Zhang Y, Zhang YW, Zhang Z, Zhao T, Zhao Y, Zhu D, Zou W, Pei G, Liu GH. Biomarkers of aging. SCIENCE CHINA. LIFE SCIENCES 2023; 66:893-1066. [PMID: 37076725 PMCID: PMC10115486 DOI: 10.1007/s11427-023-2305-0] [Citation(s) in RCA: 77] [Impact Index Per Article: 77.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Accepted: 02/27/2023] [Indexed: 04/21/2023]
Abstract
Aging biomarkers are a combination of biological parameters used to (i) assess age-related changes, (ii) track the physiological aging process, and (iii) predict the transition into a pathological status. Although a broad spectrum of aging biomarkers has been developed, their potential uses and limitations remain poorly characterized. An immediate goal of biomarkers is to help us answer three fundamental questions in aging research: How old are we? Why do we get old? And how can we age slower? This review aims to address this need. Here, we summarize our current knowledge of biomarkers developed for cellular, organ, and organismal levels of aging, comprising six pillars: physiological characteristics, medical imaging, histological features, cellular alterations, molecular changes, and secretory factors. To fulfill all these requisites, we propose that aging biomarkers should be specific, systemic, and clinically relevant.
Collapse
Affiliation(s)
- Hainan Bao
- CAS Key Laboratory of Genomic and Precision Medicine, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing, 100101, China
| | - Jiani Cao
- State Key Laboratory of Stem Cell and Reproductive Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, 100101, China
| | - Mengting Chen
- Department of Dermatology, Xiangya Hospital, Central South University, Changsha, 410008, China
- Hunan Key Laboratory of Aging Biology, Xiangya Hospital, Central South University, Changsha, 410008, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, 410008, China
| | - Min Chen
- Clinic Center of Human Gene Research, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Hubei Clinical Research Center of Metabolic and Cardiovascular Disease, Huazhong University of Science and Technology, Wuhan, 430022, China
- Hubei Key Laboratory of Metabolic Abnormalities and Vascular Aging, Huazhong University of Science and Technology, Wuhan, 430022, China
| | - Wei Chen
- Stem Cell Translational Research Center, Tongji Hospital, Tongji University School of Medicine, Shanghai, 200065, China
| | - Xiao Chen
- Department of Nuclear Medicine, Daping Hospital, Third Military Medical University, Chongqing, 400042, China
| | - Yanhao Chen
- CAS Key Laboratory of Nutrition, Metabolism and Food Safety, Shanghai Institute of Nutrition and Health, University of Chinese Academy of Sciences, Chinese Academy of Sciences, Shanghai, 200031, China
| | - Yu Chen
- Shanghai Key Laboratory of Maternal Fetal Medicine, Clinical and Translational Research Center of Shanghai First Maternity and Infant Hospital, Frontier Science Center for Stem Cell Research, Shanghai Key Laboratory of Signaling and Disease Research, School of Life Sciences and Technology, Tongji University, Shanghai, 200092, China
| | - Yutian Chen
- The Department of Endovascular Surgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, China
| | - Zhiyang Chen
- Key Laboratory of Regenerative Medicine of Ministry of Education, Institute of Ageing and Regenerative Medicine, Jinan University, Guangzhou, 510632, China
| | - Jagadish K Chhetri
- National Clinical Research Center for Geriatric Diseases, Xuanwu Hospital, Capital Medical University, Beijing, 100053, China
| | - Yingjie Ding
- CAS Key Laboratory of Genomic and Precision Medicine, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing, 100101, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
| | - Junlin Feng
- CAS Key Laboratory of Tissue Microenvironment and Tumor, Shanghai Institute of Nutrition and Health, Chinese Academy of Sciences, Shanghai, 200031, China
| | - Jun Guo
- The Key Laboratory of Geriatrics, Beijing Institute of Geriatrics, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing Hospital/National Center of Gerontology of National Health Commission, Beijing, 100730, China
| | - Mengmeng Guo
- School of Pharmaceutical Sciences, Tsinghua University, Beijing, 100084, China
| | - Chuting He
- University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Membrane Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, 100101, China
- Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China
- Beijing Institute for Stem Cell and Regenerative Medicine, Beijing, 100101, China
| | - Yujuan Jia
- Department of Neurology, First Affiliated Hospital, Shanxi Medical University, Taiyuan, 030001, China
| | - Haiping Jiang
- State Key Laboratory of Stem Cell and Reproductive Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, 100101, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China
- Beijing Institute for Stem Cell and Regenerative Medicine, Beijing, 100101, China
| | - Ying Jing
- Beijing Municipal Geriatric Medical Research Center, Xuanwu Hospital, Capital Medical University, Beijing, 100053, China
- Aging Translational Medicine Center, International Center for Aging and Cancer, Xuanwu Hospital, Capital Medical University, Beijing, 100053, China
- Advanced Innovation Center for Human Brain Protection, and National Clinical Research Center for Geriatric Disorders, Xuanwu Hospital Capital Medical University, Beijing, 100053, China
| | - Dingfeng Li
- Department of Neurology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, China
| | - Jiaming Li
- CAS Key Laboratory of Genomic and Precision Medicine, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing, 100101, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
| | - Jingyi Li
- University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Membrane Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, 100101, China
- Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China
- Beijing Institute for Stem Cell and Regenerative Medicine, Beijing, 100101, China
| | - Qinhao Liang
- College of Life Sciences, TaiKang Center for Life and Medical Sciences, Wuhan University, Wuhan, 430072, China
| | - Rui Liang
- Research Institute of Transplant Medicine, Organ Transplant Center, NHC Key Laboratory for Critical Care Medicine, Tianjin First Central Hospital, Nankai University, Tianjin, 300384, China
| | - Feng Liu
- MOE Key Laboratory of Gene Function and Regulation, Guangzhou Key Laboratory of Healthy Aging Research, School of Life Sciences, Institute of Healthy Aging Research, Sun Yat-sen University, Guangzhou, 510275, China
| | - Xiaoqian Liu
- State Key Laboratory of Stem Cell and Reproductive Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, 100101, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China
- Beijing Institute for Stem Cell and Regenerative Medicine, Beijing, 100101, China
| | - Zuojun Liu
- School of Life Sciences, Hainan University, Haikou, 570228, China
| | - Oscar Junhong Luo
- Department of Systems Biomedical Sciences, School of Medicine, Jinan University, Guangzhou, 510632, China
| | - Jianwei Lv
- School of Life Sciences, Xiamen University, Xiamen, 361102, China
| | - Jingyi Ma
- The State Key Laboratory of Organ Failure Research, National Clinical Research Center of Kidney Disease, Division of Nephrology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
| | - Kehang Mao
- Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Center for Quantitative Biology (CQB), Peking University, Beijing, 100871, China
| | - Jiawei Nie
- Shanghai Institute of Hematology, State Key Laboratory for Medical Genomics, National Research Center for Translational Medicine (Shanghai), International Center for Aging and Cancer, Collaborative Innovation Center of Hematology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
| | - Xinhua Qiao
- National Laboratory of Biomacromolecules, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, 100101, China
| | - Xinpei Sun
- Peking University International Cancer Institute, Health Science Center, Peking University, Beijing, 100101, China
| | - Xiaoqiang Tang
- Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE, State Key Laboratory of Biotherapy, West China Second University Hospital, Sichuan University, Chengdu, 610041, China
| | - Jianfang Wang
- Institute for Regenerative Medicine, Shanghai East Hospital, Frontier Science Center for Stem Cell Research, Shanghai Key Laboratory of Signaling and Disease Research, School of Life Sciences and Technology, Tongji University, Shanghai, 200092, China
| | - Qiaoran Wang
- CAS Key Laboratory of Genomic and Precision Medicine, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing, 100101, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
| | - Siyuan Wang
- Clinical Research Institute, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science & Peking Union Medical College, Beijing, 100730, China
| | - Xuan Wang
- Hepatobiliary and Pancreatic Center, Medical Research Center, Beijing Tsinghua Changgung Hospital, Beijing, 102218, China
| | - Yaning Wang
- Key Laboratory for Stem Cells and Tissue Engineering, Ministry of Education, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, China
- Advanced Medical Technology Center, The First Affiliated Hospital, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, China
| | - Yuhan Wang
- University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Membrane Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, 100101, China
- Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China
- Beijing Institute for Stem Cell and Regenerative Medicine, Beijing, 100101, China
| | - Rimo Wu
- Bioland Laboratory (Guangzhou Regenerative Medicine and Health Guangdong Laboratory), Guangzhou, 510005, China
| | - Kai Xia
- Center for Stem Cell Biology and Tissue Engineering, Key Laboratory for Stem Cells and Tissue Engineering, Ministry of Education, Sun Yat-sen University, Guangzhou, 510080, China
- National-Local Joint Engineering Research Center for Stem Cells and Regenerative Medicine, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, China
| | - Fu-Hui Xiao
- CAS Center for Excellence in Animal Evolution and Genetics, Chinese Academy of Sciences, Kunming, 650223, China
- State Key Laboratory of Genetic Resources and Evolution, Key Laboratory of Healthy Aging Research of Yunnan Province, Kunming Key Laboratory of Healthy Aging Study, KIZ/CUHK Joint Laboratory of Bioresources and Molecular Research in Common Diseases, Kunming Institute of Zoology, Chinese Academy of Sciences, Kunming, 650223, China
| | - Lingyan Xu
- Shanghai Key Laboratory of Regulatory Biology, Institute of Biomedical Sciences and School of Life Sciences, East China Normal University, Shanghai, 200241, China
| | - Yingying Xu
- CAS Key Laboratory of Genomic and Precision Medicine, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing, 100101, China
| | - Haoteng Yan
- Beijing Municipal Geriatric Medical Research Center, Xuanwu Hospital, Capital Medical University, Beijing, 100053, China
- Aging Translational Medicine Center, International Center for Aging and Cancer, Xuanwu Hospital, Capital Medical University, Beijing, 100053, China
- Advanced Innovation Center for Human Brain Protection, and National Clinical Research Center for Geriatric Disorders, Xuanwu Hospital Capital Medical University, Beijing, 100053, China
| | - Liang Yang
- CAS Key Laboratory of Regenerative Biology, Joint School of Life Sciences, Guangzhou Institutes of Biomedicine and Health, Chinese Academy of Sciences, Guangzhou Medical University, Guangzhou, 510530, China
| | - Ruici Yang
- State Key Laboratory of Cell Biology, Shanghai Institute of Biochemistry and Cell Biology, Center for Excellence in Molecular Cell Science, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Shanghai, 200031, China
| | - Yuanxin Yang
- Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai, 201210, China
| | - Yilin Ying
- Department of Geriatrics, Medical Center on Aging of Shanghai Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- International Laboratory in Hematology and Cancer, Shanghai Jiao Tong University School of Medicine/Ruijin Hospital, Shanghai, 200025, China
| | - Le Zhang
- Gerontology Center of Hubei Province, Wuhan, 430000, China
- Institute of Gerontology, Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
| | - Weiwei Zhang
- Department of Cardiology, The Second Medical Centre, Chinese PLA General Hospital, National Clinical Research Center for Geriatric Diseases, Beijing, 100853, China
| | - Wenwan Zhang
- CAS Key Laboratory of Tissue Microenvironment and Tumor, Shanghai Institute of Nutrition and Health, Chinese Academy of Sciences, Shanghai, 200031, China
| | - Xing Zhang
- Key Laboratory of Ministry of Education, School of Aerospace Medicine, Fourth Military Medical University, Xi'an, 710032, China
| | - Zhuo Zhang
- Optogenetics & Synthetic Biology Interdisciplinary Research Center, State Key Laboratory of Bioreactor Engineering, Shanghai Frontiers Science Center of Optogenetic Techniques for Cell Metabolism, School of Pharmacy, East China University of Science and Technology, Shanghai, 200237, China
- Research Unit of New Techniques for Live-cell Metabolic Imaging, Chinese Academy of Medical Sciences, Beijing, 100730, China
| | - Min Zhou
- Department of Endocrinology, Endocrinology Research Center, Xiangya Hospital of Central South University, Changsha, 410008, China
| | - Rui Zhou
- Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
| | - Qingchen Zhu
- CAS Key Laboratory of Tissue Microenvironment and Tumor, Shanghai Institute of Nutrition and Health, Chinese Academy of Sciences, Shanghai, 200031, China
| | - Zhengmao Zhu
- Department of Genetics and Cell Biology, College of Life Science, Nankai University, Tianjin, 300071, China
- Haihe Laboratory of Cell Ecosystem, Chinese Academy of Medical Sciences & Peking Union Medical College, Tianjin, 300020, China
| | - Feng Cao
- Department of Cardiology, The Second Medical Centre, Chinese PLA General Hospital, National Clinical Research Center for Geriatric Diseases, Beijing, 100853, China.
| | - Zhongwei Cao
- State Key Laboratory of Biotherapy, West China Second University Hospital, Sichuan University, Chengdu, 610041, China.
| | - Piu Chan
- National Clinical Research Center for Geriatric Diseases, Xuanwu Hospital, Capital Medical University, Beijing, 100053, China.
| | - Chang Chen
- National Laboratory of Biomacromolecules, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, 100101, China.
| | - Guobing Chen
- Department of Microbiology and Immunology, School of Medicine, Jinan University, Guangzhou, 510632, China.
- Guangdong-Hong Kong-Macau Great Bay Area Geroscience Joint Laboratory, Guangzhou, 510000, China.
| | - Hou-Zao Chen
- Department of Biochemistry and Molecular Biology, State Key Laboratory of Medical Molecular Biology, Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100005, China.
| | - Jun Chen
- Peking University Research Center on Aging, Beijing Key Laboratory of Protein Posttranslational Modifications and Cell Function, Department of Biochemistry and Molecular Biology, Department of Integration of Chinese and Western Medicine, School of Basic Medical Science, Peking University, Beijing, 100191, China.
| | - Weimin Ci
- CAS Key Laboratory of Genomic and Precision Medicine, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing, 100101, China.
| | - Bi-Sen Ding
- State Key Laboratory of Biotherapy, West China Second University Hospital, Sichuan University, Chengdu, 610041, China.
| | - Qiurong Ding
- CAS Key Laboratory of Nutrition, Metabolism and Food Safety, Shanghai Institute of Nutrition and Health, University of Chinese Academy of Sciences, Chinese Academy of Sciences, Shanghai, 200031, China.
| | - Feng Gao
- Key Laboratory of Ministry of Education, School of Aerospace Medicine, Fourth Military Medical University, Xi'an, 710032, China.
| | - Jing-Dong J Han
- Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Center for Quantitative Biology (CQB), Peking University, Beijing, 100871, China.
| | - Kai Huang
- Clinic Center of Human Gene Research, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China.
- Hubei Clinical Research Center of Metabolic and Cardiovascular Disease, Huazhong University of Science and Technology, Wuhan, 430022, China.
- Hubei Key Laboratory of Metabolic Abnormalities and Vascular Aging, Huazhong University of Science and Technology, Wuhan, 430022, China.
- Department of Cardiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China.
| | - Zhenyu Ju
- Key Laboratory of Regenerative Medicine of Ministry of Education, Institute of Ageing and Regenerative Medicine, Jinan University, Guangzhou, 510632, China.
| | - Qing-Peng Kong
- CAS Center for Excellence in Animal Evolution and Genetics, Chinese Academy of Sciences, Kunming, 650223, China.
- State Key Laboratory of Genetic Resources and Evolution, Key Laboratory of Healthy Aging Research of Yunnan Province, Kunming Key Laboratory of Healthy Aging Study, KIZ/CUHK Joint Laboratory of Bioresources and Molecular Research in Common Diseases, Kunming Institute of Zoology, Chinese Academy of Sciences, Kunming, 650223, China.
| | - Ji Li
- Department of Dermatology, Xiangya Hospital, Central South University, Changsha, 410008, China.
- Hunan Key Laboratory of Aging Biology, Xiangya Hospital, Central South University, Changsha, 410008, China.
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, 410008, China.
| | - Jian Li
- The Key Laboratory of Geriatrics, Beijing Institute of Geriatrics, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing Hospital/National Center of Gerontology of National Health Commission, Beijing, 100730, China.
| | - Xin Li
- State Key Laboratory of Stem Cell and Reproductive Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, 100101, China.
- University of Chinese Academy of Sciences, Beijing, 100049, China.
- Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China.
- Beijing Institute for Stem Cell and Regenerative Medicine, Beijing, 100101, China.
| | - Baohua Liu
- School of Basic Medical Sciences, Shenzhen University Medical School, Shenzhen, 518060, China.
| | - Feng Liu
- Metabolic Syndrome Research Center, The Second Xiangya Hospital, Central South University, Changsha, 410011, China.
| | - Lin Liu
- Department of Genetics and Cell Biology, College of Life Science, Nankai University, Tianjin, 300071, China.
- Haihe Laboratory of Cell Ecosystem, Chinese Academy of Medical Sciences & Peking Union Medical College, Tianjin, 300020, China.
- Institute of Translational Medicine, Tianjin Union Medical Center, Nankai University, Tianjin, 300000, China.
- State Key Laboratory of Medicinal Chemical Biology, Nankai University, Tianjin, 300350, China.
| | - Qiang Liu
- Department of Neurology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, China.
| | - Qiang Liu
- Department of Neurology, Tianjin Neurological Institute, Tianjin Medical University General Hospital, Tianjin, 300052, China.
- Tianjin Institute of Immunology, Tianjin Medical University, Tianjin, 300070, China.
| | - Xingguo Liu
- CAS Key Laboratory of Regenerative Biology, Joint School of Life Sciences, Guangzhou Institutes of Biomedicine and Health, Chinese Academy of Sciences, Guangzhou Medical University, Guangzhou, 510530, China.
| | - Yong Liu
- College of Life Sciences, TaiKang Center for Life and Medical Sciences, Wuhan University, Wuhan, 430072, China.
| | - Xianghang Luo
- Department of Endocrinology, Endocrinology Research Center, Xiangya Hospital of Central South University, Changsha, 410008, China.
| | - Shuai Ma
- University of Chinese Academy of Sciences, Beijing, 100049, China.
- State Key Laboratory of Membrane Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, 100101, China.
- Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China.
- Beijing Institute for Stem Cell and Regenerative Medicine, Beijing, 100101, China.
| | - Xinran Ma
- Shanghai Key Laboratory of Regulatory Biology, Institute of Biomedical Sciences and School of Life Sciences, East China Normal University, Shanghai, 200241, China.
| | - Zhiyong Mao
- Shanghai Key Laboratory of Maternal Fetal Medicine, Clinical and Translational Research Center of Shanghai First Maternity and Infant Hospital, Frontier Science Center for Stem Cell Research, Shanghai Key Laboratory of Signaling and Disease Research, School of Life Sciences and Technology, Tongji University, Shanghai, 200092, China.
| | - Jing Nie
- The State Key Laboratory of Organ Failure Research, National Clinical Research Center of Kidney Disease, Division of Nephrology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China.
| | - Yaojin Peng
- State Key Laboratory of Stem Cell and Reproductive Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, 100101, China.
- University of Chinese Academy of Sciences, Beijing, 100049, China.
- Beijing Institute for Stem Cell and Regenerative Medicine, Beijing, 100101, China.
| | - Jing Qu
- State Key Laboratory of Stem Cell and Reproductive Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, 100101, China.
- University of Chinese Academy of Sciences, Beijing, 100049, China.
- Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China.
- Beijing Institute for Stem Cell and Regenerative Medicine, Beijing, 100101, China.
| | - Jie Ren
- CAS Key Laboratory of Genomic and Precision Medicine, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing, 100101, China.
- University of Chinese Academy of Sciences, Beijing, 100049, China.
- Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China.
| | - Ruibao Ren
- Shanghai Institute of Hematology, State Key Laboratory for Medical Genomics, National Research Center for Translational Medicine (Shanghai), International Center for Aging and Cancer, Collaborative Innovation Center of Hematology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China.
- International Center for Aging and Cancer, Hainan Medical University, Haikou, 571199, China.
| | - Moshi Song
- University of Chinese Academy of Sciences, Beijing, 100049, China.
- State Key Laboratory of Membrane Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, 100101, China.
- Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China.
- Beijing Institute for Stem Cell and Regenerative Medicine, Beijing, 100101, China.
| | - Zhou Songyang
- MOE Key Laboratory of Gene Function and Regulation, Guangzhou Key Laboratory of Healthy Aging Research, School of Life Sciences, Institute of Healthy Aging Research, Sun Yat-sen University, Guangzhou, 510275, China.
- Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, 510120, China.
| | - Yi Eve Sun
- Stem Cell Translational Research Center, Tongji Hospital, Tongji University School of Medicine, Shanghai, 200065, China.
| | - Yu Sun
- CAS Key Laboratory of Tissue Microenvironment and Tumor, Shanghai Institute of Nutrition and Health, Chinese Academy of Sciences, Shanghai, 200031, China.
- Department of Medicine and VAPSHCS, University of Washington, Seattle, WA, 98195, USA.
| | - Mei Tian
- Human Phenome Institute, Fudan University, Shanghai, 201203, China.
| | - Shusen Wang
- Research Institute of Transplant Medicine, Organ Transplant Center, NHC Key Laboratory for Critical Care Medicine, Tianjin First Central Hospital, Nankai University, Tianjin, 300384, China.
| | - Si Wang
- Beijing Municipal Geriatric Medical Research Center, Xuanwu Hospital, Capital Medical University, Beijing, 100053, China.
- Aging Translational Medicine Center, International Center for Aging and Cancer, Xuanwu Hospital, Capital Medical University, Beijing, 100053, China.
- Advanced Innovation Center for Human Brain Protection, and National Clinical Research Center for Geriatric Disorders, Xuanwu Hospital, Capital Medical University, Beijing, 100053, China.
| | - Xia Wang
- School of Pharmaceutical Sciences, Tsinghua University, Beijing, 100084, China.
| | - Xiaoning Wang
- Institute of Geriatrics, The second Medical Center, Beijing Key Laboratory of Aging and Geriatrics, National Clinical Research Center for Geriatric Diseases, Chinese PLA General Hospital, Beijing, 100853, China.
| | - Yan-Jiang Wang
- Department of Neurology and Center for Clinical Neuroscience, Daping Hospital, Third Military Medical University, Chongqing, 400042, China.
| | - Yunfang Wang
- Hepatobiliary and Pancreatic Center, Medical Research Center, Beijing Tsinghua Changgung Hospital, Beijing, 102218, China.
| | - Catherine C L Wong
- Clinical Research Institute, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science & Peking Union Medical College, Beijing, 100730, China.
| | - Andy Peng Xiang
- Center for Stem Cell Biology and Tissue Engineering, Key Laboratory for Stem Cells and Tissue Engineering, Ministry of Education, Sun Yat-sen University, Guangzhou, 510080, China.
- National-Local Joint Engineering Research Center for Stem Cells and Regenerative Medicine, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, China.
| | - Yichuan Xiao
- CAS Key Laboratory of Tissue Microenvironment and Tumor, Shanghai Institute of Nutrition and Health, Chinese Academy of Sciences, Shanghai, 200031, China.
| | - Zhengwei Xie
- Peking University International Cancer Institute, Health Science Center, Peking University, Beijing, 100101, China.
- Beijing & Qingdao Langu Pharmaceutical R&D Platform, Beijing Gigaceuticals Tech. Co. Ltd., Beijing, 100101, China.
| | - Daichao Xu
- Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai, 201210, China.
| | - Jing Ye
- Department of Geriatrics, Medical Center on Aging of Shanghai Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China.
- International Laboratory in Hematology and Cancer, Shanghai Jiao Tong University School of Medicine/Ruijin Hospital, Shanghai, 200025, China.
| | - Rui Yue
- Institute for Regenerative Medicine, Shanghai East Hospital, Frontier Science Center for Stem Cell Research, Shanghai Key Laboratory of Signaling and Disease Research, School of Life Sciences and Technology, Tongji University, Shanghai, 200092, China.
| | - Cuntai Zhang
- Gerontology Center of Hubei Province, Wuhan, 430000, China.
- Institute of Gerontology, Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China.
| | - Hongbo Zhang
- Key Laboratory for Stem Cells and Tissue Engineering, Ministry of Education, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, China.
- Advanced Medical Technology Center, The First Affiliated Hospital, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, China.
| | - Liang Zhang
- CAS Key Laboratory of Tissue Microenvironment and Tumor, Shanghai Institute of Nutrition and Health, Chinese Academy of Sciences, Shanghai, 200031, China.
- Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China.
| | - Weiqi Zhang
- CAS Key Laboratory of Genomic and Precision Medicine, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing, 100101, China.
- University of Chinese Academy of Sciences, Beijing, 100049, China.
- Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China.
| | - Yong Zhang
- Bioland Laboratory (Guangzhou Regenerative Medicine and Health Guangdong Laboratory), Guangzhou, 510005, China.
- The State Key Laboratory of Medical Molecular Biology, Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences and School of Basic Medicine, Peking Union Medical College, Beijing, 100005, China.
| | - Yun-Wu Zhang
- Fujian Provincial Key Laboratory of Neurodegenerative Disease and Aging Research, Institute of Neuroscience, School of Medicine, Xiamen University, Xiamen, 361102, China.
| | - Zhuohua Zhang
- Key Laboratory of Molecular Precision Medicine of Hunan Province and Center for Medical Genetics, Institute of Molecular Precision Medicine, Xiangya Hospital, Central South University, Changsha, 410078, China.
- Department of Neurosciences, Hengyang Medical School, University of South China, Hengyang, 421001, China.
| | - Tongbiao Zhao
- State Key Laboratory of Stem Cell and Reproductive Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, 100101, China.
- University of Chinese Academy of Sciences, Beijing, 100049, China.
- Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China.
- Beijing Institute for Stem Cell and Regenerative Medicine, Beijing, 100101, China.
| | - Yuzheng Zhao
- Optogenetics & Synthetic Biology Interdisciplinary Research Center, State Key Laboratory of Bioreactor Engineering, Shanghai Frontiers Science Center of Optogenetic Techniques for Cell Metabolism, School of Pharmacy, East China University of Science and Technology, Shanghai, 200237, China.
- Research Unit of New Techniques for Live-cell Metabolic Imaging, Chinese Academy of Medical Sciences, Beijing, 100730, China.
| | - Dahai Zhu
- Bioland Laboratory (Guangzhou Regenerative Medicine and Health Guangdong Laboratory), Guangzhou, 510005, China.
- The State Key Laboratory of Medical Molecular Biology, Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences and School of Basic Medicine, Peking Union Medical College, Beijing, 100005, China.
| | - Weiguo Zou
- State Key Laboratory of Cell Biology, Shanghai Institute of Biochemistry and Cell Biology, Center for Excellence in Molecular Cell Science, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Shanghai, 200031, China.
| | - Gang Pei
- Shanghai Key Laboratory of Signaling and Disease Research, Laboratory of Receptor-Based Biomedicine, The Collaborative Innovation Center for Brain Science, School of Life Sciences and Technology, Tongji University, Shanghai, 200070, China.
| | - Guang-Hui Liu
- University of Chinese Academy of Sciences, Beijing, 100049, China.
- State Key Laboratory of Membrane Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, 100101, China.
- Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China.
- Beijing Institute for Stem Cell and Regenerative Medicine, Beijing, 100101, China.
- Advanced Innovation Center for Human Brain Protection, and National Clinical Research Center for Geriatric Disorders, Xuanwu Hospital, Capital Medical University, Beijing, 100053, China.
| |
Collapse
|
48
|
Yeh TC, Chen SJ, Chou YB, Luo AC, Deng YS, Lee YH, Chang PH, Lin CJ, Tai MC, Chen YC, Ko YC. Predicting visual outcome after surgery in patients with idiopathic epiretinal membrane using a novel convolutional neural network. Retina 2023; 43:767-774. [PMID: 36727822 DOI: 10.1097/iae.0000000000003714] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
PURPOSE To develop a deep convolutional neural network that enables the prediction of postoperative visual outcomes after epiretinal membrane surgery based on preoperative optical coherence tomography images and clinical parameters to refine surgical decision making. METHODS A total of 529 patients with idiopathic epiretinal membrane who underwent standard vitrectomy with epiretinal membrane peeling surgery by two surgeons between January 1, 2014, and June 1, 2020, were enrolled. The newly developed Heterogeneous Data Fusion Net was introduced to predict postoperative visual acuity outcomes (improvement ≥2 lines in Snellen chart) 12 months after surgery based on preoperative cross-sectional optical coherence tomography images and clinical factors, including age, sex, and preoperative visual acuity. The predictive accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve of the convolutional neural network model were evaluated. RESULTS The developed model demonstrated an overall accuracy for visual outcome prediction of 88.68% (95% CI, 79.0%-95.7%) with an area under the receiver operating characteristic curve of 97.8% (95% CI, 86.8%-98.0%), sensitivity of 87.0% (95% CI, 67.9%-95.5%), specificity of 92.9% (95% CI, 77.4%-98.0%), precision of 0.909, recall of 0.870, and F1 score of 0.889. The heatmaps identified the critical area for prediction as the ellipsoid zone of photoreceptors and the superficial retina, which was subjected to tangential traction of the proliferative membrane. CONCLUSION The novel Heterogeneous Data Fusion Net demonstrated high accuracy in the automated prediction of visual outcomes after weighing and leveraging multiple clinical parameters, including optical coherence tomography images. This approach may be helpful in establishing personalized therapeutic strategies for epiretinal membrane management.
Collapse
Affiliation(s)
- Tsai-Chu Yeh
- Department of Ophthalmology, Taipei Veterans General Hospital, Taipei City, Taiwan
- Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
| | - Shih-Jen Chen
- Department of Ophthalmology, Taipei Veterans General Hospital, Taipei City, Taiwan
- Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
| | - Yu-Bai Chou
- Department of Ophthalmology, Taipei Veterans General Hospital, Taipei City, Taiwan
- Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
| | - An-Chun Luo
- Industrial Technology Research Institute, Taipei City, Taiwan
| | - Yu-Shan Deng
- Industrial Technology Research Institute, Taipei City, Taiwan
| | - Yu-Hsien Lee
- Industrial Technology Research Institute, Taipei City, Taiwan
| | - Po-Han Chang
- Industrial Technology Research Institute, Taipei City, Taiwan
| | - Chun-Ju Lin
- Industrial Technology Research Institute, Taipei City, Taiwan
| | - Ming-Chi Tai
- Industrial Technology Research Institute, Taipei City, Taiwan
- Department of Materials Science and Engineering, National Tsing-Hua University, Taipei City, Taiwan
| | - Ying-Chi Chen
- Division of Computer Science and Engineering, University of Michigan, Ann Arbor, Michigan
| | - Yu-Chieh Ko
- Department of Ophthalmology, Taipei Veterans General Hospital, Taipei City, Taiwan
- Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
| |
Collapse
|
49
|
Dolar-Szczasny J, Barańska A, Rejdak R. Evaluating the Efficacy of Teleophthalmology in Delivering Ophthalmic Care to Underserved Populations: A Literature Review. J Clin Med 2023; 12:jcm12093161. [PMID: 37176602 PMCID: PMC10179149 DOI: 10.3390/jcm12093161] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2023] [Revised: 04/24/2023] [Accepted: 04/25/2023] [Indexed: 05/15/2023] Open
Abstract
Technological advancement has brought commendable changes to medicine, improving diagnosis, treatment, and interventions. Telemedicine has been adopted by various subspecialties, including ophthalmology. Over the years, teleophthalmology has been implemented in various countries, and continuous progress is being made in this area. In underserved populations, socioeconomic factors leave people with little or no access to healthcare facilities and at higher risk of eye diseases and vision impairment. Transportation is the major hurdle these people face in obtaining access to eye care at the main hospitals. There is a dire need for accessible eye care for such populations, and teleophthalmology is a ray of hope for delivering eye care to underserved people. Numerous studies have reported the advantages of teleophthalmology for rural populations, such as being cost-effective, time-saving, reliable, efficient, and satisfactory for patients. Although it is also practiced in urban populations, its benefits are amplified for rural populations. However, there are certain obstacles as well, such as the cost of equipment, the lack of steady electricity and internet supply in rural areas, and the attitude of people in certain regions toward accepting teleophthalmology. In this review, we discuss in detail eye health in rural populations, teleophthalmology, and its effectiveness in the rural populations of different countries.
Collapse
Affiliation(s)
- Joanna Dolar-Szczasny
- Chair and Department of General and Pediatric Ophthalmology, Medical University of Lublin, 20-079 Lublin, Poland
| | - Agnieszka Barańska
- Department of Medical Informatics and Statistics with E-Learning Laboratory, Medical University of Lublin, 20-090 Lublin, Poland
| | - Robert Rejdak
- Chair and Department of General and Pediatric Ophthalmology, Medical University of Lublin, 20-079 Lublin, Poland
| |
Collapse
|
50
|
Cao S, Zhang R, Jiang A, Kuerban M, Wumaier A, Wu J, Xie K, Aizezi M, Tuersun A, Liang X, Chen R. Application effect of an artificial intelligence-based fundus screening system: evaluation in a clinical setting and population screening. Biomed Eng Online 2023; 22:38. [PMID: 37095516 PMCID: PMC10127070 DOI: 10.1186/s12938-023-01097-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2022] [Accepted: 03/24/2023] [Indexed: 04/26/2023] Open
Abstract
BACKGROUND To investigate the application effect of an artificial intelligence (AI)-based fundus screening system in a real-world clinical environment. METHODS A total of 637 color fundus images were included in the analysis of the application of the AI-based fundus screening system in the clinical environment, and 20,355 images were analyzed in the population screening. RESULTS The AI-based fundus screening system demonstrated superior diagnostic effectiveness for diabetic retinopathy (DR), retinal vein occlusion (RVO) and pathological myopia (PM) according to gold-standard referral. The sensitivity, specificity, accuracy, positive predictive value (PPV) and negative predictive value (NPV) for these three fundus abnormalities were greater (all > 80%) than those for age-related macular degeneration (ARMD), referable glaucoma and other abnormalities. The percentages of the different diagnostic conditions were similar in both the clinical environment and the population screening. CONCLUSIONS In a real-world setting, our AI-based fundus screening system could detect seven conditions, with better performance for DR, RVO and PM. Testing in the clinical environment and through population screening demonstrated the clinical utility of our AI-based fundus screening system in the early detection of ocular fundus abnormalities and the prevention of blindness.
Collapse
Affiliation(s)
- Shujuan Cao
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Rongpei Zhang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Aixin Jiang
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Mayila Kuerban
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Aizezi Wumaier
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Jianhua Wu
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Kaihua Xie
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Mireayi Aizezi
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Abudurexiti Tuersun
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Xuanwei Liang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China.
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China.
| | - Rongxin Chen
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China.
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China.
| |
Collapse
|