1
Sobhi N, Sadeghi-Bazargani Y, Mirzaei M, Abdollahi M, Jafarizadeh A, Pedrammehr S, Alizadehsani R, Tan RS, Islam SMS, Acharya UR. Artificial intelligence for early detection of diabetes mellitus complications via retinal imaging. J Diabetes Metab Disord 2025; 24:104. [PMID: 40224528 PMCID: PMC11993533 DOI: 10.1007/s40200-025-01596-7]
Abstract
Background Diabetes mellitus (DM) increases the risk of vascular complications, and retinal vasculature imaging serves as a valuable indicator of both microvascular and macrovascular health. Moreover, artificial intelligence (AI)-enabled systems developed for high-throughput detection of diabetic retinopathy (DR) using digitized retinal images have become clinically adopted. This study reviews AI applications using retinal images for DM-related complications, highlighting advancements beyond DR screening, diagnosis, and prognosis, and addresses implementation challenges, such as ethics, data privacy, equitable access, and explainability. Methods We conducted a thorough literature search across several databases, including PubMed, Scopus, and Web of Science, focusing on studies involving diabetes, the retina, and artificial intelligence. We reviewed the original research based on their methodology, AI algorithms, data processing techniques, and validation procedures to ensure a detailed analysis of AI applications in diabetic retinal imaging. Results Retinal images can be used to diagnose DM complications including DR, neuropathy, nephropathy, and atherosclerotic cardiovascular disease, as well as to predict the risk of cardiovascular events. Beyond DR screening, AI integration also offers significant potential to address the challenges in the comprehensive care of patients with DM. Conclusion With the ability to evaluate the patient's health status in relation to DM complications as well as risk prognostication of future cardiovascular complications, AI-assisted retinal image analysis has the potential to become a central tool for modern personalized medicine in patients with DM.
Affiliation(s)
- Navid Sobhi
- Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Majid Mirzaei
- Student Research Committee, Tabriz University of Medical Sciences, Tabriz, Iran
- Mirsaeed Abdollahi
- Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Ali Jafarizadeh
- Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Siamak Pedrammehr
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, 75 Pigdons Rd, Waurn Ponds, VIC 3216 Australia
- Faculty of Design, Tabriz Islamic Art University, Tabriz, Iran
- Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, 75 Pigdons Rd, Waurn Ponds, VIC 3216 Australia
- Ru-San Tan
- National Heart Centre Singapore, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Sheikh Mohammed Shariful Islam
- Institute for Physical Activity and Nutrition, School of Exercise and Nutrition Sciences, Deakin University, Melbourne, VIC Australia
- Cardiovascular Division, The George Institute for Global Health, Newtown, Australia
- Sydney Medical School, University of Sydney, Camperdown, Australia
- U. Rajendra Acharya
- School of Mathematics, Physics, and Computing, University of Southern Queensland, Springfield, QLD 4300 Australia
- Centre for Health Research, University of Southern Queensland, Springfield, Australia
2
Casazza M, Bolz M, Huemer J. Telemedicine in ophthalmology. Wien Med Wochenschr 2025; 175:153-161. [PMID: 40227513 DOI: 10.1007/s10354-025-01081-z]
Abstract
Since its beginnings in the 1970s, telemedicine has advanced extensively. Telemedicine is now more accessible and powerful than ever thanks to developments in medical imaging, Internet accessibility, telecommunications infrastructure, exponential growth in computing power, and related computer-aided diagnosis. This is especially true in the field of ophthalmology. With the COVID-19 pandemic serving as a catalyst for the widespread adoption and acceptance of teleophthalmology, new models of healthcare provision integrating telemedicine are needed to meet the challenges of the modern world. The demand for ophthalmic services is growing globally due to population growth, aging, and a shortage of ophthalmologists. In this review, we discuss the development and use of telemedicine in the field of ophthalmology and shed light on the benefits and drawbacks of teleophthalmology.
Affiliation(s)
- Marina Casazza
- Department of Ophthalmology and Optometry, Kepler University Hospital, Johannes Kepler University, Linz, Austria
- Matthias Bolz
- Department of Ophthalmology and Optometry, Kepler University Hospital, Johannes Kepler University, Linz, Austria
- Josef Huemer
- Department of Ophthalmology and Optometry, Kepler University Hospital, Johannes Kepler University, Linz, Austria.
- Moorfields Eye Hospital NHS Foundation Trust, London, UK.
3
Riedl S, Birner K, Schmidt-Erfurth U. Artificial intelligence in managing retinal disease-current concepts and relevant aspects for health care providers. Wien Med Wochenschr 2025; 175:143-152. [PMID: 39992600 DOI: 10.1007/s10354-024-01069-1]
Abstract
Given how the diagnosis and management of many ocular and, most specifically, retinal diseases heavily rely on various imaging modalities, the introduction of artificial intelligence (AI) into this field has been a logical, inevitable, and successful development in recent decades. The field of retinal diseases has practically become a showcase for the use of AI in medicine. In this article, after providing a short overview of the most relevant retinal diseases and their socioeconomic impact, we highlight various aspects of how AI can be applied in research, diagnosis, and disease management and how this is expected to alter patient flows, affecting also health care professionals beyond ophthalmologists.
Affiliation(s)
- Sophie Riedl
- Department of Ophthalmology and Optometry, Laboratory of Ophthalmic Image Analysis, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Klaudia Birner
- Department of Ophthalmology and Optometry, Laboratory of Ophthalmic Image Analysis, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Ursula Schmidt-Erfurth
- Department of Ophthalmology and Optometry, Laboratory of Ophthalmic Image Analysis, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria.
4
Irodi A, Zhu Z, Grzybowski A, Wu Y, Cheung CY, Li H, Tan G, Wong TY. The evolution of diabetic retinopathy screening. Eye (Lond) 2025; 39:1040-1046. [PMID: 39910282 PMCID: PMC11978858 DOI: 10.1038/s41433-025-03633-4]
Abstract
Diabetic retinopathy (DR) is a leading cause of preventable blindness and has emerged as a global health challenge, necessitating the development of robust management strategies. As DR prevalence continues to rise, advancements in screening methods have become increasingly critical for timely detection and intervention. This review examines three key advancements in DR screening: a shift from specialist to generalist approach, the adoption of telemedicine strategies for expanded access and enhanced efficiency, and the integration of artificial intelligence (AI). In particular, AI offers unprecedented benefits in the form of sustainability and scalability for not only DR screening but other aspects of eye health and the medical field as a whole. Though there remain barriers to address, AI holds vast potential for reshaping DR screening and significantly improving patient outcomes globally.
Affiliation(s)
- Anushka Irodi
- School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom
- Zhuoting Zhu
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Andrzej Grzybowski
- Department of Ophthalmology, University of Warmia and Mazury, Olsztyn, Poland
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Yilan Wu
- Tsinghua Medicine, Tsinghua University, Beijing, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Huating Li
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Shanghai, China
- Gavin Tan
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Tien Yin Wong
- Tsinghua Medicine, Tsinghua University, Beijing, China.
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore.
- Beijing Visual Science and Translational Eye Research Institute (BERI), School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China.
5
Zhu Z, Wang Y, Qi Z, Hu W, Zhang X, Wagner SK, Wang Y, Ran AR, Ong J, Waisberg E, Masalkhi M, Suh A, Tham YC, Cheung CY, Yang X, Yu H, Ge Z, Wang W, Sheng B, Liu Y, Lee AG, Denniston AK, Wijngaarden PV, Keane PA, Cheng CY, He M, Wong TY. Oculomics: Current concepts and evidence. Prog Retin Eye Res 2025; 106:101350. [PMID: 40049544 DOI: 10.1016/j.preteyeres.2025.101350]
Abstract
The eye provides novel insights into general health, as well as into the pathogenesis and development of systemic diseases. In the past decade, growing evidence has demonstrated that the eye's structure and function mirror multiple systemic health conditions, especially cardiovascular diseases, neurodegenerative disorders, and kidney impairment. This has given rise to the field of oculomics: the application of ophthalmic biomarkers to understand mechanisms and to detect and predict disease. The development of this field has been accelerated by three major advances: 1) the availability and widespread clinical adoption of high-resolution and non-invasive ophthalmic imaging ("hardware"); 2) the availability of large studies to interrogate associations ("big data"); and 3) the development of novel analytical methods, including artificial intelligence (AI) ("software"). Oculomics offers an opportunity to enhance our understanding of the interplay between the eye and the body, while supporting the development of innovative diagnostic, prognostic, and therapeutic tools. These advances have been further accelerated by developments in AI, coupled with large-scale datasets that link ocular imaging data with systemic health data. Oculomics also enables the detection, screening, diagnosis, and monitoring of many systemic health conditions. Furthermore, oculomics with AI allows prediction of the risk of systemic diseases, enabling risk stratification and opening up new avenues for individualized risk prediction and prevention, facilitating personalized medicine. In this review, we summarise current concepts and evidence in the field of oculomics, highlighting the progress that has been made, remaining challenges, and opportunities for future research.
Affiliation(s)
- Zhuoting Zhu
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia.
- Yueye Wang
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Ziyi Qi
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia; Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Diseases, Shanghai, China
- Wenyi Hu
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia
- Xiayin Zhang
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Siegfried K Wagner
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
- Yujie Wang
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Joshua Ong
- Department of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, USA
- Ethan Waisberg
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Mouayad Masalkhi
- University College Dublin School of Medicine, Belfield, Dublin, Ireland
- Alex Suh
- Tulane University School of Medicine, New Orleans, LA, USA
- Yih Chung Tham
- Department of Ophthalmology and Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Xiaohong Yang
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Honghua Yu
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Zongyuan Ge
- Monash e-Research Center, Faculty of Engineering, Airdoc Research, Nvidia AI Technology Research Center, Monash University, Melbourne, VIC, Australia
- Wei Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yun Liu
- Google Research, Mountain View, CA, USA
- Andrew G Lee
- Center for Space Medicine and the Department of Ophthalmology, Baylor College of Medicine, Houston, USA; Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, USA; The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, USA; Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, USA; Department of Ophthalmology, University of Texas Medical Branch, Galveston, USA; University of Texas MD Anderson Cancer Center, Houston, USA; Texas A&M College of Medicine, Bryan, USA; Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, USA
- Alastair K Denniston
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK; National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre (BRC), University Hospital Birmingham and University of Birmingham, Birmingham, UK; University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK
- Peter van Wijngaarden
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia; Florey Institute of Neuroscience and Mental Health, University of Melbourne, Parkville, VIC, Australia
- Pearse A Keane
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
- Ching-Yu Cheng
- Department of Ophthalmology and Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Mingguang He
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Hong Kong, China
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China.
6
Yang Q, Bee YM, Lim CC, Sabanayagam C, Yim-Lui Cheung C, Wong TY, Ting DS, Lim LL, Li H, He M, Lee AY, Shaw AJ, Keong YK, Wei Tan GS. Use of artificial intelligence with retinal imaging in screening for diabetes-associated complications: systematic review. EClinicalMedicine 2025; 81:103089. [PMID: 40052065 PMCID: PMC11883405 DOI: 10.1016/j.eclinm.2025.103089]
Abstract
Background Artificial intelligence (AI) has been used to automate detection of retinal diseases from retinal images with great success, in particular for screening for diabetic retinopathy, a major complication of diabetes. Since persons with diabetes routinely receive retinal imaging to evaluate their diabetic retinopathy status, AI-based retinal imaging may have potential to be used as an opportunistic comprehensive screening tool for multiple systemic micro- and macrovascular complications of diabetes. Methods We conducted a qualitative systematic review of published literature using AI on retinal images to detect systemic diabetes complications. We searched three main databases: PubMed, Google Scholar, and Web of Science (January 1, 2000, to October 1, 2024). Research that used AI to evaluate the associations between retinal images and diabetes-associated complications, or research involving patients with diabetes with retinal imaging and AI systems, was included. Our primary focus was on articles related to AI, retinal images, and diabetes-associated complications. We evaluated the robustness of each study based on the development of the AI algorithm, the size and quality of the training dataset, internal validation and external testing, and reported performance. Quality assessments were employed to ensure the inclusion of high-quality studies, and data extraction was conducted systematically to gather pertinent information for analysis. This study has been registered on PROSPERO under the registration ID CRD42023493512. Findings From a total of 337 abstracts, 38 studies were included. These studies covered a range of topics related to prediction of diabetes in pre-diabetic or non-diabetic individuals (n = 4), diabetes-related systemic risk factors (n = 10), detection of microvascular complications (n = 8), and detection of macrovascular complications (n = 17). Most studies (n = 32) utilized color fundus photographs (CFP) as the retinal image modality, while others employed optical coherence tomography (OCT) (n = 6). The performance of the AI systems varied, with AUCs ranging from 0.676 to 0.971 for prediction or identification of the different complications. Study designs included cross-sectional and cohort studies with sample sizes ranging from 100 to over 100,000 participants. Risk of bias was evaluated using the Newcastle-Ottawa Scale and AXIS, with most studies scoring as low to moderate risk. Interpretation Our review highlights the potential for AI algorithms applied to retinal images, particularly CFP, to screen for, predict, or diagnose the various microvascular and macrovascular complications of diabetes. However, we identified few studies with longitudinal data and a paucity of randomized controlled trials, reflecting a gap between the development of AI algorithms and real-world implementation and translational studies. Funding Dr. Gavin Siew Wei Tan is supported by: 1. DYNAMO: Diabetes studY on Nephropathy And other Microvascular cOmplications II, supported by the National Medical Research Council (MOH-001327-03): data collection, analysis, trial design. 2. Prognostic significance of novel multimodal imaging markers for diabetic retinopathy: towards improving the staging for diabetic retinopathy, supported by an NMRC Clinician Scientist Award (CSA)-Investigator (INV) (MOH-001047-00).
Affiliation(s)
- Qianhui Yang
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, China
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Republic of Singapore
- Duke-NUS Medical School, Singapore, Republic of Singapore
- Yong Mong Bee
- Department of Endocrinology, Singapore General Hospital, Singapore
- Ciwei Cynthia Lim
- Department of Renal Medicine, Singapore General Hospital, Academia Level 3, 20 College Road, Singapore, 169856, Singapore
- Charumathi Sabanayagam
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Republic of Singapore
- Duke-NUS Medical School, Singapore, Republic of Singapore
- Carol Yim-Lui Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Tien Yin Wong
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, China
- Tsinghua Medicine, Tsinghua University, Beijing, China
- Daniel S.W. Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Republic of Singapore
- Duke-NUS Medical School, Singapore, Republic of Singapore
- Lee-Ling Lim
- Department of Medicine, Faculty of Medicine, University of Malaya, Kuala Lumpur, 50603, Malaysia
- HuaTing Li
- Department of Endocrinology and Metabolism, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yishan Road, Shanghai, 200233, China
- Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Hong Kong
- Aaron Y. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, United States
- A Jonathan Shaw
- Department of Biology & L. E. Anderson Bryophyte Herbarium, Duke University, Durham, NC, USA
- Yeo Khung Keong
- Department of Cardiology, National Heart Centre Singapore, Singapore
- Gavin Siew Wei Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Republic of Singapore
- Duke-NUS Medical School, Singapore, Republic of Singapore
7
Men Y, Fhima J, Celi LA, Ribeiro LZ, Nakayama LF, Behar JA. Deep learning generalization for diabetic retinopathy staging from fundus images. Physiol Meas 2025; 13:015001. [PMID: 39788077 DOI: 10.1088/1361-6579/ada86a]
Abstract
Objective. Diabetic retinopathy (DR) is a serious diabetes complication that can lead to vision loss, making timely identification crucial. Existing data-driven algorithms for DR staging from digital fundus images (DFIs) often struggle with generalization due to distribution shifts between training and target domains. Approach. To address this, DRStageNet, a deep learning model, was developed using six public and independent datasets with 91,984 DFIs from diverse demographics. Five pretrained self-supervised vision transformers (ViTs) were benchmarked, with the best further trained using a multi-source domain (MSD) fine-tuning strategy. Main results. DINOv2 showed a 27.4% improvement in L-Kappa versus the other pretrained ViTs. MSD fine-tuning improved performance in four of five target domains. The error analysis revealed that 60% of errors were due to incorrect labels, 77.5% of which were correctly classified by DRStageNet. Significance. We developed DRStageNet, a deep learning model designed to accurately stage DR while addressing the challenge of generalizing performance across target domains. The model and explainability heatmaps are available at www.aimlab-technion.com/lirot-ai.
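For orientation, the sketch below shows one way a pretrained self-supervised DINOv2 ViT backbone could be fine-tuned for 5-grade DR staging of the kind this abstract describes. The torch.hub entrypoint, the linear head, and the hyperparameters are illustrative assumptions, not the published DRStageNet code.

```python
# Minimal sketch (assumed setup): fine-tuning a pretrained DINOv2 ViT for 5-class DR staging.
import torch
import torch.nn as nn

class DRStager(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        # Self-supervised pretrained backbone (DINOv2 ViT-S/14 from the official hub).
        self.backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
        self.head = nn.Linear(self.backbone.embed_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)       # CLS-token embedding, shape (batch, embed_dim)
        return self.head(feats)        # logits for DR grades 0-4

model = DRStager()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, grades: torch.Tensor) -> float:
    """One fine-tuning step on a batch of fundus images (N, 3, 224, 224) and grades (N,)."""
    optimizer.zero_grad()
    loss = criterion(model(images), grades)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a multi-source domain setting, each batch would mix samples from several source datasets so that the fine-tuned backbone is exposed to the distribution shifts it must generalize across.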
Affiliation(s)
- Yevgeniy Men
- Andrew and Erna Viterbi Faculty of Electrical & Computer Engineering, Technion, Israel Institute of Technology, Haifa 3200003, Israel
- Faculty of Biomedical Engineering, Technion, Israel Institute of Technology, Haifa 3200003, Israel
- Jonathan Fhima
- Faculty of Biomedical Engineering, Technion, Israel Institute of Technology, Haifa 3200003, Israel
- Department of Applied Mathematics, Technion, Israel Institute of Technology, Haifa 3200003, Israel
- Leo Anthony Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA 02139, United States of America
- Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, MA 02215, United States of America
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA 02215, United States of America
- Lucas Zago Ribeiro
- Ophthalmology department, São Paulo Federal University, Street, São Paulo 610101, Brazil
- Luis Filipe Nakayama
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA 02139, United States of America
- Ophthalmology department, São Paulo Federal University, Street, São Paulo 610101, Brazil
- Joachim A Behar
- Faculty of Biomedical Engineering, Technion, Israel Institute of Technology, Haifa 3200003, Israel
8
Gholami S, Jannat FE, Thompson AC, Ong SSY, Lim JI, Leng T, Tabkhivayghan H, Alam MN. Distributed training of foundation models for ophthalmic diagnosis. Commun Eng 2025; 4:6. [PMID: 39843622 PMCID: PMC11754456 DOI: 10.1038/s44172-025-00341-5]
Abstract
Vision impairment affects nearly 2.2 billion people globally, and nearly half of these cases could be prevented with early diagnosis and intervention-underscoring the urgent need for reliable and scalable detection methods for conditions like diabetic retinopathy and age-related macular degeneration. Here we propose a distributed deep learning framework that integrates self-supervised and domain-adaptive federated learning to enhance the detection of eye diseases from optical coherence tomography images. We employed a self-supervised, mask-based pre-training strategy to develop a robust foundation encoder. This encoder was trained on seven optical coherence tomography datasets, and we compared its performance under local, centralized, and federated learning settings. Our results show that self-supervised methods-both centralized and federated-improved the area under the curve by at least 10% compared to local models. Additionally, incorporating domain adaptation into the federated learning framework further boosted performance and generalization across different populations and imaging conditions. This approach supports collaborative model development without data sharing, providing a scalable, privacy-preserving solution for effective retinal disease screening and diagnosis in diverse clinical settings.
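As a rough illustration of the collaborative training described here, the sketch below applies federated averaging to a shared encoder so that only model weights, never OCT images, leave each site. The weighting by local dataset size and the surrounding bookkeeping are assumptions for illustration, not the authors' framework.

```python
# Minimal sketch (assumed setup): federated averaging (FedAvg) of encoder weights across sites.
from typing import Dict, List
import torch

def fedavg(client_states: List[Dict[str, torch.Tensor]],
           client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """Average locally trained encoder state dicts, weighted by local dataset size."""
    total = float(sum(client_sizes))
    merged: Dict[str, torch.Tensor] = {}
    for key in client_states[0]:
        merged[key] = sum(
            state[key].float() * (size / total)
            for state, size in zip(client_states, client_sizes)
        )
    return merged

# Usage: each site pre-trains its copy of the encoder on local OCT scans (e.g. with a
# masked-image-modelling objective), then the server averages the returned weights:
# global_encoder.load_state_dict(fedavg(states_from_sites, scans_per_site))
```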
Affiliation(s)
- Sina Gholami
- Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC, USA
- Fatema-E Jannat
- Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC, USA
- Sally Shin Yee Ong
- Department of Ophthalmology, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Jennifer I Lim
- Department of Ophthalmology and Visual Science, University of Illinois at Chicago, Chicago, IL, USA
- Theodore Leng
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Stanford, CA, USA
- Hamed Tabkhivayghan
- Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC, USA
- Minhaj Nur Alam
- Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC, USA.
9
Ejaz S, Zia HU, Majeed F, Shafique U, Altamiranda SC, Lipari V, Ashraf I. Fundus image classification using feature concatenation for early diagnosis of retinal disease. Digit Health 2025; 11:20552076251328120. [PMID: 40162178 PMCID: PMC11951903 DOI: 10.1177/20552076251328120]
Abstract
Background Deep learning models assist ophthalmologists in the early detection of diseases from retinal images and in timely treatment. Aim Owing to the robust and accurate results of deep learning models, we aim to use convolutional neural networks (CNNs) to provide a non-invasive method for early detection of eye diseases. Methodology We used a hybridized deep learning model based on two separate CNN blocks to identify optic disc cupping, diabetic retinopathy, media haze, and healthy images. We used the RFMiD dataset, which contains various categories of fundus images representing different eye diseases. Data augmentation, resizing, cropping, and one-hot encoding are used among other preprocessing techniques to improve the performance of the proposed model. Color fundus images are analyzed by the CNNs to extract relevant features. The two CNN models that extract deep features are trained in parallel. To obtain more noticeable features, the gathered features are further fused using the canonical correlation analysis (CCA) fusion approach. To assess effectiveness, we employed eight classification algorithms: gradient boosting, support vector machines, a voting ensemble, medium KNN, naive Bayes, coarse KNN, random forest, and fine KNN. Results With the greatest accuracy of 93.39%, the ensemble learning performed better than the other algorithms. Conclusion The accuracy rates suggest that the deep learning model has learned to distinguish effectively between different eye disease categories and healthy images. It contributes to the field of eye disease detection through the analysis of color fundus images by providing a reliable and efficient diagnostic system.
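The fusion step outlined above can be pictured with the following minimal sketch, which fuses the deep features of the two CNN branches by canonical correlation analysis (CCA) and hands the fused vector to one of the listed classifiers. The array shapes, number of CCA components, and classifier settings are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch (assumed setup): CCA fusion of two CNN feature sets + ensemble classifier.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.ensemble import GradientBoostingClassifier

def cca_fuse(feats_a: np.ndarray, feats_b: np.ndarray, n_components: int = 64) -> np.ndarray:
    """Project both feature sets onto their most correlated directions and concatenate."""
    cca = CCA(n_components=n_components)
    proj_a, proj_b = cca.fit_transform(feats_a, feats_b)
    return np.concatenate([proj_a, proj_b], axis=1)

# feats_a, feats_b: deep features from the two CNN branches, shapes (n_images, d1) and (n_images, d2)
# fused = cca_fuse(feats_a, feats_b)
# clf = GradientBoostingClassifier().fit(fused, labels)   # labels: disease category per image
```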
Affiliation(s)
- Sara Ejaz
- Department of Information Technology, University of Gujrat, Gujrat, Pakistan
- Hafiz U Zia
- Department of Information Technology, University of Gujrat, Gujrat, Pakistan
- Fiaz Majeed
- Department of Information Technology, University of Gujrat, Gujrat, Pakistan
- Umair Shafique
- Department of Information Technology, University of Gujrat, Gujrat, Pakistan
- Stefania Carvajal Altamiranda
- Universidad Europea del Atlantico, Santander, Spain
- Universidade Internacional do Cuanza, Cuito, Bie, Angola
- Fundacion Universitaria Internacional de Colombia, Bogota, Colombia
- Vivian Lipari
- Universidad Europea del Atlantico, Santander, Spain
- Universidad Internacional Iberoamericana, Campeche, Mexico
- Universidad de La Romana, La Romana, Republica Dominicana
- Imran Ashraf
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan, South Korea
10
Abràmoff MD, Lavin PT, Jakubowski JR, Blodi BA, Keeys M, Joyce C, Folk JC. Mitigation of AI adoption bias through an improved autonomous AI system for diabetic retinal disease. NPJ Digit Med 2024; 7:369. [PMID: 39702673 DOI: 10.1038/s41746-024-01389-x]
Abstract
Where adopted, Autonomous artificial Intelligence (AI) for Diabetic Retinal Disease (DRD) resolves longstanding racial, ethnic, and socioeconomic disparities, but AI adoption bias persists. This preregistered trial determined sensitivity and specificity of a previously FDA authorized AI, improved to compensate for lower contrast and smaller imaged area of a widely adopted, lower cost, handheld fundus camera (RetinaVue700, Baxter Healthcare, Deerfield, IL) to identify DRD in participants with diabetes without known DRD, in primary care. In 626 participants (1252 eyes) 50.8% male, 45.7% Hispanic, 17.3% Black, DRD prevalence was 29.0%, all prespecified non-inferiority endpoints were met and no racial, ethnic or sex bias was identified, against a Wisconsin Reading Center level I prognostic standard using widefield stereoscopic photography and macular Optical Coherence Tomography. Results suggest this improved autonomous AI system can mitigate AI adoption bias, while preserving safety and efficacy, potentially contributing to rapid scaling of health access equity. ClinicalTrials.gov NCT05808699 (3/29/2023).
Affiliation(s)
- Michael D Abràmoff
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, USA.
- Veterans Administration Medical Center, Iowa City, IA, USA.
- Digital Diagnostics, Inc., Coralville, IA, USA.
- Philip T Lavin
- Boston Biostatistics Research Foundation, Inc., Framingham, MA, USA
- Barbara A Blodi
- Department of Ophthalmology and Visual Sciences, Wisconsin Reading Center, University of Wisconsin, Madison, WI, USA
- Mia Keeys
- Department of Public Health, George Washington University, Washington, DC, USA
- Womens' Commissioner, Washington, DC, USA
- Cara Joyce
- Department of Medicine, Stritch School of Medicine, Loyola University Chicago, Chicago, IL, USA
- James C Folk
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, USA
- Veterans Administration Medical Center, Iowa City, IA, USA
11
Goh JHL, Ang E, Srinivasan S, Lei X, Loh J, Quek TC, Xue C, Xu X, Liu Y, Cheng CY, Rajapakse JC, Tham YC. Comparative Analysis of Vision Transformers and Conventional Convolutional Neural Networks in Detecting Referable Diabetic Retinopathy. Ophthalmol Sci 2024; 4:100552. [PMID: 39165694 PMCID: PMC11334703 DOI: 10.1016/j.xops.2024.100552]
Abstract
Objective Vision transformers (ViTs) have shown promising performance in various classification tasks previously dominated by convolutional neural networks (CNNs). However, the performance of ViTs in referable diabetic retinopathy (DR) detection is relatively underexplored. In this study, using retinal photographs, we evaluated the comparative performances of ViTs and CNNs on detection of referable DR. Design Retrospective study. Participants A total of 48 269 retinal images from the open-source Kaggle DR detection dataset, the Messidor-1 dataset and the Singapore Epidemiology of Eye Diseases (SEED) study were included. Methods Using 41 614 retinal photographs from the Kaggle dataset, we developed 5 CNN (Visual Geometry Group 19, ResNet50, InceptionV3, DenseNet201, and EfficientNetV2S) and 4 ViTs models (VAN_small, CrossViT_small, ViT_small, and Hierarchical Vision transformer using Shifted Windows [SWIN]_tiny) for the detection of referable DR. We defined the presence of referable DR as eyes with moderate or worse DR. The comparative performance of all 9 models was evaluated in the Kaggle internal test dataset (with 1045 study eyes), and in 2 external test sets, the SEED study (5455 study eyes) and the Messidor-1 (1200 study eyes). Main Outcome Measures Area under operating characteristics curve (AUC), specificity, and sensitivity. Results Among all models, the SWIN transformer displayed the highest AUC of 95.7% on the internal test set, significantly outperforming the CNN models (all P < 0.001). The same observation was confirmed in the external test sets, with the SWIN transformer achieving AUC of 97.3% in SEED and 96.3% in Messidor-1. When specificity level was fixed at 80% for the internal test, the SWIN transformer achieved the highest sensitivity of 94.4%, significantly better than all the CNN models (sensitivity levels ranging between 76.3% and 83.8%; all P < 0.001). This trend was also consistently observed in both external test sets. Conclusions Our findings demonstrate that ViTs provide superior performance over CNNs in detecting referable DR from retinal photographs. These results point to the potential of utilizing ViT models to improve and optimize retinal photo-based deep learning for referable DR detection. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
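The evaluation reported here (AUC plus sensitivity at a fixed 80% specificity) can be reproduced from per-eye referable-DR probabilities with a short routine such as the sketch below; the variable names and thresholding rule are assumptions, not the study's analysis code.

```python
# Minimal sketch (assumed setup): AUC and sensitivity at a fixed specificity from probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def auc_and_sensitivity_at_specificity(y_true: np.ndarray, y_prob: np.ndarray,
                                       specificity: float = 0.80):
    auc = roc_auc_score(y_true, y_prob)
    fpr, tpr, _ = roc_curve(y_true, y_prob)
    # Specificity = 1 - FPR; report the best sensitivity that meets the specificity floor.
    eligible = fpr <= (1.0 - specificity)
    sensitivity = float(tpr[eligible].max()) if eligible.any() else 0.0
    return auc, sensitivity
```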
Affiliation(s)
- Jocelyn Hui Lin Goh
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Elroy Ang
- School of Computer Science and Engineering, Nanyang Technological University, Singapore, Singapore
- Sahana Srinivasan
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Xiaofeng Lei
- Institute of High-Performance Computing, A∗STAR, Singapore, Singapore
- Johnathan Loh
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Ten Cheer Quek
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Cancan Xue
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Xinxing Xu
- Institute of High-Performance Computing, A∗STAR, Singapore, Singapore
- Yong Liu
- Institute of High-Performance Computing, A∗STAR, Singapore, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School Singapore, Singapore, Singapore
- Jagath C. Rajapakse
- School of Computer Science and Engineering, Nanyang Technological University, Singapore, Singapore
- Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School Singapore, Singapore, Singapore
12
Lepetit-Aimon G, Playout C, Boucher MC, Duval R, Brent MH, Cheriet F. MAPLES-DR: MESSIDOR Anatomical and Pathological Labels for Explainable Screening of Diabetic Retinopathy. Sci Data 2024; 11:914. [PMID: 39179588 PMCID: PMC11343847 DOI: 10.1038/s41597-024-03739-6]
Abstract
Reliable automatic diagnosis of Diabetic Retinopathy (DR) and Macular Edema (ME) is an invaluable asset in improving the rate of monitored patients among at-risk populations and in enabling earlier treatments before the pathology progresses and threatens vision. However, the explainability of screening models is still an open question, and specifically designed datasets are required to support the research. We present MAPLES-DR (MESSIDOR Anatomical and Pathological Labels for Explainable Screening of Diabetic Retinopathy), which contains, for 198 images of the MESSIDOR public fundus dataset, new diagnoses for DR and ME as well as new pixel-wise segmentation maps for 10 anatomical and pathological biomarkers related to DR. This paper documents the design choices and the annotation procedure that produced MAPLES-DR, discusses the interobserver variability and the overall quality of the annotations, and provides guidelines on using the dataset in a machine learning context.
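As one hypothetical example of how pixel-wise biomarker maps such as those released with MAPLES-DR might be consumed in a machine learning pipeline, the sketch below computes the image-area fraction covered by a lesion class from a binary mask. The file layout and mask encoding (one binary PNG per biomarker) are assumptions, not the dataset's documented interface.

```python
# Minimal sketch (assumed mask format): lesion coverage from a binary segmentation map.
import numpy as np
from PIL import Image

def lesion_area_fraction(mask_path: str) -> float:
    """Fraction of image pixels labelled as the given biomarker (e.g. microaneurysms)."""
    mask = np.array(Image.open(mask_path).convert("L")) > 0
    return float(mask.mean())

# Example with a hypothetical path:
# frac = lesion_area_fraction("maples_dr/microaneurysms/image_001.png")
```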
Affiliation(s)
- Gabriel Lepetit-Aimon
- Department of Computer and Software Engineering, Polytechnique Montréal, Montréal, QC, Canada.
- Clément Playout
- Department of Ophthalmology, Université de Montréal, Montréal, Canada
- Centre Universitaire d'Ophtalmologie, Hôpital Maisonneuve-Rosemont, Montréal, Canada
- Marie Carole Boucher
- Department of Ophthalmology, Université de Montréal, Montréal, Canada
- Centre Universitaire d'Ophtalmologie, Hôpital Maisonneuve-Rosemont, Montréal, Canada
- Renaud Duval
- Department of Ophthalmology, Université de Montréal, Montréal, Canada
- Centre Universitaire d'Ophtalmologie, Hôpital Maisonneuve-Rosemont, Montréal, Canada
- Michael H Brent
- Department of Ophthalmology and Vision Science, University of Toronto, Toronto, Canada
- Farida Cheriet
- Department of Computer and Software Engineering, Polytechnique Montréal, Montréal, QC, Canada
13
Verbeek S, Dalvin LA. Advances in multimodal imaging for diagnosis of pigmented ocular fundus lesions. Can J Ophthalmol 2024; 59:218-233. [PMID: 37480939 PMCID: PMC10796850 DOI: 10.1016/j.jcjo.2023.07.005]
Abstract
Pigmented ocular fundus lesions can range from benign to malignant. While observation is reasonable for asymptomatic benign lesions, early recognition of tumours that are vision or life threatening is critical for long-term prognosis. With recent advances and increased accessibility of multimodal imaging, it is important that providers understand how to best use these tools to detect tumours that require early referral to subspecialty centres. This review aims to provide an overview of pigmented ocular fundus lesions and their defining characteristics using multimodal imaging. We cover the spectrum of pigmented ocular fundus lesions, including freckle and focal aggregates of normal or near-normal uveal melanocytes, retinal pigment epithelium (RPE) hyperplasia, congenital hypertrophy of the RPE, RPE hamartoma associated with familial adenomatous polyposis, congenital simple hamartoma of the RPE, combined hamartoma of the retina and RPE (congenital hypertrophy of the RPE), choroidal nevus, melanocytosis, melanocytoma, melanoma, adenoma, and RPE adenocarcinoma. We describe key diagnostic features using multimodal imaging modalities of ultra-widefield fundus photography, fundus autofluorescence, optical coherence tomography (OCT), enhanced-depth imaging OCT, ultrasonography, fluorescein angiography, indocyanine green angiography, and OCT angiography (OCTA), with particular attention to diagnostic features that could be missed on fundus examination alone. Finally, we review what is on the horizon, including applications of artificial intelligence. Through skilled application of current and emerging imaging technologies, earlier detection of sight- and life-threatening melanocytic ocular fundus tumours can lead to improved patient prognosis.
Affiliation(s)
- Sara Verbeek
- Department of Ophthalmology, Mayo Clinic, Rochester, MN
14
Serikbaeva A, Li Y, Ma S, Yi D, Kazlauskas A. Resilience to diabetic retinopathy. Prog Retin Eye Res 2024; 101:101271. [PMID: 38740254 PMCID: PMC11262066 DOI: 10.1016/j.preteyeres.2024.101271]
Abstract
Chronic elevation of blood glucose at first causes relatively minor changes to the neural and vascular components of the retina. As the duration of hyperglycemia persists, the nature and extent of damage increases and becomes readily detectable. While this second, overt manifestation of diabetic retinopathy (DR) has been studied extensively, what prevents maximal damage from the very start of hyperglycemia remains largely unexplored. Recent studies indicate that diabetes (DM) engages mitochondria-based defense during the retinopathy-resistant phase, and thereby enables the retina to remain healthy in the face of hyperglycemia. Such resilience is transient, and its deterioration results in progressive accumulation of retinal damage. The concepts that co-emerge with these discoveries set the stage for novel intellectual and therapeutic opportunities within the DR field. Identification of biomarkers and mediators of protection from DM-mediated damage will enable development of resilience-based therapies that will indefinitely delay the onset of DR.
Affiliation(s)
- Anara Serikbaeva
- Department of Physiology and Biophysics, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612, USA
- Yanliang Li
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612, USA
- Simon Ma
- Department of Bioengineering, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612, USA
- Darvin Yi
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612, USA; Department of Bioengineering, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612, USA
- Andrius Kazlauskas
- Department of Physiology and Biophysics, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612, USA; Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612, USA.
| |
Collapse
|
15
|
Papazafiropoulou AK. Diabetes management in the era of artificial intelligence. Arch Med Sci Atheroscler Dis 2024; 9:e122-e128. [PMID: 39086621 PMCID: PMC11289240 DOI: 10.5114/amsad/183420] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2024] [Accepted: 01/29/2024] [Indexed: 08/02/2024] Open
Abstract
Artificial intelligence is growing quickly, and its application in the global diabetes pandemic has the potential to completely change the way this chronic illness is identified and treated. Machine learning methods have been used to construct algorithms supporting predictive models for the risk of getting diabetes or its complications. Social media and Internet forums also increase patient participation in diabetes care. Diabetes resource usage optimisation has benefited from technological improvements. As a lifestyle therapy intervention, digital therapies have made a name for themselves in the treatment of diabetes. Artificial intelligence will cause a paradigm shift in diabetes care, moving away from current methods and toward the creation of focused, data-driven precision treatment.
16
Doğan ME, Bilgin AB, Sari R, Bulut M, Akar Y, Aydemir M. Head to head comparison of diagnostic performance of three non-mydriatic cameras for diabetic retinopathy screening with artificial intelligence. Eye (Lond) 2024; 38:1694-1701. [PMID: 38467864 PMCID: PMC11156854 DOI: 10.1038/s41433-024-03000-9]
Abstract
BACKGROUND Diabetic Retinopathy (DR) is a leading cause of blindness worldwide, affecting people with diabetes. The timely diagnosis and treatment of DR are essential in preventing vision loss. Non-mydriatic fundus cameras and artificial intelligence (AI) software have been shown to improve DR screening efficiency. However, few studies have compared the diagnostic performance of different non-mydriatic cameras and AI software. METHODS This clinical study was conducted at the endocrinology clinic of Akdeniz University with 900 volunteer patients who had previously been diagnosed with diabetes but not with diabetic retinopathy. Fundus images of each patient were taken using three non-mydriatic fundus cameras, and EyeCheckup AI software was used to diagnose more than mild diabetic retinopathy, vision-threatening diabetic retinopathy, and clinically significant diabetic macular oedema using images from all three cameras. Patients then underwent dilation, and four wide-field fundus photographs were taken. Three retina specialists graded the four wide-field fundus images according to the diabetic retinopathy treatment preferred practice patterns of the American Academy of Ophthalmology. The study was pre-registered on clinicaltrials.gov with the ClinicalTrials.gov identifier NCT04805541. RESULTS The Canon CR2 AF camera had a sensitivity and specificity of 95.65% / 95.92% for diagnosing more than mild DR, the Topcon TRC-NW400 had 95.19% / 96.46%, and the Optomed Aurora had 90.48% / 97.21%. For vision-threatening diabetic retinopathy, the Canon CR2 AF had a sensitivity and specificity of 96.00% / 96.34%, the Topcon TRC-NW400 had 98.52% / 95.93%, and the Optomed Aurora had 95.12% / 98.82%. For clinically significant diabetic macular oedema, the Canon CR2 AF had a sensitivity and specificity of 95.83% / 96.83%, the Topcon TRC-NW400 had 98.50% / 96.52%, and the Optomed Aurora had 94.93% / 98.95%. CONCLUSION The study demonstrates the potential of using non-mydriatic fundus cameras combined with artificial intelligence software in detecting diabetic retinopathy. Several cameras were tested and, notably, each camera exhibited varying but adequate levels of sensitivity and specificity. The Canon CR2 AF emerged with the highest accuracy in identifying both more than mild diabetic retinopathy and vision-threatening cases, while the Topcon TRC-NW400 excelled in detecting clinically significant diabetic macular oedema. The findings from this study emphasize the importance of considering non-mydriatic cameras and artificial intelligence software for diabetic retinopathy screening. However, further research is imperative to explore additional factors influencing the efficiency of DR screening using AI and non-mydriatic cameras, such as the costs involved and the effects of screening in an ethnically diverse population.
Affiliation(s)
- Mehmet Erkan Doğan
- Department of Ophthalmology, Akdeniz University Faculty of Medicine, Antalya, Turkey.
- Ahmet Burak Bilgin
- Department of Ophthalmology, Akdeniz University Faculty of Medicine, Antalya, Turkey
- Ramazan Sari
- Endocrinology and Metabolic Department, Akdeniz University Faculty of Medicine, Antalya, Turkey
- Mehmet Bulut
- Department of Ophthalmology, Antalya Training and Research Hospital, Antalya, Turkey
- Yusuf Akar
- Endocrinology and Metabolic Department, Akdeniz University Faculty of Medicine, Antalya, Turkey
- Mustafa Aydemir
- Department of Ophthalmology, Akdeniz University Faculty of Medicine, Antalya, Turkey
17
Oulhadj M, Riffi J, Khodriss C, Mahraz AM, Yahyaouy A, Abdellaoui M, Andaloussi IB, Tairi H. Diabetic retinopathy prediction based on vision transformer and modified capsule network. Comput Biol Med 2024; 175:108523. [PMID: 38701591 DOI: 10.1016/j.compbiomed.2024.108523]
Abstract
Diabetic retinopathy is considered one of the most common diseases that can lead to blindness in people of working age, and the chance of developing it increases the longer a person suffers from diabetes. Protecting the patient's sight or decelerating the evolution of this disease depends on its early detection as well as on identifying the exact level of this pathology, which is done manually by ophthalmologists. This manual process is very demanding in terms of the time and experience of an expert ophthalmologist, which makes developing an automated method to aid in the diagnosis of diabetic retinopathy an essential and urgent need. In this paper, we propose a new hybrid deep learning method based on a fine-tuned vision transformer and a modified capsule network for automatic diabetic retinopathy severity level prediction. The preprocessing step of the proposed approach consists of a range of computer vision operations, including the power-law transformation technique and the contrast-limited adaptive histogram equalization technique, while the classification step builds on a fine-tuned vision transformer and a modified capsule network combined with a classification model. The effectiveness of our approach was evaluated on four datasets, the APTOS, Messidor-2, DDR, and EyePACS datasets, for the task of grading the severity levels of diabetic retinopathy. We attained excellent test accuracy scores of 88.18%, 87.78%, 80.36%, and 78.64% on the four datasets, respectively. Comparing our results with the state of the art, we reached better performance.
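The two preprocessing operations named in this abstract, a power-law (gamma) transformation and contrast-limited adaptive histogram equalization (CLAHE), might look like the sketch below when applied to a colour fundus image; the gamma value, clip limit, and tile size are illustrative assumptions rather than the authors' settings.

```python
# Minimal sketch (assumed parameters): power-law transform + CLAHE on a fundus image.
import cv2
import numpy as np

def preprocess_fundus(path: str, gamma: float = 1.5) -> np.ndarray:
    img = cv2.imread(path)                                        # BGR, uint8
    # Power-law (gamma) transformation to adjust global brightness/contrast.
    img = np.clip(255.0 * (img / 255.0) ** gamma, 0, 255).astype(np.uint8)
    # CLAHE applied to the luminance channel only.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge([clahe.apply(l), a, b]), cv2.COLOR_LAB2BGR)
```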
Collapse
Affiliation(s)
- Mohammed Oulhadj
- LISAC Laboratory, Department of Informatics, Faculty of Sciences Dhar El Mahraz, Sidi Mohamed Ben Abdellah University, Fez, Morocco.
| | - Jamal Riffi
- LISAC Laboratory, Department of Informatics, Faculty of Sciences Dhar El Mahraz, Sidi Mohamed Ben Abdellah University, Fez, Morocco
| | - Chaimae Khodriss
- LISAC Laboratory, Department of Informatics, Faculty of Sciences Dhar El Mahraz, Sidi Mohamed Ben Abdellah University, Fez, Morocco; Ophthalmology Department, CHU Mohamed VI, Faculty of Medicine and Pharmacy, Abdelmalek Essaadi University, Tangier, Morocco
| | - Adnane Mohamed Mahraz
- LISAC Laboratory, Department of Informatics, Faculty of Sciences Dhar El Mahraz, Sidi Mohamed Ben Abdellah University, Fez, Morocco
| | - Ali Yahyaouy
- LISAC Laboratory, Department of Informatics, Faculty of Sciences Dhar El Mahraz, Sidi Mohamed Ben Abdellah University, Fez, Morocco
| | - Meriem Abdellaoui
- Ophthalmology Department, Hassan II Hospital, Sidi Mohamed Ben Abdellah University, Fez, Morocco
| | | | - Hamid Tairi
- LISAC Laboratory, Department of Informatics, Faculty of Sciences Dhar El Mahraz, Sidi Mohamed Ben Abdellah University, Fez, Morocco
| |
Collapse
|
18
|
Romero-Oraá R, Herrero-Tudela M, López MI, Hornero R, García M. Attention-based deep learning framework for automatic fundus image processing to aid in diabetic retinopathy grading. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 249:108160. [PMID: 38583290 DOI: 10.1016/j.cmpb.2024.108160] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/06/2023] [Revised: 01/26/2024] [Accepted: 03/30/2024] [Indexed: 04/09/2024]
Abstract
BACKGROUND AND OBJECTIVE Early detection and grading of diabetic retinopathy (DR) is essential to determine an adequate treatment and prevent severe vision loss. However, the manual analysis of fundus images is time-consuming, and DR screening programs are challenged by the availability of human graders. Current automatic approaches for DR grading attempt the joint detection of all signs at the same time. However, the classification can be optimized if red lesions and bright lesions are processed independently, since the task is divided and simplified. Furthermore, clinicians would greatly benefit from explainable artificial intelligence (XAI) to support the automatic model predictions, especially when the type of lesion is specified. As a novelty, we propose an end-to-end deep learning framework for automatic DR grading (5 severity degrees) based on separating the attention of the dark structures from the bright structures of the retina. As the main contribution, this approach allowed us to generate independent, interpretable attention maps for red lesions, such as microaneurysms and hemorrhages, and bright lesions, such as hard exudates, while using image-level labels only. METHODS Our approach is based on a novel attention mechanism which focuses separately on the dark and the bright structures of the retina by first performing an image decomposition. This mechanism can be seen as an XAI approach which generates independent attention maps for red lesions and bright lesions. The framework includes an image quality assessment stage and deep learning-related techniques, such as data augmentation, transfer learning and fine-tuning. We used the Xception architecture as a feature extractor and the focal loss function to deal with data imbalance. RESULTS The Kaggle DR detection dataset was used for method development and validation. The proposed approach achieved 83.7% accuracy and a Quadratic Weighted Kappa of 0.78 for classifying DR among 5 severity degrees, which outperforms several state-of-the-art approaches. Nevertheless, the main result of this work is the generated attention maps, which reveal the pathological regions on the image, distinguishing the red lesions from the bright lesions. These maps provide explainability for the model predictions. CONCLUSIONS Our results suggest that our framework is effective for automatically grading DR. The separate attention approach has proven useful for optimizing the classification. On top of that, the obtained attention maps facilitate visual interpretation for clinicians. Therefore, the proposed method could be a diagnostic aid for the early detection and grading of DR.
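The two headline metrics reported above, accuracy and the Quadratic Weighted Kappa for 5-level grading, can be computed as in this minimal sketch; the label vectors are fabricated placeholders, not study data.

```python
# Illustrative computation of accuracy and Quadratic Weighted Kappa for ordinal DR grades.
from sklearn.metrics import accuracy_score, cohen_kappa_score

y_true = [0, 1, 2, 2, 3, 4, 0, 1]   # reference grades (0 = no DR ... 4 = proliferative DR)
y_pred = [0, 1, 2, 3, 3, 4, 0, 2]   # model grades

acc = accuracy_score(y_true, y_pred)
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"Accuracy {acc:.1%}, Quadratic Weighted Kappa {qwk:.2f}")
```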
Collapse
Affiliation(s)
- Roberto Romero-Oraá
- Biomedical Engineering Group, University of Valladolid, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain.
| | - María Herrero-Tudela
- Biomedical Engineering Group, University of Valladolid, Valladolid, 47011, Spain
| | - María I López
- Biomedical Engineering Group, University of Valladolid, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
| | - Roberto Hornero
- Biomedical Engineering Group, University of Valladolid, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
| | - María García
- Biomedical Engineering Group, University of Valladolid, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
| |
Collapse
|
19
|
Roubelat FP, Soler V, Varenne F, Gualino V. Real-world artificial intelligence-based interpretation of fundus imaging as part of an eyewear prescription renewal protocol. J Fr Ophtalmol 2024; 47:104130. [PMID: 38461084 DOI: 10.1016/j.jfo.2024.104130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2023] [Revised: 11/17/2023] [Accepted: 11/23/2023] [Indexed: 03/11/2024]
Abstract
OBJECTIVE A real-world evaluation of the diagnostic accuracy of the Opthai® software for artificial intelligence-based detection of fundus image abnormalities in the context of the French eyewear prescription renewal protocol (RNO). METHODS A single-center, retrospective review of the sensitivity and specificity of the software in detecting fundus abnormalities among consecutive patients seen in our ophthalmology center under the RNO protocol from July 28 through October 22, 2021. We compared abnormalities detected by the software operated by ophthalmic technicians (index test) with diagnoses confirmed by the ophthalmologist following additional examinations and/or consultation (reference test). RESULTS The study included 2056 eyes/fundus images of 1028 patients aged 6-50 years. The software detected fundus abnormalities in 149 (7.2%) eyes or 107 (10.4%) patients. After examining the same fundus images, the ophthalmologist detected abnormalities in 35 (1.7%) eyes or 20 (1.9%) patients. The ophthalmologist did not detect abnormalities in fundus images deemed normal by the software. The most frequent diagnoses made by the ophthalmologist were glaucoma suspect (0.5% of eyes), peripapillary atrophy (0.44% of eyes), and drusen (0.39% of eyes). The software showed an overall sensitivity of 100% (95% CI 0.879-1.00) and an overall specificity of 94.4% (95% CI 0.933-0.953). The majority of false-positive software detections (5.6%) were glaucoma suspect, with the differential diagnosis of large physiological optic cups. Immediate OCT imaging by the technician allowed diagnosis by the ophthalmologist without a separate consultation for 43/53 (81%) patients. CONCLUSION Ophthalmic technicians can use this software for highly sensitive screening for fundus abnormalities that require evaluation by an ophthalmologist.
Collapse
Affiliation(s)
- F-P Roubelat
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
| | - V Soler
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
| | - F Varenne
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
| | - V Gualino
- Ophthalmology Department, Clinique Honoré-Cave, Montauban, France.
| |
Collapse
|
20
|
La Franca L, Rutigliani C, Checchin L, Lattanzio R, Bandello F, Cicinelli MV. Rate and Predictors of Misclassification of Active Diabetic Macular Edema as Detected by an Automated Retinal Image Analysis System. Ophthalmol Ther 2024; 13:1553-1567. [PMID: 38587776 PMCID: PMC11109071 DOI: 10.1007/s40123-024-00929-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2023] [Accepted: 03/07/2024] [Indexed: 04/09/2024] Open
Abstract
INTRODUCTION The aim of this work is to estimate the sensitivity, specificity, and misclassification rate of an automated retinal image analysis system (ARIAS) in diagnosing active diabetic macular edema (DME) and to identify factors associated with true and false positives. METHODS We conducted a cross-sectional study of prospectively enrolled patients with diabetes mellitus (DM) referred to a tertiary medical retina center for screening or management of DME. All patients underwent two-field fundus photography (macula- and disc-centered) with a true-color confocal camera; images were processed by EyeArt V.2.1.0 (Woodland Hills, CA, USA). Active DME was defined as the presence of intraretinal or subretinal fluid on spectral-domain optical coherence tomography (SD-OCT). Sensitivity and specificity and their 95% confidence intervals (CIs) were calculated. Variables associated with true positives (i.e., DME labeled as present by ARIAS plus fluid on SD-OCT) and false positives (i.e., DME labeled as present by ARIAS without fluid on SD-OCT) of active DME were explored. RESULTS A total of 298 eyes were included; 92 eyes (31%) had active DME. ARIAS sensitivity and specificity were 82.61% (95% CI 72.37-89.60) and 84.47% (95% CI 78.34-89.10). The misclassification rate was 16%. Factors associated with true positives included younger age (p = 0.01), shorter DM duration (p = 0.006), presence of hard exudates (p = 0.005), and microaneurysms (p = 0.002). Factors associated with false positives included longer DM duration (p = 0.01), worse diabetic retinopathy severity (p = 0.008), history of inactivated DME (p < 0.001), and presence of hard exudates (p < 0.001), microaneurysms (p < 0.001), or epiretinal membrane (p = 0.06). CONCLUSIONS The sensitivity of ARIAS was diminished in older patients and those without DME-related fundus lesions, while the specificity was reduced in cases with a history of inactivated DME. ARIAS performed well in screening for naïve DME but is not effective for surveillance of inactivated DME.
Collapse
Affiliation(s)
- Lamberto La Franca
- Department of Ophthalmology, IRCCS San Raffaele Scientific Institute, IRCCS Ospedale San Raffaele, University Vita-Salute, Via Olgettina 60, 20132, Milan, Italy
| | - Carola Rutigliani
- School of Medicine, Vita-Salute San Raffaele University, Milan, Italy
| | - Lisa Checchin
- Department of Ophthalmology, IRCCS San Raffaele Scientific Institute, IRCCS Ospedale San Raffaele, University Vita-Salute, Via Olgettina 60, 20132, Milan, Italy
| | - Rosangela Lattanzio
- Department of Ophthalmology, IRCCS San Raffaele Scientific Institute, IRCCS Ospedale San Raffaele, University Vita-Salute, Via Olgettina 60, 20132, Milan, Italy
| | - Francesco Bandello
- Department of Ophthalmology, IRCCS San Raffaele Scientific Institute, IRCCS Ospedale San Raffaele, University Vita-Salute, Via Olgettina 60, 20132, Milan, Italy
- School of Medicine, Vita-Salute San Raffaele University, Milan, Italy
| | - Maria Vittoria Cicinelli
- Department of Ophthalmology, IRCCS San Raffaele Scientific Institute, IRCCS Ospedale San Raffaele, University Vita-Salute, Via Olgettina 60, 20132, Milan, Italy.
- School of Medicine, Vita-Salute San Raffaele University, Milan, Italy.
| |
Collapse
|
21
|
Mhibik B, Kouadio D, Jung C, Bchir C, Toutée A, Maestri F, Gulic K, Miere A, Falcione A, Touati M, Monnet D, Bodaghi B, Touhami S. AUTOMATED DETECTION OF VITRITIS USING ULTRAWIDE-FIELD FUNDUS PHOTOGRAPHS AND DEEP LEARNING. Retina 2024; 44:1034-1044. [PMID: 38261816 DOI: 10.1097/iae.0000000000004049] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2024]
Abstract
BACKGROUND/PURPOSE To evaluate the performance of a deep learning algorithm for the automated detection and grading of vitritis on ultrawide-field imaging. METHODS Cross-sectional noninterventional study. Ultrawide-field fundus retinophotographs of uveitis patients were used. Vitreous haze was defined according to the six steps of the Standardization of Uveitis Nomenclature classification. The deep learning framework TensorFlow and the DenseNet121 convolutional neural network were used to perform the classification task. The best-fitting model was tested in a validation study. RESULTS One thousand one hundred eighty-one images were included. The performance of the model for the detection of vitritis was good, with a sensitivity of 91%, a specificity of 89%, an accuracy of 0.90, and an area under the receiver operating characteristic curve of 0.97. When used on an external set of images, the accuracy for the detection of vitritis was 0.78. The accuracy for classifying vitritis into one of the six Standardization of Uveitis Nomenclature grades was limited (0.61) but improved to 0.75 when the grades were grouped into three categories. When accepting an error of one grade, the accuracy for the six-class classification increased to 0.90, suggesting the need for a larger sample to improve the model performance. CONCLUSION We describe a new deep learning model based on ultrawide-field fundus imaging that provides an efficient tool for the detection of vitritis. The performance of the model for grading into three categories of increasing vitritis severity was acceptable. The performance for the six-class grading of vitritis was limited but can probably be improved with a larger set of images.
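A minimal transfer-learning sketch in the spirit of the pipeline described above (TensorFlow with a DenseNet121 backbone and a six-class softmax head for the Standardization of Uveitis Nomenclature grades); the input size, optimizer, dropout rate and commented training call are assumptions, not details from the paper.

```python
# Hedged sketch: DenseNet121 backbone with a six-class head for vitreous-haze grading.
import tensorflow as tf

base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone; fine-tune later if desired

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(6, activation="softmax"),  # six SUN vitreous-haze grades
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets are assumed
```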
Collapse
Affiliation(s)
- Bayram Mhibik
- Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
| | - Desire Kouadio
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Créteil, France
| | - Camille Jung
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Créteil, France
| | - Chemsedine Bchir
- Department of Mathematics and Engineering Applications, Sorbonne Université, Paris, France ; and
| | - Adelaide Toutée
- Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
| | - Federico Maestri
- Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
| | - Karmen Gulic
- Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
| | - Alexandra Miere
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Créteil, France
| | - Alessandro Falcione
- Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
| | - Myriam Touati
- Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
| | - Dominique Monnet
- Department of Ophthalmology, Université de Paris, Cochin University Hospital, Paris, France
| | - Bahram Bodaghi
- Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
| | - Sara Touhami
- Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
| |
Collapse
|
22
|
Parmar UPS, Surico PL, Singh RB, Romano F, Salati C, Spadea L, Musa M, Gagliano C, Mori T, Zeppieri M. Artificial Intelligence (AI) for Early Diagnosis of Retinal Diseases. MEDICINA (KAUNAS, LITHUANIA) 2024; 60:527. [PMID: 38674173 PMCID: PMC11052176 DOI: 10.3390/medicina60040527] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2024] [Revised: 03/12/2024] [Accepted: 03/21/2024] [Indexed: 04/28/2024]
Abstract
Artificial intelligence (AI) has emerged as a transformative tool in the field of ophthalmology, revolutionizing disease diagnosis and management. This paper provides a comprehensive overview of AI applications in various retinal diseases, highlighting its potential to enhance screening efficiency, facilitate early diagnosis, and improve patient outcomes. Herein, we elucidate the fundamental concepts of AI, including machine learning (ML) and deep learning (DL), and their application in ophthalmology, underscoring the significance of AI-driven solutions in addressing the complexity and variability of retinal diseases. Furthermore, we delve into the specific applications of AI in retinal diseases such as diabetic retinopathy (DR), age-related macular degeneration (AMD), macular neovascularization, retinopathy of prematurity (ROP), retinal vein occlusion (RVO), hypertensive retinopathy (HR), retinitis pigmentosa, Stargardt disease, Best vitelliform macular dystrophy, and sickle cell retinopathy. We focus on the current landscape of AI technologies, including various AI models, their performance metrics, and clinical implications. Furthermore, we aim to address challenges and pitfalls associated with the integration of AI in clinical practice, including the "black box phenomenon", biases in data representation, and limitations in comprehensive patient assessment. In conclusion, this review emphasizes the collaborative role of AI alongside healthcare professionals, advocating for a synergistic approach to healthcare delivery. It highlights the importance of leveraging AI to augment, rather than replace, human expertise, thereby maximizing its potential to revolutionize healthcare delivery, mitigate healthcare disparities, and improve patient outcomes in the evolving landscape of medicine.
Collapse
Affiliation(s)
| | - Pier Luigi Surico
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, 00128 Rome, Italy
| | - Rohan Bir Singh
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
| | - Francesco Romano
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
| | - Carlo Salati
- Department of Ophthalmology, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
| | - Leopoldo Spadea
- Eye Clinic, Policlinico Umberto I, “Sapienza” University of Rome, 00142 Rome, Italy
| | - Mutali Musa
- Department of Optometry, University of Benin, Benin City 300238, Edo State, Nigeria
| | - Caterina Gagliano
- Faculty of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Eye Clinic, Catania University, San Marco Hospital, Viale Carlo Azeglio Ciampi, 95121 Catania, Italy
| | - Tommaso Mori
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, 00128 Rome, Italy
- Department of Ophthalmology, University of California San Diego, La Jolla, CA 92122, USA
| | - Marco Zeppieri
- Department of Ophthalmology, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
| |
Collapse
|
23
|
Song A, Lusk JB, Roh KM, Hsu ST, Valikodath NG, Lad EM, Muir KW, Engelhard MM, Limkakeng AT, Izatt JA, McNabb RP, Kuo AN. RobOCTNet: Robotics and Deep Learning for Referable Posterior Segment Pathology Detection in an Emergency Department Population. Transl Vis Sci Technol 2024; 13:12. [PMID: 38488431 PMCID: PMC10946693 DOI: 10.1167/tvst.13.3.12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Accepted: 01/31/2024] [Indexed: 03/19/2024] Open
Abstract
Purpose To evaluate the diagnostic performance of a robotically aligned optical coherence tomography (RAOCT) system coupled with a deep learning model in detecting referable posterior segment pathology in OCT images of emergency department patients. Methods A deep learning model, RobOCTNet, was trained and internally tested to classify OCT images as referable versus non-referable for ophthalmology consultation. For external testing, emergency department patients with signs or symptoms warranting evaluation of the posterior segment were imaged with RAOCT. RobOCTNet was used to classify the images. Model performance was evaluated against a reference standard based on clinical diagnosis and retina specialist OCT review. Results We included 90,250 OCT images for training and 1489 images for internal testing. RobOCTNet achieved an area under the curve (AUC) of 1.00 (95% confidence interval [CI], 0.99-1.00) for detection of referable posterior segment pathology in the internal test set. For external testing, RAOCT was used to image 72 eyes of 38 emergency department patients. In this set, RobOCTNet had an AUC of 0.91 (95% CI, 0.82-0.97), a sensitivity of 95% (95% CI, 87%-100%), and a specificity of 76% (95% CI, 62%-91%). The model's performance was comparable to two human experts' performance. Conclusions A robotically aligned OCT coupled with a deep learning model demonstrated high diagnostic performance in detecting referable posterior segment pathology in a cohort of emergency department patients. Translational Relevance Robotically aligned OCT coupled with a deep learning model may have the potential to improve emergency department patient triage for ophthalmology referral.
Collapse
Affiliation(s)
- Ailin Song
- Duke University School of Medicine, Durham, NC, USA
- Department of Ophthalmology, Duke University, Durham, NC, USA
| | - Jay B. Lusk
- Duke University School of Medicine, Durham, NC, USA
| | - Kyung-Min Roh
- Department of Ophthalmology, Duke University, Durham, NC, USA
| | - S. Tammy Hsu
- Department of Ophthalmology, Duke University, Durham, NC, USA
| | | | - Eleonora M. Lad
- Department of Ophthalmology, Duke University, Durham, NC, USA
| | - Kelly W. Muir
- Department of Ophthalmology, Duke University, Durham, NC, USA
| | - Matthew M. Engelhard
- Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, USA
| | | | - Joseph A. Izatt
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
| | - Ryan P. McNabb
- Department of Ophthalmology, Duke University, Durham, NC, USA
| | - Anthony N. Kuo
- Department of Ophthalmology, Duke University, Durham, NC, USA
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
| |
Collapse
|
24
|
Pradeep K, Jeyakumar V, Bhende M, Shakeel A, Mahadevan S. Artificial intelligence and hemodynamic studies in optical coherence tomography angiography for diabetic retinopathy evaluation: A review. Proc Inst Mech Eng H 2024; 238:3-21. [PMID: 38044619 DOI: 10.1177/09544119231213443] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/05/2023]
Abstract
Diabetic retinopathy (DR) is a rapidly emerging retinal abnormality worldwide, which can cause significant vision loss by disrupting the vascular structure in the retina. Recently, optical coherence tomography angiography (OCTA) has emerged as an effective imaging tool for diagnosing and monitoring DR. OCTA produces high-quality 3-dimensional images and provides deeper visualization of retinal vessel capillaries and plexuses. The clinical relevance of OCTA in detecting, classifying, and planning therapeutic procedures for DR patients has been highlighted in various studies. Quantitative indicators obtained from OCTA, such as blood vessel segmentation of the retina, foveal avascular zone (FAZ) extraction, retinal blood vessel density, blood velocity, flow rate, capillary vessel pressure, and retinal oxygen extraction, have been identified as crucial hemodynamic features for screening DR using computer-aided systems in artificial intelligence (AI). AI has the potential to assist physicians and ophthalmologists in developing new treatment options. In this review, we explore how OCTA has impacted the future of DR screening and early diagnosis. It also focuses on how analysis methods have evolved over time in clinical trials. The future of OCTA imaging and its continued use in AI-assisted analysis is promising and will undoubtedly enhance the clinical management of DR.
Collapse
Affiliation(s)
- K Pradeep
- Department of Biomedical Engineering, Chennai Institute of Technology, Chennai, Tamil Nadu, India
| | - Vijay Jeyakumar
- Department of Biomedical Engineering, Sri Sivasubramaniya Nadar College of Engineering, Chennai, Tamil Nadu, India
| | - Muna Bhende
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya Medical Research Foundation, Chennai, Tamil Nadu, India
| | - Areeba Shakeel
- Vitreoretina Department, Sankara Nethralaya Medical Research Foundation, Chennai, Tamil Nadu, India
| | - Shriraam Mahadevan
- Department of Endocrinology, Sri Ramachandra Institute of Higher Education and Research, Chennai, Tamil Nadu, India
| |
Collapse
|
25
|
Sarao V, Veritti D, De Nardin A, Misciagna M, Foresti G, Lanzetta P. Explainable artificial intelligence model for the detection of geographic atrophy using colour retinal photographs. BMJ Open Ophthalmol 2023; 8:e001411. [PMID: 38057106 PMCID: PMC10711821 DOI: 10.1136/bmjophth-2023-001411] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2023] [Accepted: 11/22/2023] [Indexed: 12/08/2023] Open
Abstract
OBJECTIVE To develop and validate an explainable artificial intelligence (AI) model for detecting geographic atrophy (GA) via colour retinal photographs. METHODS AND ANALYSIS We conducted a prospective study in which colour fundus images were collected from healthy individuals and patients with retinal diseases using an automated imaging system. All images were categorised into three classes: healthy, GA and other retinal diseases, by two experienced retinologists. Simultaneously, an explainable learning model using class activation mapping techniques categorised each image into one of the three classes. The AI system's performance was then compared with the manual evaluations. RESULTS A total of 540 colour retinal photographs were collected. The data were divided such that 300 images from each class were used to train the AI model, 120 for validation and 120 for performance testing. In distinguishing between GA and healthy eyes, the model demonstrated a sensitivity of 100%, specificity of 97.5% and an overall diagnostic accuracy of 98.4%. Performance metrics such as the areas under the receiver operating characteristic curve (AUC-ROC, 0.988) and the precision-recall curve (AUC-PR, 0.952) reinforced the model's robustness. When differentiating GA from other retinal conditions, the model maintained a diagnostic accuracy of 96.8%, a precision of 90.9% and a recall of 100%, leading to an F1-score of 0.952. The AUC-ROC and AUC-PR scores were 0.975 and 0.909, respectively. CONCLUSIONS Our explainable AI model exhibits excellent performance in detecting GA using colour retinal images. With its high sensitivity, specificity and overall diagnostic accuracy, the AI model stands as a powerful tool for the automated diagnosis of GA.
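The class activation mapping idea mentioned above can be illustrated with a Grad-CAM-style sketch for any Keras CNN classifier; the model, convolutional layer name and class index are placeholders rather than the authors' implementation.

```python
# Hedged Grad-CAM-style sketch: highlight the image regions driving a class prediction.
import numpy as np
import tensorflow as tf

def class_activation_map(model, image, conv_layer_name, class_index):
    """image: float array (H, W, 3); returns a heatmap (h, w) scaled to [0, 1]."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])       # add batch axis
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)                    # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))              # global-average-pooled grads
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)[0]
    cam = tf.nn.relu(cam)                                     # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```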
Collapse
Affiliation(s)
- Valentina Sarao
- Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
- Istituto Europeo di Microchirurgia Oculare (IEMO), Udine, Italy
| | - Daniele Veritti
- Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
| | - Axel De Nardin
- Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
| | - Micaela Misciagna
- Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
| | - Gianluca Foresti
- Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
| | - Paolo Lanzetta
- Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
- Istituto Europeo di Microchirurgia Oculare (IEMO), Udine, Italy
| |
Collapse
|
26
|
Than J, Sim PY, Muttuvelu D, Ferraz D, Koh V, Kang S, Huemer J. Teleophthalmology and retina: a review of current tools, pathways and services. Int J Retina Vitreous 2023; 9:76. [PMID: 38053188 PMCID: PMC10699065 DOI: 10.1186/s40942-023-00502-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2023] [Accepted: 10/02/2023] [Indexed: 12/07/2023] Open
Abstract
Telemedicine, the use of telecommunication and information technology to deliver healthcare remotely, has evolved beyond recognition since its inception in the 1970s. Advances in telecommunication infrastructure, the advent of the Internet, exponential growth in computing power and associated computer-aided diagnosis, and medical imaging developments have created an environment where telemedicine is more accessible and capable than ever before, particularly in the field of ophthalmology. Ever-increasing global demand for ophthalmic services due to population growth and ageing together with insufficient supply of ophthalmologists requires new models of healthcare provision integrating telemedicine to meet present day challenges, with the recent COVID-19 pandemic providing the catalyst for the widespread adoption and acceptance of teleophthalmology. In this review we discuss the history, present and future application of telemedicine within the field of ophthalmology, and specifically retinal disease. We consider the strengths and limitations of teleophthalmology, its role in screening, community and hospital management of retinal disease, patient and clinician attitudes, and barriers to its adoption.
Collapse
Affiliation(s)
- Jonathan Than
- Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
| | - Peng Y Sim
- Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
| | - Danson Muttuvelu
- Department of Ophthalmology, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark
- MitØje ApS/Danske Speciallaeger Aps, Aarhus, Denmark
| | - Daniel Ferraz
- D'Or Institute for Research and Education (IDOR), São Paulo, Brazil
- Institute of Ophthalmology, University College London, London, UK
| | - Victor Koh
- Department of Ophthalmology, National University Hospital, Singapore, Singapore
| | - Swan Kang
- Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
| | - Josef Huemer
- Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK.
- Department of Ophthalmology and Optometry, Kepler University Hospital, Johannes Kepler University, Linz, Austria.
| |
Collapse
|
27
|
Daich Varela M, Sen S, De Guimaraes TAC, Kabiri N, Pontikos N, Balaskas K, Michaelides M. Artificial intelligence in retinal disease: clinical application, challenges, and future directions. Graefes Arch Clin Exp Ophthalmol 2023; 261:3283-3297. [PMID: 37160501 PMCID: PMC10169139 DOI: 10.1007/s00417-023-06052-x] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 03/20/2023] [Accepted: 03/24/2023] [Indexed: 05/11/2023] Open
Abstract
Retinal diseases are a leading cause of blindness in developed countries, accounting for the largest share of visually impaired children, working-age adults (inherited retinal disease), and elderly individuals (age-related macular degeneration). These conditions need specialised clinicians to interpret multimodal retinal imaging, with diagnosis and intervention potentially delayed. With an increasing and ageing population, this is becoming a global health priority. One solution is the development of artificial intelligence (AI) software to facilitate rapid data processing. Herein, we review research offering decision support for the diagnosis, classification, monitoring, and treatment of retinal disease using AI. We have prioritised diabetic retinopathy, age-related macular degeneration, inherited retinal disease, and retinopathy of prematurity. There is cautious optimism that these algorithms will be integrated into routine clinical practice to facilitate access to vision-saving treatments, improve efficiency of healthcare systems, and assist clinicians in processing the ever-increasing volume of multimodal data, thereby also liberating time for doctor-patient interaction and co-development of personalised management plans.
Collapse
Affiliation(s)
- Malena Daich Varela
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
| | | | | | | | - Nikolas Pontikos
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
| | | | - Michel Michaelides
- UCL Institute of Ophthalmology, London, UK.
- Moorfields Eye Hospital, London, UK.
| |
Collapse
|
28
|
Carmichael J, Abdi S, Balaskas K, Costanza E, Blandford A. The effectiveness of interventions for optometric referrals into the hospital eye service: A review. Ophthalmic Physiol Opt 2023; 43:1510-1523. [PMID: 37632154 PMCID: PMC10947293 DOI: 10.1111/opo.13219] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Revised: 08/05/2023] [Accepted: 08/07/2023] [Indexed: 08/27/2023]
Abstract
PURPOSE Ophthalmic services are currently under considerable stress; in the UK, ophthalmology departments have the highest number of outpatient appointments of any department within the National Health Service. Recognising the need for intervention, several approaches have been trialled to tackle the high numbers of false-positive referrals initiated in primary care and seen face to face within the hospital eye service (HES). In this mixed-methods narrative synthesis, we explored interventions based on their clinical impact, cost and acceptability to determine whether they are clinically effective, safe and sustainable. A systematic literature search of PubMed, MEDLINE and CINAHL, guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), was used to identify appropriate studies published between December 2001 and December 2022. RECENT FINDINGS A total of 55 studies were reviewed. Four main interventions were assessed, where two studies covered more than one type: training and guidelines (n = 8), referral filtering schemes (n = 32), asynchronous teleophthalmology (n = 13) and synchronous teleophthalmology (n = 5). All four approaches demonstrated effectiveness for reducing false-positive referrals to the HES. There was sufficient evidence for stakeholder acceptance and cost-effectiveness of referral filtering schemes; however, cost comparisons involved assumptions. Referral filtering and asynchronous teleophthalmology reported moderate levels of false-negative cases (2%-20%), defined as discharged patients requiring HES monitoring. SUMMARY The effectiveness of interventions varied depending on which outcome and stakeholder was considered. More studies are required to explore stakeholder opinions around all interventions. In order to maximise clinical safety, it may be appropriate to combine more than one approach, such as referral filtering schemes with virtual review of discharged patients to assess the rate of false-negative cases. The implementation of a successful intervention is more complex than a 'one-size-fits-all' approach and there is potential space for newer types of interventions, such as artificial intelligence clinical support systems within the referral pathway.
Collapse
Affiliation(s)
- Josie Carmichael
- University College London Interaction Centre (UCLIC), UCLLondonUK
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCLInstitute of OphthalmologyLondonUK
| | - Sarah Abdi
- University College London Interaction Centre (UCLIC), UCLLondonUK
| | - Konstantinos Balaskas
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCLInstitute of OphthalmologyLondonUK
| | - Enrico Costanza
- University College London Interaction Centre (UCLIC), UCLLondonUK
| | - Ann Blandford
- University College London Interaction Centre (UCLIC), UCLLondonUK
| |
Collapse
|
29
|
Rajesh AE, Davidson OQ, Lee CS, Lee AY. Artificial Intelligence and Diabetic Retinopathy: AI Framework, Prospective Studies, Head-to-head Validation, and Cost-effectiveness. Diabetes Care 2023; 46:1728-1739. [PMID: 37729502 PMCID: PMC10516248 DOI: 10.2337/dci23-0032] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/25/2023] [Accepted: 07/15/2023] [Indexed: 09/22/2023]
Abstract
Current guidelines recommend that individuals with diabetes receive yearly eye exams for detection of referable diabetic retinopathy (DR), one of the leading causes of new-onset blindness. For addressing the immense screening burden, artificial intelligence (AI) algorithms have been developed to autonomously screen for DR from fundus photography without human input. Over the last 10 years, many AI algorithms have achieved good sensitivity and specificity (>85%) for detection of referable DR compared with human graders; however, many questions still remain. In this narrative review on AI in DR screening, we discuss key concepts in AI algorithm development as a background for understanding the algorithms. We present the AI algorithms that have been prospectively validated against human graders and demonstrate the variability of reference standards and cohort demographics. We review the limited head-to-head validation studies where investigators attempt to directly compare the available algorithms. Next, we discuss the literature regarding cost-effectiveness, equity and bias, and medicolegal considerations, all of which play a role in the implementation of these AI algorithms in clinical practice. Lastly, we highlight ongoing efforts to bridge gaps in AI model data sets to pursue equitable development and delivery.
Collapse
Affiliation(s)
- Anand E. Rajesh
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
| | - Oliver Q. Davidson
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
| | - Cecilia S. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
| | - Aaron Y. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
| |
Collapse
|
30
|
Zhou Y, Chia MA, Wagner SK, Ayhan MS, Williamson DJ, Struyven RR, Liu T, Xu M, Lozano MG, Woodward-Court P, Kihara Y, Altmann A, Lee AY, Topol EJ, Denniston AK, Alexander DC, Keane PA. A foundation model for generalizable disease detection from retinal images. Nature 2023; 622:156-163. [PMID: 37704728 PMCID: PMC10550819 DOI: 10.1038/s41586-023-06555-x] [Citation(s) in RCA: 192] [Impact Index Per Article: 96.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Accepted: 08/18/2023] [Indexed: 09/15/2023]
Abstract
Medical artificial intelligence (AI) offers great potential for recognizing signs of health conditions in retinal images and expediting the diagnosis of eye diseases and systemic disorders [1]. However, the development of AI models requires substantial annotation and models are usually task-specific with limited generalizability to different clinical applications [2]. Here, we present RETFound, a foundation model for retinal images that learns generalizable representations from unlabelled retinal images and provides a basis for label-efficient model adaptation in several applications. Specifically, RETFound is trained on 1.6 million unlabelled retinal images by means of self-supervised learning and then adapted to disease detection tasks with explicit labels. We show that adapted RETFound consistently outperforms several comparison models in the diagnosis and prognosis of sight-threatening eye diseases, as well as incident prediction of complex systemic disorders such as heart failure and myocardial infarction with fewer labelled data. RETFound provides a generalizable solution to improve model performance and alleviate the annotation workload of experts to enable broad clinical AI applications from retinal imaging.
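As a hedged illustration of the label-efficient adaptation described above: once a foundation model has produced image embeddings, a small labelled set can train a lightweight classifier on top (a linear probe). The arrays below are random stand-ins, not RETFound outputs, and the embedding dimension is an assumption.

```python
# Hedged linear-probe sketch on precomputed foundation-model embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
train_emb = rng.normal(size=(200, 1024))   # embeddings of a small labelled set (placeholder)
train_y = rng.integers(0, 2, size=200)     # e.g. disease present / absent
test_emb = rng.normal(size=(100, 1024))
test_y = rng.integers(0, 2, size=100)

probe = LogisticRegression(max_iter=1000).fit(train_emb, train_y)
auc = roc_auc_score(test_y, probe.predict_proba(test_emb)[:, 1])
print(f"Linear-probe AUC: {auc:.2f}")      # near 0.5 on random data, as expected
```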
Collapse
Affiliation(s)
- Yukun Zhou
- Centre for Medical Image Computing, University College London, London, UK.
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK.
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK.
| | - Mark A Chia
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Siegfried K Wagner
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Murat S Ayhan
- Centre for Medical Image Computing, University College London, London, UK
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Dominic J Williamson
- Centre for Medical Image Computing, University College London, London, UK
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Robbert R Struyven
- Centre for Medical Image Computing, University College London, London, UK
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Timing Liu
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - Moucheng Xu
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| | - Mateo G Lozano
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Department of Computer Science, University of Coruña, A Coruña, Spain
| | - Peter Woodward-Court
- Centre for Medical Image Computing, University College London, London, UK
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Health Informatics, University College London, London, UK
| | - Yuka Kihara
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, WA, USA
| | - Andre Altmann
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| | - Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, WA, USA
| | - Eric J Topol
- Department of Molecular Medicine, Scripps Research, La Jolla, CA, USA
| | - Alastair K Denniston
- Academic Unit of Ophthalmology, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
| | - Daniel C Alexander
- Centre for Medical Image Computing, University College London, London, UK
- Department of Computer Science, University College London, London, UK
| | - Pearse A Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK.
- Institute of Ophthalmology, University College London, London, UK.
| |
Collapse
|
31
|
Najm A. Digital health in rheumatology: Where do we stand? How much further do we need to go? Joint Bone Spine 2023; 91:105644. [PMID: 39491422 DOI: 10.1016/j.jbspin.2023.105644] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/06/2023] [Indexed: 11/05/2024]
Affiliation(s)
- Aurélie Najm
- School of Infection & Immunity, College of Medical, Veterinary and Life Sciences, Sir Graeme Davies Building, University of Glasgow, 120 University Place G12 8TA, Glasgow, UK.
| |
Collapse
|
32
|
Coronado I, Pachade S, Trucco E, Abdelkhaleq R, Yan J, Salazar-Marioni S, Jagolino-Cole A, Bahrainian M, Channa R, Sheth SA, Giancardo L. Synthetic OCT-A blood vessel maps using fundus images and generative adversarial networks. Sci Rep 2023; 13:15325. [PMID: 37714881 PMCID: PMC10504307 DOI: 10.1038/s41598-023-42062-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Accepted: 09/05/2023] [Indexed: 09/17/2023] Open
Abstract
Vessel segmentation in fundus images permits understanding retinal diseases and computing image-based biomarkers. However, manual vessel segmentation is a time-consuming process. Optical coherence tomography angiography (OCT-A) allows direct, non-invasive estimation of retinal vessels. Unfortunately, compared to fundus images, OCT-A cameras are more expensive, less portable, and have a reduced field of view. We present an automated strategy relying on generative adversarial networks to create vascular maps from fundus images without using manual vessel segmentation maps for training. Further post-processing used for standard en face OCT-A allows a vessel segmentation map to be obtained. We compare our approach to state-of-the-art vessel segmentation algorithms trained on manual vessel segmentation maps and vessel segmentations derived from OCT-A. We evaluate them from an automatic vascular segmentation perspective and as vessel density estimators, i.e., the most common imaging biomarker for OCT-A used in studies. Using OCT-A as a training target instead of manual vessel delineations yields improved vascular maps for the optic disc area and is comparable to the best-performing vessel segmentation algorithm in the macular region. This technique could reduce the cost and effort incurred when training vessel segmentation algorithms. To incentivize research in this field, we will make the dataset publicly available to the scientific community.
Collapse
Affiliation(s)
- Ivan Coronado
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Samiksha Pachade
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Emanuele Trucco
- VAMPIRE project, School of Science and Engineering (Computing), University of Dundee, Dundee, Scotland, UK
| | - Rania Abdelkhaleq
- McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Juntao Yan
- McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Sergio Salazar-Marioni
- McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Amanda Jagolino-Cole
- McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Mozhdeh Bahrainian
- Department of Ophthalmology and Visual Sciences, University of Wisconsin-Madison, Madison, WI, USA
| | - Roomasa Channa
- Department of Ophthalmology and Visual Sciences, University of Wisconsin-Madison, Madison, WI, USA
| | - Sunil A Sheth
- McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Luca Giancardo
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, USA.
| |
Collapse
|
33
|
Zhu D, Ge A, Chen X, Wang Q, Wu J, Liu S. Supervised Contrastive Learning with Angular Margin for the Detection and Grading of Diabetic Retinopathy. Diagnostics (Basel) 2023; 13:2389. [PMID: 37510133 PMCID: PMC10378050 DOI: 10.3390/diagnostics13142389] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2023] [Revised: 07/06/2023] [Accepted: 07/13/2023] [Indexed: 07/30/2023] Open
Abstract
Many researchers have realized the intelligent medical diagnosis of diabetic retinopathy (DR) from fundus images by using deep learning methods, including supervised contrastive learning (SupCon). However, although SupCon brings label information into the calculation of contrastive learning, it does not distinguish between augmented positives and same-label positives. As a result, we propose the concept of Angular Margin and incorporate it into SupCon to address this issue. To demonstrate the effectiveness of our strategy, we tested it on two datasets for the detection and grading of DR. To align with previous work, Accuracy, Precision, Recall, F1, and AUC were selected as evaluation metrics. Moreover, we also chose alignment and uniformity to verify the effect of representation learning and UMAP (Uniform Manifold Approximation and Projection) to visualize fundus image embeddings. In summary, DR detection achieved state-of-the-art results across all metrics, with Accuracy = 98.91, Precision = 98.93, Recall = 98.90, F1 = 98.91, and AUC = 99.80. The grading also attained state-of-the-art results in terms of Accuracy and AUC, which were 85.61 and 93.97, respectively. The experimental results demonstrate that Angular Margin is an excellent intelligent medical diagnostic algorithm, performing well in both DR detection and grading tasks.
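A hedged PyTorch sketch of the core idea above: a supervised contrastive (SupCon) loss in which same-label positive pairs receive an additive angular margin, making them harder positives that must be pulled closer than plain augmented pairs. The margin and temperature values are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of a SupCon loss with an additive angular margin on positive pairs.
import torch
import torch.nn.functional as F

def supcon_angular_margin(features, labels, temperature=0.1, margin=0.1):
    """features: (N, D) embeddings; labels: (N,) integer class labels."""
    features = F.normalize(features, dim=1)
    cos = features @ features.t()                          # pairwise cosine similarities
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    same = labels.unsqueeze(0) == labels.unsqueeze(1)      # same-label mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    pos_mask = (same & ~eye).float()
    # Apply the angular margin only to positive pairs (harder positives)
    logits = torch.where(pos_mask.bool(), torch.cos(theta + margin), cos) / temperature
    # Standard SupCon: for each anchor, average log-probability of its positives
    exp_logits = torch.exp(logits).masked_fill(eye, 0.0)   # exclude self from denominator
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True) + 1e-12)
    mean_pos = (log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    has_pos = pos_mask.sum(1) > 0                           # skip anchors without positives
    return -(mean_pos[has_pos]).mean()

loss = supcon_angular_margin(torch.randn(16, 128), torch.randint(0, 5, (16,)))
```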
Collapse
Affiliation(s)
- Dongsheng Zhu
- Academy for Engineering & Technology, Fudan University, Shanghai 200433, China
| | - Aiming Ge
- Academy for Engineering & Technology, Fudan University, Shanghai 200433, China
- School of Information Science and Technology, Fudan University, Shanghai 200433, China
| | - Xindi Chen
- Academy for Engineering & Technology, Fudan University, Shanghai 200433, China
| | - Qiuyang Wang
- School of Information Science and Technology, Fudan University, Shanghai 200433, China
| | - Jiangbo Wu
- School of Information Science and Technology, Fudan University, Shanghai 200433, China
| | - Shuo Liu
- School of Information Science and Technology, Fudan University, Shanghai 200433, China
| |
Collapse
|
34
|
Chłopowiec AR, Karanowski K, Skrzypczak T, Grzesiuk M, Chłopowiec AB, Tabakov M. Counteracting Data Bias and Class Imbalance-Towards a Useful and Reliable Retinal Disease Recognition System. Diagnostics (Basel) 2023; 13:diagnostics13111904. [PMID: 37296756 DOI: 10.3390/diagnostics13111904] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Revised: 05/22/2023] [Accepted: 05/25/2023] [Indexed: 06/12/2023] Open
Abstract
Multiple studies have presented satisfactory performance for the treatment of various ocular diseases. To date, no study has described a multiclass model that is medically accurate and trained on a large, diverse dataset. No study has addressed the class imbalance problem in one giant dataset originating from multiple large, diverse eye fundus image collections. To ensure a real-life clinical environment and mitigate the problem of biased medical image data, 22 publicly available datasets were merged. To secure medical validity, only diabetic retinopathy (DR), age-related macular degeneration (AMD) and glaucoma (GL) were included. The state-of-the-art models ConvNeXt, RegNet and ResNet were utilized. In the resulting dataset, there were 86,415 normal, 3787 GL, 632 AMD and 34,379 DR fundus images. ConvNeXt-Tiny achieved the best results for recognizing most of the examined eye diseases across most metrics. The overall accuracy was 80.46 ± 1.48. Specific accuracy values were: 80.01 ± 1.10 for normal eye fundus, 97.20 ± 0.66 for GL, 98.14 ± 0.31 for AMD, and 80.66 ± 1.27 for DR. A suitable screening model for the most prevalent retinal diseases in ageing societies was designed. The model was developed on a diverse, combined large dataset, which makes the obtained results less biased and more generalizable.
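One common way to counteract the class imbalance described above (86,415 normal versus 632 AMD images) is to feed inverse-frequency class weights into a weighted loss; the sketch below uses scikit-learn's balanced heuristic and is illustrative rather than the authors' code.

```python
# Illustrative class weights for the imbalanced merged dataset described above.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

labels = np.array([0] * 86415 + [1] * 3787 + [2] * 632 + [3] * 34379)  # normal, GL, AMD, DR
classes = np.unique(labels)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=labels)
print(dict(zip(["normal", "GL", "AMD", "DR"], np.round(weights, 2))))
# Rare classes (AMD, GL) receive much larger weights than the normal class.
```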
Collapse
Affiliation(s)
- Adam R Chłopowiec
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
| | - Konrad Karanowski
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
| | - Tomasz Skrzypczak
- Faculty of Medicine, Wroclaw Medical University, Wybrzeże Ludwika Pasteura 1, 50-367 Wroclaw, Poland
| | - Mateusz Grzesiuk
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
| | - Adrian B Chłopowiec
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
| | - Martin Tabakov
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
| |
Collapse
|
35
|
Lupidi M, Danieli L, Fruttini D, Nicolai M, Lassandro N, Chhablani J, Mariotti C. Artificial intelligence in diabetic retinopathy screening: clinical assessment using handheld fundus camera in a real-life setting. Acta Diabetol 2023:10.1007/s00592-023-02104-0. [PMID: 37154944 PMCID: PMC10166040 DOI: 10.1007/s00592-023-02104-0] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Accepted: 04/23/2023] [Indexed: 05/10/2023]
Abstract
AIM Diabetic retinopathy (DR) represents the main cause of vision loss among working-age people. Prompt screening for this condition may prevent its worst complications. This study aims to validate the built-in artificial intelligence (AI) algorithm Selena+ of a handheld fundus camera (Optomed Aurora, Optomed, Oulu, Finland) for first-line screening in a real-world clinical setting. METHODS This was an observational cross-sectional study including 256 eyes of 256 consecutive patients. The sample included both diabetic and non-diabetic patients. Each patient received a 50°, macula-centered, non-mydriatic fundus photograph and, after pupil dilation, a complete fundus examination by an experienced retina specialist. All images were then analyzed by a skilled operator and by the AI algorithm. The results of the three procedures were then compared. RESULTS The agreement between the operator-based fundus analysis in biomicroscopy and the fundus photographs was 100%. The AI algorithm revealed signs of DR in 121 of the 125 patients with DR (96.8%) and found no signs of DR in 122 of the 126 non-diabetic patients (96.8%). The sensitivity of the AI algorithm was 96.8% and the specificity 96.8%. The overall concordance coefficient k (95% CI) between the AI-based assessment and fundus biomicroscopy was 0.935 (0.891-0.979). CONCLUSIONS The Aurora fundus camera is effective for first-line screening of DR. Its built-in AI software can be considered a reliable tool to automatically identify the presence of signs of DR and can therefore be employed as a promising resource in large screening campaigns.
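The overall concordance coefficient k reported above can be computed as Cohen's kappa between the AI assessment and biomicroscopy; the label vectors in this sketch are fabricated stand-ins, not study data.

```python
# Illustrative agreement computation between AI output and the clinical reference.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

ai_labels   = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]   # 1 = signs of DR, 0 = no signs (AI)
exam_labels = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]   # reference: dilated fundus examination

print(confusion_matrix(exam_labels, ai_labels))
print(f"Cohen's kappa: {cohen_kappa_score(ai_labels, exam_labels):.3f}")
```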
Collapse
Affiliation(s)
- Marco Lupidi
- Eye Clinic, Department of Experimental and Clinical Medicine, Polytechnic University of Marche, Ancona, Italy.
- Fondazione per la Macula Onlus, Dipartimento di Neuroscienze, Riabilitazione, Oftalmologia, Genetica e Scienze Materno-Infantili (DINOGMI), University Eye Clinic, Genoa, Italy.
| | | | - Daniela Fruttini
- Department of Medicine and Surgery, University of Perugia, S. Maria della Misericordia Hospital, Perugia, Italy
| | - Michele Nicolai
- Eye Clinic, Department of Experimental and Clinical Medicine, Polytechnic University of Marche, Ancona, Italy
| | - Nicola Lassandro
- Eye Clinic, Department of Experimental and Clinical Medicine, Polytechnic University of Marche, Ancona, Italy
| | - Jay Chhablani
- Department of Ophthalmology, UPMC Eye Center, University of Pittsburgh, Pittsburgh, USA
| | - Cesare Mariotti
- Eye Clinic, Department of Experimental and Clinical Medicine, Polytechnic University of Marche, Ancona, Italy
| |
Collapse
|
36
|
Cao S, Zhang R, Jiang A, Kuerban M, Wumaier A, Wu J, Xie K, Aizezi M, Tuersun A, Liang X, Chen R. Application effect of an artificial intelligence-based fundus screening system: evaluation in a clinical setting and population screening. Biomed Eng Online 2023; 22:38. [PMID: 37095516 PMCID: PMC10127070 DOI: 10.1186/s12938-023-01097-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2022] [Accepted: 03/24/2023] [Indexed: 04/26/2023] Open
Abstract
BACKGROUND To investigate the application effect of an artificial intelligence (AI)-based fundus screening system in a real-world clinical environment. METHODS A total of 637 color fundus images were included in the analysis of the AI-based fundus screening system in the clinical environment, and 20,355 images were analyzed in the population screening. RESULTS The AI-based fundus screening system demonstrated superior diagnostic effectiveness for diabetic retinopathy (DR), retinal vein occlusion (RVO) and pathological myopia (PM) against the gold-standard referral. The sensitivity, specificity, accuracy, positive predictive value (PPV) and negative predictive value (NPV) for these three fundus abnormalities were higher (all > 80%) than those for age-related macular degeneration (ARMD), referable glaucoma and other abnormalities. The percentages of different diagnostic conditions were similar in the clinical environment and the population screening. CONCLUSIONS In a real-world setting, our AI-based fundus screening system could detect seven conditions, with better performance for DR, RVO and PM. Testing in the clinical environment and through population screening demonstrated the clinical utility of the system for the early detection of ocular fundus abnormalities and the prevention of blindness.
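Because PPV and NPV depend on disease prevalence, the same algorithm can yield different predictive values in a clinic population than in population screening. The sketch below illustrates this with Bayes' rule; the sensitivity, specificity and prevalence values are placeholders, not figures from the study.

```python
# Sketch: prevalence-dependence of PPV and NPV for a fixed sensitivity/specificity.
def predictive_values(sensitivity, specificity, prevalence):
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

for prev in (0.30, 0.05):  # e.g. referral clinic vs. community screening (illustrative)
    ppv, npv = predictive_values(sensitivity=0.90, specificity=0.90, prevalence=prev)
    print(f"prevalence={prev:.0%}: PPV={ppv:.2f}, NPV={npv:.2f}")
```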
Collapse
Affiliation(s)
- Shujuan Cao
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Rongpei Zhang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Aixin Jiang
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Mayila Kuerban
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Aizezi Wumaier
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Jianhua Wu
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Kaihua Xie
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Mireayi Aizezi
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Abudurexiti Tuersun
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Xuanwei Liang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China.
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China.
| | - Rongxin Chen
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China.
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China.
| |
Collapse
|
37
|
Abreu-Gonzalez R, Rodríguez-Martín JN, Quezada-Peralta G, Rodrigo-Bello JJ, Gil-Hernández MA, Bermúdez-Pérez C, Donate-López J. Retinal age as a predictive biomarker of the diabetic retinopathy grade. ARCHIVOS DE LA SOCIEDAD ESPANOLA DE OFTALMOLOGIA 2023; 98:265-269. [PMID: 37075840 DOI: 10.1016/j.oftale.2023.04.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Accepted: 02/12/2023] [Indexed: 04/21/2023]
Abstract
OBJECTIVE To apply artificial intelligence (AI) techniques, through deep learning algorithms, to develop and optimize a system for predicting a person's age from a color retinography, and to study a possible relationship between the evolution of diabetic retinopathy (DR) and premature aging of the retina. METHODS A convolutional network was trained to estimate a person's age from a retinography. Training was carried out on a set of retinographies of patients with diabetes previously divided into three subsets (training, validation and test). The difference between the chronological age of the patient and the biological age of the retina was defined as the retinal age gap. RESULTS A set of 98,400 images was used for the training phase, 1,000 images for the validation phase and 13,544 for the test phase. The retinal age gap of the patients without DR was 0.609 years and that of the patients with DR was 1.905 years (p < 0.001), with the distribution by degree of DR being: mild DR, 1.541 years; moderate DR, 3.017 years; severe DR, 3.117 years; and proliferative DR, 8.583 years. CONCLUSIONS The retinal age gap shows a positive mean difference in diabetics with DR versus those without DR, and it increases progressively with the degree of DR. These results could indicate a relationship between the evolution of the disease and premature aging of the retina.
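A minimal sketch of the retinal-age idea follows: a convolutional network regresses age from a fundus photograph, and the retinal age gap is the difference between the predicted retinal age and the chronological age. The ResNet-50 backbone and single-output head are assumptions for illustration; the abstract does not disclose the architecture used.

```python
# Minimal sketch of a retinal-age regressor and the retinal age gap (illustrative only).
import torch
import torch.nn as nn
from torchvision.models import resnet50

class RetinalAgeModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = resnet50(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)  # single age output

    def forward(self, fundus):                    # fundus: (B, 3, H, W)
        return self.backbone(fundus).squeeze(1)   # predicted retinal age in years

def retinal_age_gap(model, fundus, chronological_age):
    """Positive gap = retina predicted 'older' than the patient."""
    with torch.no_grad():
        predicted = model(fundus)
    return predicted - chronological_age
```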
Collapse
Affiliation(s)
- R Abreu-Gonzalez
- Servicio de Oftalmología, Hospital Universitario Nuestra Señora de Candelaria, Santa Cruz de Tenerife, Spain.
| | - J N Rodríguez-Martín
- Servicio de Tecnologías de la Información, Hospital Universitario Nuestra Señora de Candelaria, Santa Cruz de Tenerife, Spain
| | - G Quezada-Peralta
- Servicio de Oftalmología, Hospital Universitario Nuestra Señora de Candelaria, Santa Cruz de Tenerife, Spain
| | - J J Rodrigo-Bello
- Grafcan Cartográfica de Canarias, S. A., Santa Cruz de Tenerife, Spain
| | - M A Gil-Hernández
- Servicio de Oftalmología, Hospital Universitario Nuestra Señora de Candelaria, Santa Cruz de Tenerife, Spain
| | - C Bermúdez-Pérez
- Servicio de Tecnologías de la Información, Hospital Universitario Nuestra Señora de Candelaria, Santa Cruz de Tenerife, Spain
| | - J Donate-López
- Servicio de Oftalmología, Hospital Clínico Universitario San Carlos, Madrid, Spain
| |
Collapse
|
38
|
Artificial Intelligence for Diabetic Retinopathy Screening Using Color Retinal Photographs: From Development to Deployment. Ophthalmol Ther 2023; 12:1419-1437. [PMID: 36862308 PMCID: PMC10164194 DOI: 10.1007/s40123-023-00691-3] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2023] [Accepted: 02/14/2023] [Indexed: 03/03/2023] Open
Abstract
Diabetic retinopathy (DR), a leading cause of preventable blindness, is expected to remain a growing health burden worldwide. Screening to detect early sight-threatening lesions of DR can reduce the burden of vision loss; nevertheless, the process requires intensive manual labor and extensive resources to accommodate the increasing number of patients with diabetes. Artificial intelligence (AI) has been shown to be an effective tool that can potentially lower the burden of DR screening and vision loss. In this article, we review the use of AI for DR screening on color retinal photographs across the different phases of application, from development to deployment. Early studies of machine learning (ML)-based algorithms using feature extraction to detect DR achieved high sensitivity but relatively lower specificity. Robust sensitivity and specificity were achieved with the application of deep learning (DL), although ML is still used in some tasks. For most algorithms, public datasets, which provide the large numbers of photographs required, were utilized in retrospective validation during development. Large prospective clinical validation studies led to the approval of DL for autonomous screening of DR, although a semi-autonomous approach may be preferable in some real-world settings. There have been few reports on real-world implementations of DL for DR screening. AI may improve some real-world indicators of eye care in DR, such as screening uptake and referral adherence, but this has not been proven. Challenges in deployment include workflow issues, such as the need for mydriasis to reduce the proportion of ungradable images; technical issues, such as integration into electronic health record systems and existing camera systems; ethical issues, such as data privacy and security; acceptance by personnel and patients; and health-economic issues, such as the need to conduct health-economic evaluations of AI use in the local context. The deployment of AI for DR screening should follow the governance model for AI in healthcare, which outlines four main components: fairness, transparency, trustworthiness, and accountability.
Collapse
|
39
|
Computational intelligence in eye disease diagnosis: a comparative study. Med Biol Eng Comput 2023; 61:593-615. [PMID: 36595155 DOI: 10.1007/s11517-022-02737-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Accepted: 12/09/2022] [Indexed: 01/04/2023]
Abstract
In recent years, eye disorders have become an important health issue among older people. Individuals with eye diseases are often unaware of the gradual progression of symptoms; therefore, routine eye examinations are required for early diagnosis. Usually, eye disorders are identified by an ophthalmologist via a slit-lamp examination. Slit-lamp interpretations can be inadequate because of differences in the analytical skills of ophthalmologists, inconsistency in eye disorder analysis, and record maintenance issues. Therefore, digital eye images and computational intelligence (CI)-based approaches are preferred as assistive methods for eye disease diagnosis. A comparative study of CI-based decision support models for eye disorder diagnosis is presented in this paper. The CI-based decision support systems used for diagnosing eye abnormalities were grouped into anterior and retinal eye abnormality diagnostic systems, and the numerous algorithms used for diagnosing these abnormalities are briefly described. Various eye imaging modalities, pre-processing methods such as reflection removal, contrast enhancement and region-of-interest segmentation, and public eye image databases used for developing CI-based eye disease diagnosis systems are also discussed. In this comparative study, the reliability of various CI-based systems for anterior eye and retinal disorder diagnosis was compared based on precision, sensitivity, and specificity. The outcomes of the comparative analysis indicate that the CI-based anterior and retinal disease diagnosis systems attained significant prediction accuracy. Hence, these CI-based diagnosis systems can be used in clinics to reduce the burden on physicians, minimize fatigue-related misdetection, and support precise clinical decisions.
Collapse
|
40
|
Prediction of postoperative infection in elderly using deep learning-based analysis: an observational cohort study. Aging Clin Exp Res 2023; 35:639-647. [PMID: 36598653 PMCID: PMC10014765 DOI: 10.1007/s40520-022-02325-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Accepted: 12/09/2022] [Indexed: 01/05/2023]
Abstract
Elderly patients are susceptible to postoperative infections, with increased mortality. Using a deep learning model to analyze the perioperative factors that could predict and/or contribute to postoperative infections may improve outcomes in the elderly. This was an observational cohort study of 2,014 elderly patients who had elective surgery at 28 hospitals in China from April to June 2014. We aimed to develop and validate deep learning-based predictive models for postoperative infections in the elderly. 1,510 patients were randomly assigned to the training dataset for establishing the deep learning-based models, and 504 patients were used to validate their effectiveness. The conventional model predicted postoperative infections with an area under the curve (AUC) of 0.728 (95% CI 0.688-0.768), a sensitivity of 66.2% (95% CI 58.2-73.6) and a specificity of 66.8% (95% CI 64.6-68.9). The deep learning model including risk factors relevant to baseline clinical characteristics predicted postoperative infections with an AUC of 0.641 (95% CI 0.545-0.737), with a sensitivity of 34.2% (95% CI 19.6-51.4) and a specificity of 88.8% (95% CI 85.6-91.6). Including risk factors relevant to both baseline variables and surgery, the deep learning model predicted postoperative infections with an AUC of 0.763 (95% CI 0.681-0.844), a sensitivity of 63.2% (95% CI 46-78.2) and a specificity of 80.5% (95% CI 76.6-84). This feasibility study indicated that a deep learning model incorporating these risk factors can predict postoperative infections in the elderly. Further study is needed to assess whether this model can be used to guide clinical practice and improve surgical outcomes in the elderly.
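A generic sketch of the comparison set-up described above (a conventional model versus a neural-network model evaluated on a held-out validation set by AUC) is given below; the features, data and model choices are synthetic stand-ins, not the study's variables.

```python
# Sketch: comparing a conventional risk model with a neural-network model by validation AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(1510, 20)), rng.integers(0, 2, 1510)  # synthetic stand-ins
X_val, y_val = rng.normal(size=(504, 20)), rng.integers(0, 2, 504)

conventional = LogisticRegression(max_iter=1000).fit(X_train, y_train)
deep = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_train, y_train)

for name, clf in [("conventional", conventional), ("deep", deep)]:
    auc = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
    print(f"{name}: validation AUC = {auc:.3f}")
```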
Collapse
|
41
|
Liu L, Wu X, Lin D, Zhao L, Li M, Yun D, Lin Z, Pang J, Li L, Wu Y, Lai W, Xiao W, Shang Y, Feng W, Tan X, Li Q, Liu S, Lin X, Sun J, Zhao Y, Yang X, Ye Q, Zhong Y, Huang X, He Y, Fu Z, Xiang Y, Zhang L, Zhao M, Qu J, Xu F, Lu P, Li J, Xu F, Wei W, Dong L, Dai G, He X, Yan W, Zhu Q, Lu L, Zhang J, Zhou W, Meng X, Li S, Shen M, Jiang Q, Chen N, Zhou X, Li M, Wang Y, Zou H, Zhong H, Yang W, Shou W, Zhong X, Yang Z, Ding L, Hu Y, Tan G, He W, Zhao X, Chen Y, Liu Y, Lin H. DeepFundus: A flow-cytometry-like image quality classifier for boosting the whole life cycle of medical artificial intelligence. Cell Rep Med 2023; 4:100912. [PMID: 36669488 PMCID: PMC9975093 DOI: 10.1016/j.xcrm.2022.100912] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Revised: 11/01/2022] [Accepted: 12/26/2022] [Indexed: 01/20/2023]
Abstract
Medical artificial intelligence (AI) has been moving from the research phase to clinical implementation. However, most AI-based models are built mainly with high-quality images preprocessed in the laboratory, which are not representative of real-world settings. This dataset bias has proven to be a major driver of AI system dysfunction. Inspired by the design of flow cytometry, DeepFundus, a deep-learning-based fundus image classifier, was developed to provide automated and multidimensional image sorting to address this data quality gap. DeepFundus achieves areas under the receiver operating characteristic curve (AUCs) above 0.9 for image classification concerning overall quality, clinical quality factors, and structural quality on both the internal test and national validation datasets. Additionally, DeepFundus can be integrated into both the model development and the clinical application of AI diagnostics to significantly enhance model performance for detecting multiple retinopathies. DeepFundus can therefore be used to construct a data-driven paradigm for improving the entire life cycle of medical AI practice.
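One plausible way to realize multidimensional quality sorting of this kind is a shared backbone with one classification head per quality factor, so a single forward pass scores overall and factor-specific gradability. The sketch below is an assumption-based illustration, not the DeepFundus implementation; the backbone, head names and binary targets are invented for the example.

```python
# Sketch: shared CNN backbone with several quality heads for fundus image sorting.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FundusQualityClassifier(nn.Module):
    def __init__(self, factor_names=("overall", "illumination", "clarity", "structure_visible")):
        super().__init__()
        backbone = resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                      # keep pooled features
        self.backbone = backbone
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, 2) for name in factor_names}  # gradable vs. not
        )

    def forward(self, images):                           # images: (B, 3, H, W)
        feats = self.backbone(images)
        return {name: head(feats) for name, head in self.heads.items()}

model = FundusQualityClassifier()
logits = model(torch.randn(4, 3, 224, 224))
print({name: out.shape for name, out in logits.items()})
```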
Collapse
Affiliation(s)
- Lixue Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China.
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Mingyuan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Dongyuan Yun
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Jianyu Pang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Longhui Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Yuxuan Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Weiyi Lai
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Wei Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Yuanjun Shang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Weibo Feng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Xiao Tan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Qiang Li
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Shenzhen Liu
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xinxin Lin
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Jiaxin Sun
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Yiqi Zhao
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Ximei Yang
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Qinying Ye
- Department of Ophthalmology, Second Affiliated Hospital, Guangdong Medical University, Zhanjiang, Guangdong, China
| | - Yuesi Zhong
- Department of Ophthalmology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xi Huang
- Department of Ophthalmology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Yuan He
- Department of Ophthalmology, The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, China
| | - Ziwei Fu
- Department of Ophthalmology, The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, China
| | - Yi Xiang
- Department of Ophthalmology, Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Li Zhang
- Department of Ophthalmology, Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Mingwei Zhao
- Department of Ophthalmology, People's Hospital of Peking University, Beijing, China
| | - Jinfeng Qu
- Department of Ophthalmology, People's Hospital of Peking University, Beijing, China
| | - Fan Xu
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, Guangxi, China
| | - Peng Lu
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, Guangxi, China
| | - Jianqiao Li
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
| | - Fabao Xu
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
| | - Wenbin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Li Dong
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | | | - Xingru He
- School of Public Health, He University, Shenyang, Liaoning, China
| | - Wentao Yan
- The Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
| | - Qiaolin Zhu
- The Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
| | - Linna Lu
- Department of Ophthalmology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Jiaying Zhang
- Department of Ophthalmology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Wei Zhou
- Department of Ophthalmology, Tianjin Medical University General Hospital, Tianjin, China
| | - Xiangda Meng
- Department of Ophthalmology, Tianjin Medical University General Hospital, Tianjin, China
| | - Shiying Li
- Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen, Fujian, China
| | - Mei Shen
- Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen, Fujian, China
| | - Qin Jiang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
| | - Nan Chen
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
| | - Xingtao Zhou
- Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
| | - Meiyan Li
- Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
| | - Yan Wang
- Tianjin Eye Hospital, Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Nankai University, Tianjin, China
| | - Haohan Zou
- Tianjin Eye Hospital, Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Nankai University, Tianjin, China
| | - Hua Zhong
- Department of Ophthalmology, The First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
| | - Wenyan Yang
- Department of Ophthalmology, The First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
| | - Wulin Shou
- Jiaxing Chaoju Eye Hospital, Jiaxing, Zhejiang, China
| | - Xingwu Zhong
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China
| | - Zhenduo Yang
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China
| | - Lin Ding
- Department of Ophthalmology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang, China
| | - Yongcheng Hu
- Bayannur Xudong Eye Hospital, Bayannur, Inner Mongolia, China
| | - Gang Tan
- Department of Ophthalmology, The First Affiliated Hospital, Hengyang Medical School, University of South China, Hengyang, Hunan, China
| | - Wanji He
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Xin Zhao
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Yuzhong Chen
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Yizhi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China.
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China.
| |
Collapse
|
42
|
Peeters F, Rommes S, Elen B, Gerrits N, Stalmans I, Jacob J, De Boever P. Artificial Intelligence Software for Diabetic Eye Screening: Diagnostic Performance and Impact of Stratification. J Clin Med 2023; 12:jcm12041408. [PMID: 36835942 PMCID: PMC9967595 DOI: 10.3390/jcm12041408] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2023] [Revised: 01/31/2023] [Accepted: 02/07/2023] [Indexed: 02/12/2023] Open
Abstract
AIM To evaluate the MONA.health artificial intelligence screening software for detecting referable diabetic retinopathy (DR) and diabetic macular edema (DME), including subgroup analyses. METHODS The algorithm's threshold value was fixed at the 90% sensitivity operating point on the receiver operating characteristic curve to perform the disease classification. Diagnostic performance was appraised on a private test set and on publicly available datasets. Stratification analysis was executed on the private test set considering age, ethnicity, sex, insulin dependency, year of examination, camera type, image quality, and dilatation status. RESULTS The software displayed an area under the curve (AUC) of 97.28% for DR and 98.08% for DME on the private test set. The specificity and sensitivity for combined DR and DME predictions were 94.24% and 90.91%, respectively. The AUC ranged from 96.91% to 97.99% on the publicly available datasets for DR. AUC values were above 95% in all subgroups, with lower predictive values found for individuals above the age of 65 (82.51% sensitivity) and Caucasians (84.03% sensitivity). CONCLUSION We report good overall performance of the MONA.health screening software for DR and DME. The software performance remains stable, with no significant deterioration of the deep learning models in any studied stratum.
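The threshold-fixing step described in the methods can be reproduced generically by scanning the ROC curve for the first operating point that reaches 90% sensitivity, as in the sketch below; the scores and labels are synthetic, and the procedure is illustrative rather than the MONA.health code.

```python
# Sketch: fixing a decision threshold at the 90%-sensitivity operating point of a ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(42)
labels = rng.integers(0, 2, 1000)
scores = np.clip(labels * 0.3 + rng.normal(0.35, 0.2, 1000), 0, 1)  # toy referable-DR scores

fpr, tpr, thresholds = roc_curve(labels, scores)
idx = np.argmax(tpr >= 0.90)              # first operating point reaching 90% sensitivity
threshold = thresholds[idx]
print(f"threshold={threshold:.3f}, sensitivity={tpr[idx]:.3f}, specificity={1 - fpr[idx]:.3f}")

refer = scores >= threshold               # images at or above the threshold are flagged as referable
```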
Collapse
Affiliation(s)
- Freya Peeters
- Department of Ophthalmology, University Hospitals Leuven, 3000 Leuven, Belgium
- Biomedical Sciences Group, Research Group Ophthalmology, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
- Correspondence:
| | - Stef Rommes
- MONA.health, 3060 Bertem, Belgium
- Flemish Institute for Technological Research (VITO), 2400 Mol, Belgium
| | - Bart Elen
- Flemish Institute for Technological Research (VITO), 2400 Mol, Belgium
| | - Nele Gerrits
- Flemish Institute for Technological Research (VITO), 2400 Mol, Belgium
| | - Ingeborg Stalmans
- Department of Ophthalmology, University Hospitals Leuven, 3000 Leuven, Belgium
- Biomedical Sciences Group, Research Group Ophthalmology, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
| | - Julie Jacob
- Department of Ophthalmology, University Hospitals Leuven, 3000 Leuven, Belgium
- Biomedical Sciences Group, Research Group Ophthalmology, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
| | - Patrick De Boever
- Flemish Institute for Technological Research (VITO), 2400 Mol, Belgium
- Centre for Environmental Sciences, Hasselt University, Diepenbeek, 3500 Hasselt, Belgium
| |
Collapse
|
43
|
Comparing Deep Feature Extraction Strategies for Diabetic Retinopathy Stage Classification from Fundus Images. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2023. [DOI: 10.1007/s13369-022-07547-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
|
44
|
Han Z, Yang B, Deng S, Li Z, Tong Z. Category weighted network and relation weighted label for diabetic retinopathy screening. Comput Biol Med 2023; 152:106408. [PMID: 36516580 DOI: 10.1016/j.compbiomed.2022.106408] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Revised: 11/10/2022] [Accepted: 12/03/2022] [Indexed: 12/08/2022]
Abstract
Diabetic retinopathy (DR) is the primary cause of blindness in adults. Incorporating machine learning into DR grading can improve the accuracy of medical diagnosis. However, problems such as severe data imbalance persist, and existing studies on DR grading ignore the correlation between grade labels. In this study, a category weighted network (CWN) was proposed to achieve data balance at the model level. The CWN provides a reference for weight settings by calculating the per-category gradient norm, reducing the experimental overhead. We also proposed relation weighted labels in place of one-hot labels to exploit the distance relationship between grades. Experiments revealed that the proposed CWN achieved excellent performance on various DR datasets. Furthermore, relation weighted labels exhibit broad applicability and can improve other methods that use one-hot labels. The proposed method achieved kappa scores of 0.9431 and 0.9226 and accuracies of 90.94% and 86.12% on the DDR and APTOS datasets, respectively.
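The relation weighted label idea can be sketched as a soft target whose probability mass decays with the distance between DR grades, so near-miss predictions are penalized less than distant ones. The exponential decay and temperature below are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: distance-aware soft labels for ordinal DR grades instead of one-hot targets.
import torch
import torch.nn.functional as F

NUM_GRADES = 5  # 0: no DR ... 4: proliferative DR

def relation_weighted_label(grade, temperature=1.0):
    grades = torch.arange(NUM_GRADES, dtype=torch.float)
    weights = torch.exp(-torch.abs(grades - grade) / temperature)  # decay with grade distance
    return weights / weights.sum()                                 # soft target summing to 1

def soft_label_loss(logits, grades):
    targets = torch.stack([relation_weighted_label(g.item()) for g in grades])
    return F.kl_div(F.log_softmax(logits, dim=1), targets, reduction="batchmean")

logits = torch.randn(8, NUM_GRADES, requires_grad=True)
grades = torch.randint(0, NUM_GRADES, (8,))
soft_label_loss(logits, grades).backward()
print(relation_weighted_label(2))   # mass centred on grade 2, spread to neighbouring grades
```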
Collapse
Affiliation(s)
- Zhike Han
- Zhejiang University, Hangzhou, 310027, Zhejiang, China; Zhejiang University City College, Hangzhou, 310015, Zhejiang, China
| | - Bin Yang
- Zhejiang University, Hangzhou, 310027, Zhejiang, China
| | | | - Zhuorong Li
- Zhejiang University City College, Hangzhou, 310015, Zhejiang, China.
| | - Zhou Tong
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310058, Zhejiang, China
| |
Collapse
|
45
|
Du J, Huang M, Liu L. AI-Aided Disease Prediction in Visualized Medicine. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2023; 1199:107-126. [PMID: 37460729 DOI: 10.1007/978-981-32-9902-3_6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/20/2023]
Abstract
Artificial intelligence (AI) is playing a vitally important role in driving the next technological revolution. Healthcare is one of its most promising application areas, covering medical imaging, diagnosis, robotics, disease prediction, pharmacy, health management, and hospital management. Achievements in these fields are overturning every aspect of the traditional healthcare system. Therefore, to understand the state of the art of AI in healthcare, as well as the opportunities and obstacles in its development, this chapter discusses the applications of AI in disease detection and outlook and the future trends of AI-aided disease prediction.
Collapse
Affiliation(s)
- Juan Du
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China.
| | - Mengen Huang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
| | - Lin Liu
- Tianjin Key Laboratory of Retinal Functions and Diseases, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
| |
Collapse
|
46
|
Pavithra K, Kumar P, Geetha M, Bhandary SV. Computer aided diagnosis of diabetic macular edema in retinal fundus and OCT images: A review. Biocybern Biomed Eng 2023. [DOI: 10.1016/j.bbe.2022.12.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
|
47
|
Barriada RG, Masip D. An Overview of Deep-Learning-Based Methods for Cardiovascular Risk Assessment with Retinal Images. Diagnostics (Basel) 2022; 13:diagnostics13010068. [PMID: 36611360 PMCID: PMC9818382 DOI: 10.3390/diagnostics13010068] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 12/19/2022] [Accepted: 12/21/2022] [Indexed: 12/28/2022] Open
Abstract
Cardiovascular diseases (CVDs) are one of the most prevalent causes of premature death. Early detection is crucial to prevent and address CVDs in a timely manner. Recent advances in oculomics show that retinal fundus imaging (RFI) can carry relevant information for the early diagnosis of several systemic diseases. There is a large corpus of RFI systematically acquired for diagnosing eye-related diseases that could be used for CVD prevention. Nevertheless, public health systems cannot afford to dedicate expert physicians solely to this data, creating a need for automated diagnosis tools that can raise alarms for patients at risk. Artificial intelligence (AI), and particularly deep learning (DL) models, have become a strong alternative for providing computerized pre-diagnosis for patient risk retrieval. This paper provides a novel review of the major achievements of recent state-of-the-art DL approaches to automated CVD diagnosis. The overview gathers the commonly used datasets, pre-processing techniques, evaluation metrics and deep learning approaches used in 30 different studies. Based on the reviewed articles, this work proposes a classification taxonomy depending on the prediction target and summarizes future research challenges that must be tackled to progress in this line.
Collapse
|
48
|
Pre-hospital prediction of adverse outcomes in patients with suspected COVID-19: Development, application and comparison of machine learning and deep learning methods. Comput Biol Med 2022; 151:106024. [PMID: 36327887 PMCID: PMC9420071 DOI: 10.1016/j.compbiomed.2022.106024] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2022] [Revised: 08/02/2022] [Accepted: 08/20/2022] [Indexed: 12/27/2022]
Abstract
BACKGROUND COVID-19 infected millions of people and increased mortality worldwide. Patients with suspected COVID-19 utilised emergency medical services (EMS) and attended emergency departments, resulting in increased pressures and waiting times. Rapid and accurate decision-making is required to identify patients at high-risk of clinical deterioration following COVID-19 infection, whilst also avoiding unnecessary hospital admissions. Our study aimed to develop artificial intelligence models to predict adverse outcomes in suspected COVID-19 patients attended by EMS clinicians. METHOD Linked ambulance service data were obtained for 7,549 adult patients with suspected COVID-19 infection attended by EMS clinicians in the Yorkshire and Humber region (England) from 18-03-2020 to 29-06-2020. We used support vector machines (SVM), extreme gradient boosting, artificial neural network (ANN) models, ensemble learning methods and logistic regression to predict the primary outcome (death or need for organ support within 30 days). Models were compared with two baselines: the decision made by EMS clinicians to convey patients to hospital, and the PRIEST clinical severity score. RESULTS Of the 7,549 patients attended by EMS clinicians, 1,330 (17.6%) experienced the primary outcome. Machine Learning methods showed slight improvements in sensitivity over baseline results. Further improvements were obtained using stacking ensemble methods, the best geometric mean (GM) results were obtained using SVM and ANN as base learners when maximising sensitivity and specificity. CONCLUSIONS These methods could potentially reduce the numbers of patients conveyed to hospital without a concomitant increase in adverse outcomes. Further work is required to test the models externally and develop an automated system for use in clinical settings.
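A generic sketch of the best-performing configuration described above (a stacking ensemble with SVM and ANN base learners, scored by the geometric mean of sensitivity and specificity) is shown below using scikit-learn; the data are synthetic stand-ins and the hyperparameters are illustrative.

```python
# Sketch: stacking ensemble (SVM + ANN base learners, logistic meta-learner) scored by the
# geometric mean (GM) of sensitivity and specificity.
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 15))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=2000) > 1.0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, stack.predict(X_te)).ravel()
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
print(f"GM = {np.sqrt(sensitivity * specificity):.3f}")
```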
Collapse
|
49
|
Martins TGDS, Schor P, Mendes LGA, Fowler S, Silva R. Use of artificial intelligence in ophthalmology: a narrative review. SAO PAULO MED J 2022; 140:837-845. [PMID: 36043665 PMCID: PMC9671570 DOI: 10.1590/1516-3180.2021.0713.r1.22022022] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/04/2021] [Accepted: 02/22/2022] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Artificial intelligence (AI) deals with the development of algorithms that perceive their environment and perform actions that maximize the chance of successfully reaching predetermined goals. OBJECTIVE To provide an overview of the basic principles of AI and its main studies in the fields of glaucoma, retinopathy of prematurity, age-related macular degeneration and diabetic retinopathy, and, from this perspective, to present the limitations and potential challenges that have accompanied the implementation and development of this new technology within ophthalmology. DESIGN AND SETTING Narrative review developed by a research group at the Universidade Federal de São Paulo (UNIFESP), São Paulo (SP), Brazil. METHODS We searched the literature on the main applications of AI within ophthalmology, using the keywords "artificial intelligence", "diabetic retinopathy", "macular degeneration age-related", "glaucoma" and "retinopathy of prematurity", covering the period from January 1, 2007, to May 3, 2021. We used the MEDLINE database (via PubMed) and the LILACS database (via Virtual Health Library) to identify relevant articles. RESULTS We retrieved 457 references, of which 47 were considered eligible for intensive review and critical analysis. CONCLUSION The use of technology, as embodied in AI algorithms, is a way of providing an increasingly accurate service and enhancing scientific research; it complements and innovates upon the daily skills of ophthalmologists. Thus, AI adds technology to human expertise.
Collapse
Affiliation(s)
- Thiago Gonçalves dos Santos Martins
- MD, PhD. Researcher, Department of Ophthalmology, Universidade Federal de São Paulo (UNIFESP), São Paulo (SP), Brazil; Research Fellow, Department of Ophthalmology, Ludwig Maximilians University (LMU), Munich, Germany; and Doctoral Student, University of Coimbra (UC), Coimbra, Portugal
| | - Paulo Schor
- PhD. Professor, Department of Ophthalmology, Universidade Federal de São Paulo (UNIFESP), São Paulo (SP), Brazil
| | | | - Susan Fowler
- RN, PhD. Certified Neuroscience Registered Nurse (CNRN) and Research Fellow of American Heart Association, Department of Ophthalmology, Orlando Health, Orlando, United States; Researcher, Department of Ophthalmology, Walden University, Minneapolis (MN), United States; and Researcher, Department of Ophthalmology, Thomas Edison State University (TESU), Trenton (NJ), United States
| | - Rufino Silva
- MD, PhD. Fellow of the European Board of Ophthalmology and Professor, Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, University of Coimbra, Coimbra, Portugal; Fellow, Department of Ophthalmology, Centro Hospitalar e Universitário de Coimbra (CHUC), Coimbra, Portugal; and Researcher, Association for Innovation and Biomedical Research on Light and Image (AIBILI), Coimbra, Portugal
| |
Collapse
|
50
|
Katz O, Presil D, Cohen L, Nachmani R, Kirshner N, Hoch Y, Lev T, Hadad A, Hewitt RJ, Owens DR. Evaluation of a New Neural Network Classifier for Diabetic Retinopathy. J Diabetes Sci Technol 2022; 16:1401-1409. [PMID: 34549633 PMCID: PMC9631541 DOI: 10.1177/19322968211042665] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
BACKGROUND Medical image segmentation is a well-studied subject within the field of image processing. The goal of this research is to create an AI retinal screening grading system that is both accurate and fast. We introduce a new segmentation network that achieves state-of-the-art results on semantic segmentation of color fundus photographs. By applying the network to identify anatomical markers of diabetic retinopathy (DR) and diabetic macular edema (DME), we collect sufficient information to classify patients by grades R0 and R1 or above, and M0 and M1. METHODS The AI grading system was trained on screening data to evaluate the presence of DR and DME. The core algorithm of the system is a deep learning network that segments relevant anatomical features in a retinal image. Patients were graded according to the standard NHS Diabetic Eye Screening Program feature-based grading protocol. RESULTS The algorithm's performance was evaluated on a series of 6,981 patient retinal images from routine diabetic eye screenings. It correctly predicted 98.9% of retinopathy events and 95.5% of maculopathy events. The prediction rate for non-disease events was 68.6% for retinopathy and 81.2% for maculopathy. CONCLUSION This novel deep learning model, trained and tested on patient data from annual diabetic retinopathy screenings, can classify with high accuracy the DR and DME status of a person with diabetes. The system can be easily reconfigured according to any grading protocol without running a long AI training procedure, and its incorporation can increase graders' productivity and improve the accuracy of the final screening outcome.
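Feature-based grading of this kind can be summarized as a rule layer on top of the segmentation output: any detected DR lesion triggers grade R1 or above, and macular findings trigger M1. The lesion names and rules in the sketch below are simplified illustrations of such a protocol, not the authors' exact logic.

```python
# Sketch: mapping segmented lesion findings to simplified retinopathy/maculopathy grades.
DR_LESIONS = {"microaneurysm", "haemorrhage", "hard_exudate", "cotton_wool_spot"}
MACULOPATHY_LESIONS = {"exudate_within_1dd_of_fovea", "macular_haemorrhage"}

def grade_from_segmentation(detected_lesions):
    """detected_lesions: set of lesion labels found by the segmentation network."""
    retinopathy = "R1+" if detected_lesions & DR_LESIONS else "R0"
    maculopathy = "M1" if detected_lesions & MACULOPATHY_LESIONS else "M0"
    return retinopathy, maculopathy

print(grade_from_segmentation({"microaneurysm"}))                                  # ('R1+', 'M0')
print(grade_from_segmentation({"hard_exudate", "exudate_within_1dd_of_fovea"}))    # ('R1+', 'M1')
```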
Collapse
Affiliation(s)
- Or Katz
- NEC Israeli Research Center, Herzeliya, Israel
| | - Dan Presil
- NEC Israeli Research Center, Herzeliya, Israel
- Dan Presil, BSc, NEC Israeli Research Center, 2 Maskit, Herzeliya, Israel.
| | - Liz Cohen
- NEC Israeli Research Center, Herzeliya, Israel
| | | | | | - Yaacov Hoch
- NEC Israeli Research Center, Herzeliya, Israel
| | - Tsvi Lev
- NEC Israeli Research Center, Herzeliya, Israel
| | - Aviel Hadad
- MD MPH, Ophthalmology Department, Soroka University Medical Center, Be’er Sheva, South District, Israel
| | | | - David R Owens
- Professor of Diabetes, Swansea University Medical School, Swansea, Wales, UK
| |
Collapse
|