1
Ayers AT, Ho CN, Kerr D, Cichosz SL, Mathioudakis N, Wang M, Najafi B, Moon SJ, Pandey A, Klonoff DC. Artificial Intelligence to Diagnose Complications of Diabetes. J Diabetes Sci Technol 2025; 19:246-264. [PMID: 39578435] [DOI: 10.1177/19322968241287773]
Abstract
Artificial intelligence (AI) is increasingly being used to diagnose complications of diabetes. Artificial intelligence is technology that enables computers and machines to simulate human intelligence and solve complicated problems. In this article, we address current and likely future applications of AI to diabetes and its complications, including pharmacoadherence to therapy, diagnosis of hypoglycemia, diabetic eye disease, diabetic kidney disease, diabetic neuropathy, diabetic foot ulcers, and heart failure in diabetes. Artificial intelligence is advantageous because it can handle large and complex datasets from a variety of sources. With each additional type of data incorporated into a clinical picture of a patient, the calculation becomes increasingly complex and specific. Artificial intelligence is the foundation of emerging medical technologies; it will power the future of diagnosing diabetes complications.
Affiliation(s)
- Cindy N Ho
  - Diabetes Technology Society, Burlingame, CA, USA
- David Kerr
  - Center for Health Systems Research, Sutter Health, Santa Barbara, CA, USA
- Simon Lebech Cichosz
  - Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Michelle Wang
  - University of California, San Francisco, San Francisco, CA, USA
- Bijan Najafi
  - Michael E. DeBakey Department of Surgery, Baylor College of Medicine, Houston, TX, USA
  - Center for Advanced Surgical and Interventional Technology (CASIT), Department of Surgery, Geffen School of Medicine, University of California, Los Angeles (UCLA), Los Angeles, CA, USA
- Sun-Joon Moon
  - Division of Endocrinology and Metabolism, Department of Internal Medicine, Kangbuk Samsung Hospital, School of Medicine, Sungkyunkwan University, Seoul, Republic of Korea
- Ambarish Pandey
  - Division of Cardiology and Geriatrics, Department of Internal Medicine, UT Southwestern Medical Center, Dallas, TX, USA
- David C Klonoff
  - Diabetes Technology Society, Burlingame, CA, USA
  - Diabetes Research Institute, Mills-Peninsula Medical Center, San Mateo, CA, USA
2
Murugan SRB, Sanjay S, Somanath A, Mahendradas P, Patil A, Kaur K, Gurnani B. Artificial Intelligence in Uveitis: Innovations in Diagnosis and Therapeutic Strategies. Clin Ophthalmol 2024; 18:3753-3766. [PMID: 39703602] [PMCID: PMC11656483] [DOI: 10.2147/opth.s495307]
Abstract
In the dynamic field of ophthalmology, artificial intelligence (AI) is emerging as a transformative tool in managing complex conditions like uveitis. Characterized by diverse inflammatory responses, uveitis presents significant diagnostic and therapeutic challenges. This systematic review explores the role of AI in advancing diagnostic precision, optimizing therapeutic approaches, and improving patient outcomes in uveitis care. A comprehensive search of PubMed, Scopus, Google Scholar, Web of Science, and Embase identified over 10,000 articles using primary and secondary keywords related to AI and uveitis. Rigorous screening based on predefined criteria reduced the pool to 52 high-quality studies, categorized into six themes: diagnostic support algorithms, screening algorithms, standardization of Uveitis Nomenclature (SUN), AI applications in management, systemic implications of AI, and limitations with future directions. AI technologies, including machine learning (ML) and deep learning (DL), demonstrated proficiency in anterior chamber inflammation detection, vitreous haze grading, and screening for conditions like ocular toxoplasmosis. Despite these advancements, challenges such as dataset quality, algorithmic transparency, and ethical concerns persist. Future research should focus on developing robust, multimodal AI systems and fostering collaboration between academia and industry to ensure equitable, ethical, and effective AI applications. The integration of AI heralds a new era in uveitis management, emphasizing precision medicine and enhanced care delivery.
Affiliation(s)
- Siva Raman Bala Murugan
  - Department of Uveitis and Ocular Inflammation Uveitis Clinic, Aravind Eye Hospital, Pondicherry, 605007, India
- Srinivasan Sanjay
  - Department of Clinical Services, Singapore National Eye Centre, Third Hospital Ave, Singapore City, 168751, Singapore
- Anjana Somanath
  - Department of Uveitis and Ocular Inflammation, Aravind Eye Hospital, Madurai, Tamil Nadu
- Padmamalini Mahendradas
  - Department of Uveitis and Ocular Immunology, Narayana Nethralaya, Bangalore, Karnataka, 560010, India
- Aditya Patil
  - Department of Uveitis and Ocular Immunology, Narayana Nethralaya, Bangalore, Karnataka, 560010, India
- Kirandeep Kaur
  - Department of Cataract, Pediatric Ophthalmology and Strabismus, Gomabai Netralaya and Research Centre, Neemuch, Madhya Pradesh, 458441, India
- Bharat Gurnani
  - Department of Cataract, Cornea and Refractive Surgery, Gomabai Netralaya and Research Centre, Neemuch, Madhya Pradesh, 458441, India
3
Li Z, Zhang Y, Chen Z, Chen J, Hou H, Wang C, Lu Z, Wang X, Geng X, Liu F. Correlation analysis and recurrence evaluation system for patients with recurrent hepatolithiasis: a multicentre retrospective study. Front Digit Health 2024; 6:1510674. [PMID: 39664398] [PMCID: PMC11631919] [DOI: 10.3389/fdgth.2024.1510674]
Abstract
Background: Methods for accurately predicting the prognosis of patients with recurrent hepatolithiasis (RH) after biliary surgery are lacking. This study aimed to develop a model that dynamically predicts the risk of hepatolithiasis recurrence using a machine learning (ML) approach based on multiple clinical high-order correlation data. Materials and methods: Data from patients with RH who underwent surgery at five centres between January 2015 and December 2020 were collected and divided into training and testing sets. Nine predictive models, which we named the Correlation Analysis and Recurrence Evaluation System (CARES), were developed and compared using ML methods to predict the patients' dynamic recurrence risk within 5 post-operative years. We adopted k-fold cross-validation with k = 10 and tested model performance on a separate testing set. The area under the receiver operating characteristic curve (AUC) was used to evaluate the performance of the models, and the significance and direction of each predictive variable were interpreted and justified based on Shapley Additive Explanations (SHAP). Results: Models based on ML methods outperformed those based on traditional regression analysis in predicting recurrence risk in patients with RH, with Extreme Gradient Boosting (XGBoost) and Light Gradient Boosting Machine (LightGBM) showing the best performance, both yielding an AUC of approximately 0.9 or higher. These models performed even better on the testing set than in 10-fold cross-validation, indicating that they were not overfitted. The SHAP method revealed that immediate stone clearance, final stone clearance, number of previous surgeries, and preoperative CA19-9 index were the most important predictors of recurrence after reoperation in patients with RH. An online version of the CARES model was implemented.
Conclusion: The CARES model was developed based on ML methods and encapsulated into an online version for predicting recurrence in patients with RH after hepatectomy, which can guide clinical decision-making and personalised postoperative surveillance.
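The evaluation protocol this abstract describes — gradient-boosted classifiers scored by 10-fold cross-validated AUC and then re-checked on a held-out testing set — can be sketched in a few lines. This is an illustrative stand-in only: scikit-learn's `GradientBoostingClassifier` on synthetic data substitutes for the actual CARES models (XGBoost/LightGBM) and the multicentre clinical features.

```python
# Sketch of a CARES-style evaluation: a gradient-boosted model assessed
# by 10-fold cross-validated AUC, then scored on a held-out testing set.
# Synthetic features stand in for the clinical predictors.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=500, n_features=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0)
# k-fold cross-validation with k = 10, as in the study.
cv_auc = cross_val_score(model, X_train, y_train, cv=10, scoring="roc_auc")

# Final check on the separate testing set guards against overfitting.
model.fit(X_train, y_train)
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
```

Comparing `cv_auc.mean()` with `test_auc` mirrors the paper's overfitting check: a testing-set AUC at or above the cross-validated AUC suggests the model has not merely memorized the training data.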
Affiliation(s)
- Zihan Li
  - Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
  - Cardiology Division, Department of Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Yibo Zhang
  - Department of Analytics, Marketing and Operations, Imperial College London, London, United Kingdom
- Zixiang Chen
  - Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Jiangming Chen
  - Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Hui Hou
  - Department of General Surgery, The Second Affiliated Hospital of Anhui Medical University, Hefei, China
- Cheng Wang
  - Department of General Surgery, The First Affiliated Hospital of the University of Science and Technology of China, Hefei, China
- Zheng Lu
  - Department of General Surgery, The First Affiliated Hospital of Bengbu Medical College, Bengbu, China
- Xiaoming Wang
  - Department of General Surgery, The First Affiliated Hospital of Wannan Medical College, Wuhu, China
- Xiaoping Geng
  - Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Fubao Liu
  - Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
4
Li F, Wang D, Yang Z, Zhang Y, Jiang J, Liu X, Kong K, Zhou F, Tham CC, Medeiros F, Han Y, Grzybowski A, Zangwill LM, Lam DSC, Zhang X. The AI revolution in glaucoma: Bridging challenges with opportunities. Prog Retin Eye Res 2024; 103:101291. [PMID: 39186968] [DOI: 10.1016/j.preteyeres.2024.101291]
Abstract
Recent advancements in artificial intelligence (AI) hold transformative potential for reshaping glaucoma clinical management: improving screening efficacy, sharpening diagnostic precision, and refining the detection of disease progression. However, incorporating AI into healthcare faces significant hurdles both in developing algorithms and in putting them into practice. When creating algorithms, issues arise from the intensive effort required to label data, inconsistent diagnostic standards, and a lack of thorough testing, which often limits the algorithms' widespread applicability. Additionally, the "black box" nature of AI algorithms may make doctors wary or skeptical. When it comes to deploying these tools, challenges include dealing with lower-quality images in real-world situations and the systems' limited ability to generalize across diverse ethnic groups and different diagnostic equipment. Looking ahead, new developments aim to protect data privacy through federated learning paradigms, improve algorithm generalizability by diversifying input data modalities, and augment datasets with synthetic imagery. The integration of smartphones appears promising for deploying AI algorithms in both clinical and non-clinical settings. Furthermore, bringing in large language models (LLMs) to act as interactive tools in medicine may signify a significant change in how healthcare will be delivered in the future. By navigating these challenges and leveraging them as opportunities, the field of glaucoma AI will achieve not only improved algorithmic accuracy and optimized data integration but also a paradigm shift towards enhanced clinical acceptance and a transformative improvement in glaucoma care.
Affiliation(s)
- Fei Li
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Deming Wang
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Zefeng Yang
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Yinhang Zhang
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Jiaxuan Jiang
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Xiaoyi Liu
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Kangjie Kong
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Fengqi Zhou
  - Ophthalmology, Mayo Clinic Health System, Eau Claire, WI, USA.
- Clement C Tham
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China.
- Felipe Medeiros
  - Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA.
- Ying Han
  - University of California, San Francisco, Department of Ophthalmology, San Francisco, CA, USA; The Francis I. Proctor Foundation for Research in Ophthalmology, University of California, San Francisco, CA, USA.
- Andrzej Grzybowski
  - Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
- Linda M Zangwill
  - Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, CA, USA.
- Dennis S C Lam
  - The International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China.
- Xiulan Zhang
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
5
Gill GS, Blair J, Litinsky S. Evaluating the Performance of ChatGPT 3.5 and 4.0 on StatPearls Oculoplastic Surgery Text- and Image-Based Exam Questions. Cureus 2024; 16:e73812. [PMID: 39691123] [PMCID: PMC11650114] [DOI: 10.7759/cureus.73812]
Abstract
INTRODUCTION The emergence of large language models (LLMs) has led to significant interest in their potential use as medical assistive tools. Prior investigations have analyzed the comparative performance of LLM versions within different ophthalmology subspecialties. However, few investigations have characterized LLM performance on image-based questions, a recent advance in LLM capabilities. The purpose of this study was to evaluate the performance of Chat Generative Pre-trained Transformer (ChatGPT) versions 3.5 and 4.0 on image-based and text-only oculoplastic subspecialty questions from the StatPearls and OphthoQuestions question banks. METHODS This study utilized 343 text-only questions and 127 image-based questions from StatPearls, and 89 image-based questions from OphthoQuestions, all specific to oculoplastics. The information collected included correctness, distribution of answers, and whether an additional prompt was necessary. Text-only performance was compared between ChatGPT-3.5 and ChatGPT-4.0, and text-only versus multimodal (image-based) performance was compared for ChatGPT-4.0. RESULTS ChatGPT-3.5 answered 56.85% (195/343) of text-only questions correctly, while ChatGPT-4.0 achieved 73.46% (252/343), a statistically significant difference in accuracy (p<0.05). The biserial correlation between ChatGPT-3.5 and human performance on the StatPearls question bank was 0.198 (SD 0.195). When ChatGPT-3.5 was incorrect, average human correctness was 49.39% (SD 26.27%); when it was correct, human correctness averaged 57.82% (SD 30.14%) (t = 3.57, p = 0.0004). For ChatGPT-4.0, the biserial correlation was 0.226 (SD 0.213). When ChatGPT-4.0 was incorrect, human correctness averaged 45.49% (SD 24.85%); when it was correct, human correctness was 57.02% (SD 29.75%) (t = 4.28, p = 0.0006).
On image-based questions, ChatGPT-4.0 correctly answered 56.94% (123/216), significantly lower than its performance on text-only questions (p<0.05). DISCUSSION AND CONCLUSION This study shows that ChatGPT-4.0 performs better on the oculoplastic subspecialty than prior versions. However, significant challenges remain regarding accuracy, particularly when integrating image-based prompts. While LLMs show promise within medical education, further progress must be made regarding reliability, and caution should be used until further advancement is achieved.
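The biserial correlation reported here relates a binary variable (whether the chatbot answered a question correctly) to a continuous one (the percentage of human test-takers answering it correctly). In its point-biserial form this is directly computable; a minimal NumPy sketch with illustrative numbers, not the study's data:

```python
import numpy as np

def point_biserial(binary, continuous):
    """Point-biserial correlation between a 0/1 variable and a continuous
    variable; with the population standard deviation this equals
    Pearson's r computed on the same pair."""
    b = np.asarray(binary, dtype=float)
    x = np.asarray(continuous, dtype=float)
    m1, m0 = x[b == 1].mean(), x[b == 0].mean()
    p = b.mean()  # proportion of questions the model answered correctly
    return (m1 - m0) * np.sqrt(p * (1 - p)) / x.std()

# Illustrative data: 1 = chatbot correct on that question, paired with
# the percentage of humans who answered that question correctly.
correct = [1, 0, 1, 1, 0, 1, 0, 1]
human_pct = [62, 48, 55, 71, 40, 58, 52, 66]
r = point_biserial(correct, human_pct)
```

A positive `r`, as in the study, means questions the chatbot got right tended to be the ones humans also found easier.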
Affiliation(s)
- Gurnoor S Gill
  - Medical School, Florida Atlantic University Charles E. Schmidt College of Medicine, Boca Raton, USA
- Jacob Blair
  - Ophthalmology, Larkin Community Hospital (LCH) Lake Erie College of Osteopathic Medicine (LECOM), Miami, USA
- Steven Litinsky
  - Ophthalmology, Florida Atlantic University Charles E. Schmidt College of Medicine, Boca Raton, USA
6
Giri BR, Jakka D, Sandoval MA, Kulkarni VR, Bao Q. Advancements in Ocular Therapy: A Review of Emerging Drug Delivery Approaches and Pharmaceutical Technologies. Pharmaceutics 2024; 16:1325. [PMID: 39458654] [PMCID: PMC11511072] [DOI: 10.3390/pharmaceutics16101325]
Abstract
Eye disorders affect a substantial portion of the global population, yet the availability of efficacious ophthalmic drug products remains limited. This can be partly ascribed to a number of factors: (1) inadequate understanding of physiological barriers, treatment strategies, drug and polymer properties, and delivery systems; (2) challenges in effectively delivering drugs to the anterior and posterior segments of the eye due to anatomical and physiological constraints; and (3) manufacturing and regulatory hurdles in ocular drug product development. The present review discusses innovative ocular delivery and treatments, encompassing implants, liposomes, nanoparticles, nanomicelles, microparticles, iontophoresis, in situ gels, contact lenses, microneedles, hydrogels, bispecific antibodies, and gene delivery strategies. Furthermore, this review also introduces advanced manufacturing technologies such as 3D printing and hot-melt extrusion (HME), aimed at improving bioavailability, reducing therapeutic dosages and side effects, facilitating the design of personalized ophthalmic dosage forms, as well as enhancing patient compliance. This comprehensive review lastly offers insights into digital healthcare, market trends, and industry and regulatory perspectives pertaining to ocular product development.
Affiliation(s)
- Bhupendra Raj Giri
  - Division of Molecular Pharmaceutics and Drug Delivery, College of Pharmacy, The University of Texas at Austin, Austin, TX 78712, USA
- Deeksha Jakka
  - School of Pharmacy, The University of Mississippi, University, MS 38677, USA
- Michael A. Sandoval
  - Division of Molecular Pharmaceutics and Drug Delivery, College of Pharmacy, The University of Texas at Austin, Austin, TX 78712, USA
- Vineet R. Kulkarni
  - Division of Molecular Pharmaceutics and Drug Delivery, College of Pharmacy, The University of Texas at Austin, Austin, TX 78712, USA
- Quanying Bao
  - Synthetic Product Development, Alexion, AstraZeneca Rare Disease, 101 College Street, New Haven, CT 06510, USA
7
Alsaykhan LK, Maashi MS. A hybrid detection model for acute lymphocytic leukemia using support vector machine and particle swarm optimization (SVM-PSO). Sci Rep 2024; 14:23483. [PMID: 39379598] [PMCID: PMC11461623] [DOI: 10.1038/s41598-024-74889-1]
Abstract
Leukemia, a hematological disease affecting the bone marrow and white blood cells (WBCs), ranks among the top ten causes of mortality worldwide. Delays in decision-making often hinder the timely application of suitable medical treatments. Acute lymphoblastic leukemia (ALL) is one of the primary forms, constituting approximately 25% of childhood cancer cases, yet automated ALL diagnosis remains challenging. Recently, machine learning (ML) has emerged as an important tool for building detection models. In this study, we present a hybrid detection model that combines support vector machine (SVM) and particle swarm optimization (PSO) approaches to automatically identify ALL. The SVM classifies the two-dimensional images, while PSO tunes the SVM to reduce error rates and improve accuracy. The input images are obtained from two public online datasets (ALL-IDB1 and ALL-IDB2), which are used for training and testing the proposed model. The results indicate that our hybrid SVM-PSO model outperforms stand-alone algorithms, demonstrating higher accuracy, an improved confusion matrix, and a higher detection rate. This advancement holds promise for enhancing the quality of ML-based diagnostic software in the medical field.
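The SVM-PSO combination described in the abstract amounts to a swarm search over the SVM's hyperparameters, with cross-validated accuracy as the fitness function. A minimal sketch under stated assumptions: synthetic features stand in for the ALL-IDB image features, and the small hand-rolled PSO loop (inertia 0.7, cognitive/social weights 1.5, search over log10(C) and log10(gamma)) is a generic illustration, not the authors' implementation.

```python
# Sketch of SVM-PSO: particle swarm optimization searches the SVM's
# (C, gamma) hyperparameters to maximize cross-validated accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def fitness(position):
    C, gamma = 10.0 ** position  # particles move in log10 space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

n_particles, n_iters = 6, 5
pos = rng.uniform(-2, 2, size=(n_particles, 2))  # [log10(C), log10(gamma)]
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    # Velocity update: inertia + pull toward personal and global bests.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -2, 2)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

best_acc = pbest_fit.max()  # best cross-validated accuracy found
```

The design choice PSO buys over grid search is that particles share information (the global best pulls the swarm), so good hyperparameter regions are explored more densely without enumerating the whole grid.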
Affiliation(s)
- Lama K Alsaykhan
  - Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, 11451, Kingdom of Saudi Arabia
- Mashael S Maashi
  - Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, 11451, Kingdom of Saudi Arabia
8
Labib KM, Ghumman H, Jain S, Jarstad JS. A Review of the Utility and Limitations of Artificial Intelligence in Retinal Disorders and Pediatric Ophthalmology. Cureus 2024; 16:e71063. [PMID: 39380780] [PMCID: PMC11459419] [DOI: 10.7759/cureus.71063]
Abstract
Artificial intelligence (AI) is reshaping ophthalmology by enhancing diagnostic precision and treatment strategies, particularly in retinal disorders and pediatric ophthalmology. This review examines AI's efficacy in diagnosing conditions such as diabetic retinopathy (DR) and age-related macular degeneration (AMD) using imaging techniques, such as optical coherence tomography (OCT) and fundus photography. AI also shows promise in pediatric care, aiding in the screening of retinopathy of prematurity (ROP) and the management of conditions, including pediatric cataracts and strabismus. However, the integration of AI in ophthalmology presents challenges, including ethical concerns regarding algorithm biases, privacy issues, and limitations in data set quality. Addressing these challenges is crucial to ensure AI's responsible and effective deployment in clinical settings. This review synthesizes current research, underscoring AI's transformative potential in ophthalmology while highlighting critical considerations for its ethical use and technological advancement.
Affiliation(s)
- Kristie M Labib
  - Department of Ophthalmology, University of South Florida Health Morsani College of Medicine, Tampa, USA
- Haider Ghumman
  - Department of Ophthalmology, University of South Florida Health Morsani College of Medicine, Tampa, USA
- Samyak Jain
  - Department of Ophthalmology, University of South Florida Health Morsani College of Medicine, Tampa, USA
- John S Jarstad
  - Department of Ophthalmology, University of South Florida Health Morsani College of Medicine, Tampa, USA
9
Sonmez SC, Sevgi M, Antaki F, Huemer J, Keane PA. Generative artificial intelligence in ophthalmology: current innovations, future applications and challenges. Br J Ophthalmol 2024; 108:1335-1340. [PMID: 38925907] [PMCID: PMC11503064] [DOI: 10.1136/bjo-2024-325458]
Abstract
The rapid advancements in generative artificial intelligence are set to significantly influence the medical sector, particularly ophthalmology. Generative adversarial networks and diffusion models enable the creation of synthetic images, aiding the development of deep learning models tailored for specific imaging tasks. Additionally, the advent of multimodal foundational models, capable of generating images, text and videos, presents a broad spectrum of applications within ophthalmology. These range from enhancing diagnostic accuracy to improving patient education and training healthcare professionals. Despite the promising potential, this area of technology is still in its infancy, and there are several challenges to be addressed, including data bias, safety concerns and the practical implementation of these technologies in clinical settings.
Affiliation(s)
- Mertcan Sevgi
  - Institute of Ophthalmology, University College London, London, UK
  - Moorfields Eye Hospital, NIHR Moorfields Biomedical Research Centre, London, UK
- Fares Antaki
  - Institute of Ophthalmology, University College London, London, UK
  - Moorfields Eye Hospital, NIHR Moorfields Biomedical Research Centre, London, UK
  - The CHUM School of Artificial Intelligence in Healthcare, Montreal, Quebec, Canada
- Josef Huemer
  - Moorfields Eye Hospital, NIHR Moorfields Biomedical Research Centre, London, UK
  - Department of Ophthalmology and Optometry, Kepler University Hospital, Linz, Austria
- Pearse A Keane
  - Institute of Ophthalmology, University College London, London, UK
  - Moorfields Eye Hospital, NIHR Moorfields Biomedical Research Centre, London, UK
10
Prada AM, Quintero F, Mendoza K, Galvis V, Tello A, Romero LA, Marrugo AG. Assessing Fuchs Corneal Endothelial Dystrophy Using Artificial Intelligence-Derived Morphometric Parameters From Specular Microscopy Images. Cornea 2024; 43:1080-1087. [PMID: 38334475] [PMCID: PMC11296282] [DOI: 10.1097/ico.0000000000003460]
Abstract
PURPOSE The aim of this study was to evaluate the efficacy of artificial intelligence-derived morphometric parameters in characterizing Fuchs corneal endothelial dystrophy (FECD) from specular microscopy images. METHODS This cross-sectional study recruited patients diagnosed with FECD, who underwent ophthalmologic evaluations, including slit-lamp examinations and corneal endothelial assessments using specular microscopy. The modified Krachmer grading scale was used for clinical FECD classification. The images were processed using a convolutional neural network (CNN) for segmentation and morphometric parameter estimation, including effective endothelial cell density, guttae area ratio, coefficient of variation of size, and hexagonality. A mixed-effects model was used to assess relationships between the FECD clinical classification and measured parameters. RESULTS Of 52 patients (104 eyes) recruited, 76 eyes were analyzed because of the exclusion of 26 eyes for poor-quality retroillumination photographs. The study revealed significant discrepancies between artificial intelligence-based and built-in microscope software cell density measurements (1322 ± 489 cells/mm² vs. 2216 ± 509 cells/mm², P < 0.001). In the central region, guttae area ratio showed the strongest correlation with modified Krachmer grades (0.60, P < 0.001). In peripheral areas, only guttae area ratio in the inferior region exhibited a marginally significant positive correlation (0.29, P < 0.05). CONCLUSIONS This study confirms the utility of CNNs for precise FECD evaluation through specular microscopy. Guttae area ratio emerges as a compelling morphometric parameter aligning closely with modified Krachmer clinical grading. These findings set the stage for future large-scale studies, with potential applications in the assessment of irreversible corneal edema risk after phacoemulsification in FECD patients, as well as in monitoring novel FECD therapies.
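Once a CNN has segmented the guttae, the guttae area ratio highlighted in this abstract reduces to the fraction of pixels labeled as guttae within the analyzed region. A minimal sketch on a toy binary mask (the segmentation network itself is out of scope here, and the `guttae_area_ratio` helper is an illustrative name, not the study's code):

```python
import numpy as np

def guttae_area_ratio(mask):
    """Fraction of the analyzed region occupied by guttae,
    given a binary segmentation mask (1 = guttae pixel)."""
    mask = np.asarray(mask, dtype=bool)
    return mask.sum() / mask.size

# Toy 4x4 mask standing in for a CNN-segmented specular image region.
toy = np.array([[0, 1, 0, 0],
                [0, 1, 1, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 1]])
ratio = guttae_area_ratio(toy)  # 4 guttae pixels of 16 -> 0.25
```

In practice the ratio would be computed per region (central, superior, inferior, and so on) so that region-wise correlations with Krachmer grade, like those reported above, can be examined.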
Affiliation(s)
- Angelica M. Prada
  - Centro Oftalmológico Virgilio Galvis, Floridablanca, Colombia
  - Fundación Oftalmológica de Santander FOSCAL, Floridablanca, Colombia
  - Facultad de Salud, Universidad Autónoma de Bucaramanga UNAB, Bucaramanga, Colombia
- Fernando Quintero
  - Facultad de Ingeniería, Universidad Tecnológica de Bolívar, Cartagena, Colombia
- Kevin Mendoza
  - Facultad de Ingeniería, Universidad Tecnológica de Bolívar, Cartagena, Colombia
- Virgilio Galvis
  - Centro Oftalmológico Virgilio Galvis, Floridablanca, Colombia
  - Fundación Oftalmológica de Santander FOSCAL, Floridablanca, Colombia
  - Facultad de Salud, Universidad Autónoma de Bucaramanga UNAB, Bucaramanga, Colombia
- Alejandro Tello
  - Centro Oftalmológico Virgilio Galvis, Floridablanca, Colombia
  - Fundación Oftalmológica de Santander FOSCAL, Floridablanca, Colombia
  - Facultad de Salud, Universidad Autónoma de Bucaramanga UNAB, Bucaramanga, Colombia
  - Facultad de Salud, Universidad Industrial de Santander UIS, Bucaramanga, Colombia
- Lenny A. Romero
  - Facultad de Ciencias Básicas, Universidad Tecnológica de Bolívar, Cartagena, Colombia
- Andres G. Marrugo
  - Facultad de Ingeniería, Universidad Tecnológica de Bolívar, Cartagena, Colombia
| |
11
Mihalache A, Huang RS, Cruz-Pimentel M, Patil NS, Popovic MM, Pandya BU, Shor R, Pereira A, Muni RH. Artificial intelligence chatbot interpretation of ophthalmic multimodal imaging cases. Eye (Lond) 2024; 38:2491-2493. [PMID: 38649474 PMCID: PMC11383941 DOI: 10.1038/s41433-024-03074-5]
Affiliation(s)
- Andrew Mihalache: Temerty School of Medicine, University of Toronto, Toronto, ON, Canada
- Ryan S Huang: Temerty School of Medicine, University of Toronto, Toronto, ON, Canada
- Miguel Cruz-Pimentel: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, ON, Canada
- Nikhil S Patil: Michael G. DeGroote School of Medicine, McMaster University, Hamilton, ON, Canada
- Marko M Popovic: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, ON, Canada
- Bhadra U Pandya: Temerty School of Medicine, University of Toronto, Toronto, ON, Canada
- Reut Shor: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, ON, Canada
- Austin Pereira: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, ON, Canada
- Rajeev H Muni: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, ON, Canada; Department of Ophthalmology, St. Michael's Hospital/Unity Health Toronto, Toronto, ON, Canada
12
Li Z, Wang L, Qiang W, Chen K, Wang Z, Zhang Y, Xie H, Wu S, Jiang J, Chen W. DeepMonitoring: a deep learning-based monitoring system for assessing the quality of cornea images captured by smartphones. Front Cell Dev Biol 2024; 12:1447067. [PMID: 39258227 PMCID: PMC11385315 DOI: 10.3389/fcell.2024.1447067]
Abstract
Smartphone-based artificial intelligence (AI) diagnostic systems could allow high-risk patients to self-screen for corneal diseases (e.g., keratitis) at an early stage, rather than waiting for detection during traditional face-to-face medical visits. However, AI diagnostic systems perform markedly worse on low-quality images, which are unavoidable in real-world environments (and especially common in patient-recorded images), hindering their implementation in clinical practice. Here, we construct a deep learning-based image quality monitoring system (DeepMonitoring) that not only discerns low-quality cornea images captured by smartphones but also identifies the underlying factors contributing to such low-quality images, guiding operators to acquire high-quality images in a timely manner. The system performs well across the validation, internal, and external testing sets, with AUCs ranging from 0.984 to 0.999. DeepMonitoring holds the potential to filter out low-quality cornea images produced by smartphones, facilitating the application of smartphone-based AI diagnostic systems in real-world clinical settings, especially in the context of self-screening for corneal diseases.
Affiliation(s)
- Zhongwen Li: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Lei Wang: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Wei Qiang: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- Kuan Chen: Cangnan Hospital, Wenzhou Medical University, Wenzhou, China
- Zhouqian Wang: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Yi Zhang: School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
- He Xie: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Shanjun Wu: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- Jiewei Jiang: School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
- Wei Chen: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
13
Hoffmann L, Runkel CB, Künzel S, Kabiri P, Rübsam A, Bonaventura T, Marquardt P, Haas V, Biniaminov N, Biniaminov S, Joussen AM, Zeitz O. Using Deep Learning to Distinguish Highly Malignant Uveal Melanoma from Benign Choroidal Nevi. J Clin Med 2024; 13:4141. [PMID: 39064181 PMCID: PMC11277885 DOI: 10.3390/jcm13144141]
Abstract
Background: This study aimed to evaluate the potential of human-machine interaction (HMI) in deep learning software for discerning the malignancy of choroidal melanocytic lesions based on fundus photographs. Methods: The study enrolled individuals diagnosed with a choroidal melanocytic lesion at a tertiary clinic between 2011 and 2023, yielding a cohort of 762 eligible cases. A deep learning-based assistant integrated into the software was trained on a dataset of 762 color fundus photographs (CFPs) of choroidal lesions captured by various fundus cameras, categorized into benign nevi, untreated choroidal melanomas, and irradiated choroidal melanomas. The reference standard for evaluation was established by retinal specialists using multimodal imaging. Trinary and binary models were trained, and their classification performance was evaluated on a test set of 100 independent images based on accuracy, recall, and specificity. Results: The final accuracy rates on the independent test set for multi-class and binary (benign vs. malignant) classification were 84.8% and 90.9%, respectively. Recall and specificity ranged from 0.85 to 0.90 and from 0.91 to 0.92, respectively. The mean area under the curve (AUC) values were 0.96 and 0.99, respectively. Optimal discriminative performance was observed for binary classification incorporating a single imaging modality, with an accuracy of 95.8%. Conclusions: The deep learning models demonstrated commendable performance in distinguishing the malignancy of choroidal lesions, and the software exhibits promise for resource-efficient and cost-effective pre-stratification.
Affiliation(s)
- Laura Hoffmann: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Constance B. Runkel: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Steffen Künzel: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Payam Kabiri: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Anne Rübsam: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Theresa Bonaventura: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Antonia M. Joussen: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Oliver Zeitz: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
14
Muhsin ZJ, Qahwaji R, AlShawabkeh M, AlRyalat SA, Al Bdour M, Al-Taee M. Smart decision support system for keratoconus severity staging using corneal curvature and thinnest pachymetry indices. Eye Vis (Lond) 2024; 11:28. [PMID: 38978067 PMCID: PMC11229244 DOI: 10.1186/s40662-024-00394-1]
Abstract
BACKGROUND This study proposes a decision support system, created in collaboration with machine learning experts and ophthalmologists, for detecting keratoconus (KC) severity. The system employs an ensemble machine learning model and minimal corneal measurements. METHODS A clinical dataset obtained from Pentacam corneal tomography imaging devices is first pre-processed, with imbalanced sampling addressed by oversampling the minority classes. Subsequently, a combination of statistical methods, visual analysis, and expert input is employed to identify the Pentacam indices most correlated with the severity class labels. These selected features are then used to develop and validate three distinct machine learning models. The model with the most effective classification performance is integrated into a real-world web-based application and deployed on a web application server. This deployment enables evaluation of the proposed system with new data, while considering human factors relevant to the user experience. RESULTS The performance of the developed system was evaluated experimentally, revealing an overall accuracy of 98.62%, precision of 98.70%, recall of 98.62%, F1-score of 98.66%, and F2-score of 98.64%. The deployed application also demonstrated precise and smooth end-to-end functionality. CONCLUSION The developed decision support system establishes a robust basis for subsequent assessment by ophthalmologists before potential deployment as a screening tool for keratoconus severity detection in a clinical setting.
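The abstract does not specify which oversampling technique was used; as a stand-in, plain random oversampling illustrates the idea of equalizing class counts before training (the data values and class names below are hypothetical):

```python
import random

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class samples at random until every class matches
    the majority class count (a simple stand-in for the unspecified
    oversampling technique; SMOTE-style interpolation is a common alternative)."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        picked = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picked)
        out_y.extend([y] * target)
    return out_x, out_y

# Hypothetical one-feature samples (e.g., a corneal curvature index) with
# an imbalanced severity label.
X = [[47.2], [48.1], [52.6], [55.0], [58.3], [61.9]]
y = ["normal", "normal", "normal", "normal", "severe", "severe"]
Xb, yb = random_oversample(X, y)
print(yb.count("normal"), yb.count("severe"))  # both classes now equal in size
```

Oversampling is applied only to the training split; evaluating on oversampled data would inflate the reported metrics.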
Affiliation(s)
- Zahra J Muhsin: Department of Computer Science, University of Bradford, Bradford, BD7 1DP, UK
- Rami Qahwaji: Department of Computer Science, University of Bradford, Bradford, BD7 1DP, UK
- Muawyah Al Bdour: School of Medicine, The University of Jordan, Amman, 11942, Jordan
- Majid Al-Taee: Department of Computer Science, University of Bradford, Bradford, BD7 1DP, UK
15
Wu JH, Lin S, Moghimi S. Application of artificial intelligence in glaucoma care: An updated review. Taiwan J Ophthalmol 2024; 14:340-351. [PMID: 39430354 PMCID: PMC11488804 DOI: 10.4103/tjo.tjo-d-24-00044]
Abstract
The application of artificial intelligence (AI) in ophthalmology has been increasingly explored in the past decade. Numerous studies have shown promising results supporting the utility of AI in improving the management of ophthalmic diseases, and glaucoma is no exception. Glaucoma causes irreversible vision loss and is characterized by insidious onset, complex pathophysiology, and the need for chronic treatment. Since various challenges remain in the clinical management of glaucoma, the potential role of AI in facilitating glaucoma care has garnered significant attention. In this study, we reviewed the relevant literature published in recent years on the application of AI in glaucoma management. The main aspects discussed include glaucoma risk prediction, glaucoma detection and diagnosis, visual field estimation and pattern analysis, glaucoma progression detection, and other applications.
Affiliation(s)
- Jo-Hsuan Wu: Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, California; Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York
- Shan Lin: Glaucoma Center of San Francisco, San Francisco, CA, United States
- Sasan Moghimi: Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, California
16
Alshutayli AAM, Asiri FM, Abutaleb YBA, Alomair BA, Almasaud AK, Almaqhawi A. Assessing Public Knowledge and Acceptance of Using Artificial Intelligence Doctors as a Partial Alternative to Human Doctors in Saudi Arabia: A Cross-Sectional Study. Cureus 2024; 16:e64461. [PMID: 39135842 PMCID: PMC11318498 DOI: 10.7759/cureus.64461]
Abstract
Objective To assess the public acceptance of using artificial intelligence (AI) doctors to diagnose and treat patients as a partial alternative to human physicians in Saudi Arabia. Methodology An observational cross-sectional study was conducted from January to March 2024. A link to an online questionnaire was distributed through social media applications to citizens and residents aged 18 years and older across various regions in Saudi Arabia. The sample size was calculated using the Raosoft online survey size calculator, which estimated that the minimum sample size should be 385. Results Of the 386 participants surveyed, 85.8% reported being aware of AI, and 47.9% reported having some knowledge about different AI fields in daily life. However, almost one-third (32.9%) reported a lack of knowledge about the use of AI in healthcare. In terms of acceptance, 52.3% of respondents indicated they felt comfortable with the use of AI tools as partial alternatives to human doctors, and 30.8% believed AI is useful in the field of health. The most common concern (63.7%) about the use of AI tools accessible to patients was the difficulty of describing symptoms using these tools. Conclusion The findings of this study provide valuable insights into the public's knowledge and acceptance of AI in medicine within the Saudi Arabian context. Overall, this study underscores the importance of proactively addressing the public's concerns and knowledge gaps regarding AI in healthcare. By fostering greater understanding and acceptance, healthcare stakeholders can better harness the potential of AI to improve patient outcomes and enhance the efficiency of medical services in Saudi Arabia.
Affiliation(s)
- Faisal M Asiri: College of Medicine, Prince Sattam Bin Abdulaziz University, Al-Kharj, SAU
17
Cohen SA, Fisher AC, Xu BY, Song BJ. Comparing the Accuracy and Readability of Glaucoma-related Question Responses and Educational Materials by Google and ChatGPT. J Curr Glaucoma Pract 2024; 18:110-116. [PMID: 39575130 PMCID: PMC11576343 DOI: 10.5005/jp-journals-10078-1448]
Abstract
Aim and background Patients are increasingly turning to the internet to learn more about their ocular disease. In this study, we sought (1) to compare the accuracy and readability of Google and ChatGPT responses to patients' glaucoma-related frequently asked questions (FAQs) and (2) to evaluate ChatGPT's capacity to improve glaucoma patient education materials by accurately reducing the grade level at which they are written. Materials and methods We executed a Google search to identify the three most common FAQs related to 10 search terms associated with glaucoma diagnosis and treatment. Each of the 30 FAQs was inputted into both Google and ChatGPT and responses were recorded. The accuracy of responses was evaluated by three glaucoma specialists while readability was assessed using five validated readability indices. Subsequently, ChatGPT was instructed to generate patient education materials at specific reading levels to explain seven glaucoma procedures. The accuracy and readability of procedural explanations were measured. Results ChatGPT responses to glaucoma FAQs were significantly more accurate than Google responses (97 vs 77% accuracy, respectively, p < 0.001). ChatGPT responses were also written at a significantly higher reading level (grade 14.3 vs 9.4, respectively, p < 0.001). When instructed to revise glaucoma procedural explanations to improve understandability, ChatGPT reduced the average reading level of educational materials from grade 16.6 (college level) to grade 9.4 (high school level) (p < 0.001) without reducing the accuracy of procedural explanations. Conclusion ChatGPT is more accurate than Google search when responding to glaucoma patient FAQs. ChatGPT successfully reduced the reading level of glaucoma procedural explanations without sacrificing accuracy, with implications for the future of customized patient education for patients with varying health literacy. 
Clinical significance Our study demonstrates the utility of ChatGPT both for patients seeking information about glaucoma and for physicians creating patient education materials at reading levels that optimize understanding. An enhanced patient understanding of glaucoma may lead to more informed decision-making and improved treatment compliance.
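The study graded responses with five validated readability indices; as an illustration of how such grade levels are computed, here is a sketch of one widely used index, the Flesch-Kincaid grade level, using a crude vowel-group syllable heuristic (the formula constants are from the published index; the example sentences are hypothetical, not study data):

```python
import re

def count_syllables(word):
    # Crude heuristic: count vowel groups; drop a likely-silent trailing 'e'.
    word = word.lower()
    if word.endswith("e") and not word.endswith(("le", "ee")):
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def fk_grade(text):
    """Flesch-Kincaid grade level: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

simple = "Glaucoma can harm your sight. Eye drops can help. See your doctor."
dense = ("Glaucomatous optic neuropathy necessitates longitudinal perimetric "
         "surveillance and individualized pharmacotherapeutic escalation.")
print(fk_grade(simple) < fk_grade(dense))  # the dense passage scores a higher grade
```

Production readability tools use dictionary-based syllable counts, so exact grade values will differ from this heuristic, but the ordering of easy versus dense text is preserved.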
Affiliation(s)
- Samuel A Cohen: Department of Ophthalmology, UCLA Stein Eye Institute, Los Angeles, California, United States
- Ann C Fisher: Department of Ophthalmology, Byers Eye Institute at Stanford, Stanford, California, United States
- Benjamin Y Xu: Department of Ophthalmology, USC Roski Eye Institute, Los Angeles, California, United States
- Brian J Song: Department of Ophthalmology, USC Roski Eye Institute, Los Angeles, California, United States
18
Goodman D, Zhu AY. Utility of artificial intelligence in the diagnosis and management of keratoconus: a systematic review. Front Ophthalmol 2024; 4:1380701. [PMID: 38984114 PMCID: PMC11182163 DOI: 10.3389/fopht.2024.1380701]
Abstract
Introduction The application of artificial intelligence (AI) systems in ophthalmology is rapidly expanding. Early detection and management of keratoconus are important for preventing disease progression and the need for corneal transplant. We review studies on the utility of AI in the diagnosis and management of keratoconus and other corneal ectasias. Methods We conducted a systematic search for relevant original, English-language research studies in the PubMed, Web of Science, Embase, and Cochrane databases from inception to October 31, 2023, using a combination of the following keywords: artificial intelligence, deep learning, machine learning, keratoconus, and corneal ectasia. Case reports, literature reviews, conference proceedings, and editorials were excluded. We extracted the following data from each eligible study: type of AI, input used for training, output, ground truth or reference, dataset size, availability of the algorithm/model, availability of the dataset, and major study findings. Results Ninety-three original research studies were included in this review, with publication dates ranging from 1994 to 2023. The majority of studies concerned the use of AI in detecting keratoconus or subclinical keratoconus (n=61). Among studies on keratoconus diagnosis, the most common inputs were corneal topography, Scheimpflug-based corneal tomography, and anterior segment optical coherence tomography. This review also summarizes 16 original research studies on AI-based assessment of severity and clinical features, 7 studies on the prediction of disease progression, and 6 studies on the characterization of treatment response. Only three studies addressed the use of AI in identifying susceptibility genes involved in the etiology and pathogenesis of keratoconus.
Discussion Algorithms trained on Scheimpflug-based tomography appear to be promising tools for the early diagnosis of keratoconus and could be particularly useful in low-resource communities. Future studies could investigate the application of AI models trained on multimodal patient information for staging keratoconus severity and tracking disease progression.
19
Zago Ribeiro L, Nakayama LF, Malerbi FK, Regatieri CVS. Automated machine learning model for fundus image classification by health-care professionals with no coding experience. Sci Rep 2024; 14:10395. [PMID: 38710726 PMCID: PMC11074250 DOI: 10.1038/s41598-024-60807-y]
Abstract
This study assessed the feasibility of code-free deep learning (CFDL) platforms for predicting binary outcomes from fundus images in ophthalmology, evaluating two distinct online platforms (Google Vertex and Amazon Rekognition) and two distinct datasets. Two publicly available datasets, Messidor-2 and BRSET, were utilized for model development; Messidor-2 consists of fundus photographs from diabetic patients, and BRSET is a multi-label dataset. The CFDL platforms were used to create deep learning models, with no preprocessing of the images, by a single ophthalmologist without coding expertise. The models were evaluated using F1 score, area under the curve (AUC), precision, and recall. The performance metrics for referable diabetic retinopathy and macular edema were above 0.9 for both tasks on both CFDL platforms. The Google Vertex models demonstrated superior performance compared with the Amazon models, with the BRSET dataset achieving the highest performance (AUC of 0.994). Multi-classification tasks using only BRSET achieved similar overall performance between platforms, with Google Vertex reaching AUCs of 0.994 for laterality, 0.942 for age grouping, 0.779 for genetic sex identification, 0.857 for optic, and 0.837 for normality. The study demonstrates the feasibility of using automated machine learning platforms to predict binary outcomes from fundus images in ophthalmology. It highlights the high accuracy achieved by the models in some tasks and the potential of CFDL as an entry-friendly platform for ophthalmologists to familiarize themselves with machine learning concepts.
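The F1, precision, and recall metrics reported by these platforms follow from the standard confusion-matrix definitions; a minimal sketch with hypothetical labels (1 = referable disease):

```python
def binary_metrics(y_true, y_pred, positive=1):
    """Precision, recall, and F1 from paired labels, no libraries required."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical ground-truth labels (1 = referable) and model predictions.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
p, r, f1 = binary_metrics(y_true, y_pred)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.75 0.75 0.75
```

On imbalanced fundus datasets, these metrics are more informative than raw accuracy, which is why CFDL platforms surface them by default.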
Affiliation(s)
- Lucas Zago Ribeiro: Department of Ophthalmology and Visual Sciences, Federal University of São Paulo, São Paulo, SP, Brazil
- Luis Filipe Nakayama: Department of Ophthalmology and Visual Sciences, Federal University of São Paulo, São Paulo, SP, Brazil; Massachusetts Institute of Technology, Institute for Medical Engineering and Science, Cambridge, MA, USA
- Fernando Korn Malerbi: Department of Ophthalmology and Visual Sciences, Federal University of São Paulo, São Paulo, SP, Brazil
20
Tan YY, Kang HG, Lee CJ, Kim SS, Park S, Thakur S, Da Soh Z, Cho Y, Peng Q, Lee K, Tham YC, Rim TH, Cheng CY. Prognostic potentials of AI in ophthalmology: systemic disease forecasting via retinal imaging. Eye Vis (Lond) 2024; 11:17. [PMID: 38711111 PMCID: PMC11071258 DOI: 10.1186/s40662-024-00384-3]
Abstract
BACKGROUND Artificial intelligence (AI) that utilizes deep learning (DL) has potential for systemic disease prediction using retinal imaging. The retina's unique features enable non-invasive visualization of the central nervous system and microvascular circulation, aiding early detection and the development of personalized treatment plans. This review explores the value of retinal assessment, AI-based retinal biomarkers, and the importance of longitudinal prediction models in personalized care. MAIN TEXT This narrative review extensively surveys the literature in PubMed and Google Scholar for relevant studies investigating the application of AI-based retinal biomarkers in predicting systemic diseases from retinal fundus photography. The study settings, sample sizes, AI models used, and corresponding results were extracted and analysed. The review highlights the substantial potential of AI-based retinal biomarkers in predicting neurodegenerative, cardiovascular, and chronic kidney diseases. Notably, DL algorithms have demonstrated effectiveness in identifying retinal image features associated with cognitive decline, dementia, Parkinson's disease, and cardiovascular risk factors. Furthermore, longitudinal prediction models leveraging retinal images have shown potential for continuous disease risk assessment and early detection. AI-based retinal biomarkers are non-invasive, accurate, and efficient for disease forecasting and personalized care. CONCLUSION AI-based retinal imaging holds promise for transforming primary care and systemic disease management. Together, the retina's unique features and the power of AI enable early detection and risk stratification, and could help revolutionize disease management. However, to fully realize the potential of AI in this domain, further research and validation in real-world settings are essential.
Affiliation(s)
- Hyun Goo Kang: Division of Retina, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Chan Joo Lee: Division of Cardiology, Severance Cardiovascular Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Sung Soo Kim: Division of Retina, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Sungha Park: Division of Cardiology, Severance Cardiovascular Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Sahil Thakur: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Zhi Da Soh: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Yunnie Cho: Mediwhale Inc, Seoul, Republic of Korea; Department of Education and Human Resource Development, Seoul National University Hospital, Seoul, South Korea
- Qingsheng Peng: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Kwanghyun Lee: Department of Ophthalmology, National Health Insurance Service Ilsan Hospital, Goyang, Republic of Korea
- Yih-Chung Tham: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Singapore, Singapore
- Tyler Hyungtaek Rim: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; Mediwhale Inc, Seoul, Republic of Korea
- Ching-Yu Cheng: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Singapore, Singapore
21
Tey KY, Cheong EZK, Ang M. Potential applications of artificial intelligence in image analysis in cornea diseases: a review. Eye Vis (Lond) 2024; 11:10. [PMID: 38448961 PMCID: PMC10919022 DOI: 10.1186/s40662-024-00376-3]
Abstract
Artificial intelligence (AI) is an emerging field that could make an intelligent healthcare model a reality, and it has been garnering traction in medicine with promising results. There have been recent developments in machine learning and deep learning algorithms for applications in ophthalmology, primarily for diabetic retinopathy and age-related macular degeneration. However, AI research in the field of cornea diseases is relatively new. Algorithms have been described to assist clinicians in the diagnosis or detection of cornea conditions such as keratoconus, infectious keratitis, and dry eye disease. AI may also be used for segmentation and analysis of cornea imaging or tomography as an adjunctive tool. Despite the potential advantages these new technologies offer, challenges need to be addressed before they can be integrated into clinical practice. In this review, we summarize the current literature and provide an update on recent advances in AI technologies pertaining to corneal diseases and their potential future applications, in particular image analysis.
Affiliation(s)
- Kai Yuan Tey: Singapore National Eye Centre, 11 Third Hospital Ave, Singapore, 168751, Singapore; Singapore Eye Research Institute, Singapore, Singapore
- Marcus Ang: Singapore National Eye Centre, 11 Third Hospital Ave, Singapore, 168751, Singapore; Singapore Eye Research Institute, Singapore, Singapore; Duke-NUS Medical School, Singapore, Singapore
22
Abascal Azanza C, Barrio-Barrio J, Ramos Cejudo J, Ybarra Arróspide B, Devoto MH. Development and validation of a convolutional neural network to identify blepharoptosis. Sci Rep 2023; 13:17585. [PMID: 37845333 PMCID: PMC10579403 DOI: 10.1038/s41598-023-44686-3]
Abstract
Blepharoptosis is a recognized cause of reversible vision loss and a non-specific indicator of neurological issues, occasionally heralding life-threatening conditions. Currently, diagnosis relies on human expertise and eyelid examination, with most existing artificial intelligence algorithms focusing on eyelid positioning under specialized settings. This study introduces a deep learning model with convolutional neural networks to detect blepharoptosis under more realistic conditions. Our model was trained and tested using high-quality periocular images from patients with blepharoptosis as well as those with other eyelid conditions. The model achieved an area under the receiver operating characteristic curve of 0.918. For validation, we compared the model's performance against that of nine medical experts (oculoplastic surgeons, general ophthalmologists, and general practitioners) with varied expertise. When tested on a new dataset with varied image quality, the model's performance remained statistically comparable to that of the human graders. Our findings underscore the model's potential to enhance telemedicine services for blepharoptosis detection.
Affiliation(s)
- Cristina Abascal Azanza
- Department of Ophthalmology, Navarra Institute for Health Research (IdiSNA), Clínica Universidad de Navarra, Av. de Pío XII, 36, 31008, Pamplona, Navarra, Spain
- Jesús Barrio-Barrio
- Department of Ophthalmology, Navarra Institute for Health Research (IdiSNA), Clínica Universidad de Navarra, Av. de Pío XII, 36, 31008, Pamplona, Navarra, Spain
- Faculty of Medicine, Universidad de Navarra, Pamplona, Spain
23
Madadi Y, Delsoz M, Lao PA, Fong JW, Hollingsworth TJ, Kahook MY, Yousefi S. ChatGPT Assisting Diagnosis of Neuro-ophthalmology Diseases Based on Case Reports. medRxiv [Preprint] 2023:2023.09.13.23295508. [PMID: 37781591 PMCID: PMC10540811 DOI: 10.1101/2023.09.13.23295508] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 10/03/2023]
Abstract
Purpose: To evaluate the efficiency of large language models (LLMs), including ChatGPT, in assisting the diagnosis of neuro-ophthalmic diseases based on case reports. Design: Prospective study. Subjects or Participants: We selected 22 different case reports of neuro-ophthalmic diseases from a publicly available online database. These cases included a wide range of chronic and acute diseases that are commonly seen by neuro-ophthalmic sub-specialists. Methods: We inserted the text from each case as a new prompt into both ChatGPT v3.5 and ChatGPT Plus v4.0 and asked for the most probable diagnosis. We then presented the same information to two neuro-ophthalmologists and recorded their diagnoses, followed by comparison with the responses from both versions of ChatGPT. Main Outcome Measures: Diagnostic accuracy, measured as the number of correctly diagnosed cases. Results: ChatGPT v3.5, ChatGPT Plus v4.0, the first neuro-ophthalmologist, and the second neuro-ophthalmologist were correct in 13 (59%), 18 (82%), 19 (86%), and 19 (86%) of 22 cases, respectively. The agreement between the various diagnostic sources was as follows: ChatGPT v3.5 and ChatGPT Plus v4.0, 13 (59%); ChatGPT v3.5 and the first neuro-ophthalmologist, 12 (55%); ChatGPT v3.5 and the second neuro-ophthalmologist, 12 (55%); ChatGPT Plus v4.0 and the first neuro-ophthalmologist, 17 (77%); ChatGPT Plus v4.0 and the second neuro-ophthalmologist, 16 (73%); and the first and second neuro-ophthalmologists, 17 (77%). Conclusions: The accuracy of ChatGPT v3.5 and ChatGPT Plus v4.0 in diagnosing patients with neuro-ophthalmic diseases was 59% and 82%, respectively. With further development, ChatGPT Plus v4.0 may have the potential to be used in clinical care settings to assist clinicians in providing quick, accurate diagnoses of patients in neuro-ophthalmology. The applicability of LLMs like ChatGPT in clinical settings that lack access to subspecialty-trained neuro-ophthalmologists deserves further research.
Affiliation(s)
- Yeganeh Madadi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Mohammad Delsoz
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Priscilla A. Lao
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Joseph W. Fong
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- TJ Hollingsworth
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Malik Y. Kahook
- Department of Ophthalmology, University of Colorado School of Medicine, Aurora, CO, USA
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, USA