1
Nisanova A, Yavary A, Deaner J, Ali FS, Gogte P, Kaplan R, Chen KC, Nudleman E, Grewal D, Gupta M, Wolfe J, Klufas M, Yiu G, Soltani I, Emami-Naeini P. Performance of Automated Machine Learning in Predicting Outcomes of Pneumatic Retinopexy. Ophthalmology Science 2024;4:100470. PMID: 38827487; PMCID: PMC11141253; DOI: 10.1016/j.xops.2024.100470.
Abstract
Purpose Automated machine learning (AutoML) has emerged as a novel tool for medical professionals who lack coding experience, enabling them to develop predictive models for treatment outcomes. This study evaluated the performance of AutoML tools in developing models that predict the success of pneumatic retinopexy (PR) in the treatment of rhegmatogenous retinal detachment (RRD). These models were then compared with custom models created by machine learning (ML) experts. Design Retrospective multicenter study. Participants Five hundred thirty-nine consecutive patients with primary RRD who underwent PR by a vitreoretinal fellow at 6 training hospitals between 2002 and 2022. Methods We used 2 AutoML platforms: MATLAB Classification Learner and Google Cloud AutoML. Additional models were developed by computer scientists. We included patient demographics and baseline characteristics, including lens and macula status, RRD size, number and location of breaks, presence of vitreous hemorrhage and lattice degeneration, and physicians' experience. The dataset was split into a training set (n = 483) and a test set (n = 56). The training set, with a 2:1 success-to-failure ratio, was used to train the MATLAB models. Because Google Cloud AutoML requires a minimum of 1000 samples, the training set was tripled to create a new set with 1449 datapoints. Additionally, balanced datasets with a 1:1 success-to-failure ratio were created using Python. Main Outcome Measures Single-procedure anatomic success rate, as predicted by the ML models. F2 scores and area under the receiver operating characteristic curve (AUROC) were used as the primary metrics for comparing models. Results The best-performing AutoML model (F2 score: 0.85; AUROC: 0.90; MATLAB) showed performance comparable to the custom model (0.92, 0.86) when trained on the balanced datasets. However, training the AutoML model with imbalanced data yielded a misleadingly high AUROC (0.81) despite a low F2 score (0.2) and sensitivity (0.17).
Conclusions We demonstrated the feasibility of using AutoML as an accessible tool for medical professionals to develop models from clinical data. Such models can ultimately aid clinical decision-making and contribute to better patient outcomes. However, results can be misleading or unreliable if the tools are used naively; limitations are particularly pronounced when datasets contain missing variables or are highly imbalanced. Proper model selection and data preprocessing can improve the reliability of AutoML tools. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
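The mismatch the authors report between a high AUROC and a low F2 score on imbalanced data is easy to reproduce with a few lines of Python. The confusion-matrix counts below are invented for illustration and are not taken from the study:

```python
def f_beta(tp, fp, fn, beta=2.0):
    # F-beta combines precision and recall, weighting recall beta^2 times as
    # heavily as precision; beta=2 gives the F2 score used in the study.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# A model that misses most failures: 2 of 12 true failures caught, 1 false alarm.
# Recall is 2/12 ≈ 0.17, so F2 collapses to ≈ 0.20, even though a
# threshold-independent metric like AUROC can still look respectable.
print(round(f_beta(tp=2, fp=1, fn=10), 2))  # → 0.2
```

Because F2 weights recall heavily, it exposes classifiers that simply predict the majority class, which is why the authors report it alongside AUROC.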
Affiliation(s)
- Arina Nisanova: School of Medicine, University of California Davis, Davis, California
- Arefeh Yavary: Department of Computer Science, University of California Davis, Davis, California
- Jordan Deaner: Mid Atlantic Retina, Wills Eye Hospital, Philadelphia, Pennsylvania
- Richard Kaplan: New York Eye and Ear Infirmary of Mount Sinai, New York, New York
- Eric Nudleman: Shiley Eye Center, University of California San Diego, La Jolla, California
- Meenakashi Gupta: New York Eye and Ear Infirmary of Mount Sinai, New York, New York
- Jeremy Wolfe: Associated Retinal Consultants, Royal Oak, Michigan
- Michael Klufas: Wills Eye Hospital, Thomas Jefferson University, Philadelphia, Pennsylvania
- Glenn Yiu: Tschannen Eye Institute, University of California Davis, Sacramento, California
- Iman Soltani: Department of Mechanical and Aerospace Engineering, University of California Davis, Davis, California
- Parisa Emami-Naeini: Tschannen Eye Institute, University of California Davis, Sacramento, California
2
Ran AR, Wang X, Chan PP, Wong MOM, Yuen H, Lam NM, Chan NCY, Yip WWK, Young AL, Yung HW, Chang RT, Mannil SS, Tham YC, Cheng CY, Wong TY, Pang CP, Heng PA, Tham CC, Cheung CY. Developing a privacy-preserving deep learning model for glaucoma detection: a multicentre study with federated learning. Br J Ophthalmol 2024;108:1114-1123. PMID: 37857452; DOI: 10.1136/bjo-2023-324188.
Abstract
BACKGROUND Deep learning (DL) shows promise for detecting glaucoma. However, patients' privacy and data security are major concerns when all data are pooled for model development. We developed a privacy-preserving DL model using the federated learning (FL) paradigm to detect glaucoma from optical coherence tomography (OCT) images. METHODS This is a multicentre study. The FL paradigm consisted of a 'central server' and seven eye centres in Hong Kong, the USA and Singapore. Each centre first trained a model locally with its own OCT optic disc volumetric dataset and then uploaded its model parameters to the central server. The central server used the FedProx algorithm to aggregate all centres' model parameters. Subsequently, the aggregated parameters were redistributed to each centre for local model optimisation. We experimented with three three-dimensional (3D) networks to evaluate the stability of the FL paradigm. Lastly, we tested the FL model on two prospectively collected unseen datasets. RESULTS We used 9326 volumetric OCT scans from 2785 subjects. The FL model performed consistently well with the different networks across the 7 centres (accuracies 78.3%-98.5%, 75.9%-97.0%, and 78.3%-97.5%, respectively) and remained stable on the 2 unseen datasets (accuracies 84.8%-87.7%, 81.3%-84.8%, and 86.0%-87.8%, respectively). The FL model achieved non-inferior performance in classifying glaucoma compared with the traditional model and significantly outperformed the individual models. CONCLUSION The 3D FL model could leverage all the datasets and achieve generalisable performance without data exchange across centres. This study demonstrated an OCT-based FL paradigm for glaucoma identification with ensured patient privacy and data security, charting another course toward the real-world transition of artificial intelligence in ophthalmology.
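The server-side aggregation step described above can be sketched in plain Python. This is a minimal illustration of the size-weighted parameter averaging shared by FedAvg and FedProx, plus FedProx's client-side proximal term; the function names, learning rate, and proximal coefficient `mu` are illustrative assumptions, not the study's implementation:

```python
def aggregate(client_params, client_sizes):
    # Server step: average each parameter across clients, weighting every
    # client by the size of its local dataset.
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(w[i] * n for w, n in zip(client_params, client_sizes)) / total
            for i in range(dim)]

def fedprox_local_step(w, w_global, grad, lr=0.1, mu=0.01):
    # Client step: plain gradient descent plus FedProx's proximal term
    # mu * (w - w_global), which keeps local updates from drifting too far
    # from the last aggregated (redistributed) model.
    return [wi - lr * (gi + mu * (wi - wgi))
            for wi, gi, wgi in zip(w, grad, w_global)]
```

A round then alternates: each centre runs `fedprox_local_step` on its own data, uploads its parameters, and the server calls `aggregate` and redistributes the result.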
Affiliation(s)
- An Ran Ran: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Xi Wang: Zhejiang Lab, Hangzhou, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California, USA
- Poemen P Chan: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China; Hong Kong Eye Hospital, Hong Kong SAR, China
- Hunter Yuen: Hong Kong Eye Hospital, Hong Kong SAR, China
- Nai Man Lam: Hong Kong Eye Hospital, Hong Kong SAR, China
- Noel C Y Chan: Ophthalmology and Visual Sciences, Prince of Wales Hospital, Hong Kong SAR, China; Ophthalmology and Visual Sciences, Alice Ho Miu Ling Nethersole Hospital, Hong Kong SAR, China
- Wilson W K Yip: Ophthalmology and Visual Sciences, Prince of Wales Hospital, Hong Kong SAR, China; Ophthalmology and Visual Sciences, Alice Ho Miu Ling Nethersole Hospital, Hong Kong SAR, China
- Alvin L Young: Ophthalmology and Visual Sciences, Prince of Wales Hospital, Hong Kong SAR, China; Ophthalmology and Visual Sciences, Alice Ho Miu Ling Nethersole Hospital, Hong Kong SAR, China
- Robert T Chang: Ophthalmology, Stanford University School of Medicine, Stanford, California, USA
- Suria S Mannil: Ophthalmology, Stanford University School of Medicine, Stanford, California, USA
- Yih-Chung Tham: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-National University of Singapore Medical School, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ching-Yu Cheng: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-National University of Singapore Medical School, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Tien Yin Wong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Tsinghua University, Beijing, China; School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China
- Chi Pui Pang: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Pheng-Ann Heng: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Clement C Tham: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China; Hong Kong Eye Hospital, Hong Kong SAR, China
- Carol Y Cheung: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
3
Gim N, Wu Y, Blazes M, Lee CS, Wang RK, Lee AY. A Clinician's Guide to Sharing Data for AI in Ophthalmology. Invest Ophthalmol Vis Sci 2024;65:21. PMID: 38864811; PMCID: PMC11174091; DOI: 10.1167/iovs.65.6.21. Open access.
Abstract
Data are the cornerstone of AI models, because model performance directly depends on the diversity, quantity, and quality of the training data. AI holds particular promise in data-rich medical fields such as ophthalmology, which encompasses a variety of imaging methods, medical records, and eye-tracking data. However, sharing medical data comes with challenges because of regulatory issues and privacy concerns. This review explores traditional and nontraditional data-sharing methods in medicine, focusing on previous work in ophthalmology. Traditional methods involve direct data transfer, whereas newer approaches prioritize security and privacy by sharing derived datasets, creating secure research environments, or using model-to-data strategies. We examine each method's mechanisms, variations, recent applications in ophthalmology, and respective advantages and disadvantages. By giving medical researchers insight into data-sharing methods and considerations, this review aims to support informed decision-making while upholding ethical standards and patient privacy in medical AI development.
Affiliation(s)
- Nayoon Gim: Department of Ophthalmology, University of Washington, Seattle, Washington, United States; The Roger and Angie Karalis Retina Center, Seattle, Washington, United States; Department of Bioengineering, University of Washington, Seattle, Washington, United States
- Yue Wu: Department of Ophthalmology, University of Washington, Seattle, Washington, United States; The Roger and Angie Karalis Retina Center, Seattle, Washington, United States
- Marian Blazes: Department of Ophthalmology, University of Washington, Seattle, Washington, United States; The Roger and Angie Karalis Retina Center, Seattle, Washington, United States
- Cecilia S. Lee: Department of Ophthalmology, University of Washington, Seattle, Washington, United States; The Roger and Angie Karalis Retina Center, Seattle, Washington, United States
- Ruikang K. Wang: Department of Ophthalmology, University of Washington, Seattle, Washington, United States; Department of Bioengineering, University of Washington, Seattle, Washington, United States
- Aaron Y. Lee: Department of Ophthalmology, University of Washington, Seattle, Washington, United States; The Roger and Angie Karalis Retina Center, Seattle, Washington, United States
4
Coyner AS, Murickan T, Oh MA, Young BK, Ostmo SR, Singh P, Chan RVP, Moshfeghi DM, Shah PK, Venkatapathy N, Chiang MF, Kalpathy-Cramer J, Campbell JP. Multinational External Validation of Autonomous Retinopathy of Prematurity Screening. JAMA Ophthalmol 2024;142:327-335. PMID: 38451496; PMCID: PMC10921347; DOI: 10.1001/jamaophthalmol.2024.0045.
Abstract
Importance Retinopathy of prematurity (ROP) is a leading cause of blindness in children, with significant disparities in outcomes between high-income and low-income countries, due in part to insufficient access to ROP screening. Objective To evaluate how well autonomous artificial intelligence (AI)-based ROP screening can detect more-than-mild ROP (mtmROP) and type 1 ROP. Design, Setting, and Participants This diagnostic study evaluated the performance of an AI algorithm, trained and calibrated using 2530 examinations from 843 infants in the Imaging and Informatics in Retinopathy of Prematurity (i-ROP) study, on 2 external datasets (6245 examinations from 1545 infants in the Stanford University Network for Diagnosis of ROP [SUNDROP] and 5635 examinations from 2699 infants in the Aravind Eye Care Systems [AECS] telemedicine programs). Data were taken from 11 and 48 neonatal care units in the US and India, respectively. Data were collected from January 2012 to July 2021 and analyzed from July to December 2023. Exposures An image-processing pipeline was created using deep learning to autonomously identify mtmROP and type 1 ROP in eye examinations performed via telemedicine. Main Outcomes and Measures The area under the receiver operating characteristic curve (AUROC) as well as sensitivity and specificity for detection of mtmROP and type 1 ROP at the eye-examination and patient levels. Results The prevalences of mtmROP and type 1 ROP were 5.9% (91 of 1545) and 1.2% (18 of 1545), respectively, in the SUNDROP dataset and 6.2% (168 of 2699) and 2.5% (68 of 2699) in the AECS dataset. Examination-level AUROCs for mtmROP and type 1 ROP were 0.896 and 0.985, respectively, in the SUNDROP dataset and 0.920 and 0.982 in the AECS dataset. At the cross-sectional examination level, mtmROP detection had high sensitivity (SUNDROP: mtmROP, 83.5%; 95% CI, 76.6-87.7; type 1 ROP, 82.2%; 95% CI, 81.2-83.1; AECS: mtmROP, 80.8%; 95% CI, 76.2-84.9; type 1 ROP, 87.8%; 95% CI, 86.8-88.7). At the patient level, all infants who developed type 1 ROP screened positive (SUNDROP: 100%; 95% CI, 81.4-100; AECS: 100%; 95% CI, 94.7-100) prior to diagnosis. Conclusions and Relevance Where and when ROP telemedicine programs can be implemented, autonomous ROP screening may be an effective force multiplier for secondary prevention of ROP.
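The patient-level results above imply aggregating serial examination calls into a single screening outcome. The sketch below assumes a simple "any positive examination" rule, which is an illustrative assumption rather than the study's exact protocol, and uses invented data:

```python
def patient_positive(exam_calls):
    # A patient screens positive if any serial examination was flagged.
    return any(exam_calls)

def sensitivity(patients):
    # patients: list of (exam_calls, truly_developed_disease) pairs.
    tp = sum(1 for calls, disease in patients if disease and patient_positive(calls))
    pos = sum(1 for _, disease in patients if disease)
    return tp / pos if pos else 0.0

cohort = [
    ([False, True, True], True),   # flagged on a later exam, before diagnosis
    ([False, False], False),       # never flagged, never developed disease
    ([True], True),                # flagged at the first exam
]
```

Under this rule, a single flagged exam anywhere in a patient's screening history counts as a positive screen, which is why patient-level sensitivity can exceed examination-level sensitivity.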
Affiliation(s)
- Aaron S. Coyner: Casey Eye Institute, Oregon Health & Science University, Portland
- Tom Murickan: Casey Eye Institute, Oregon Health & Science University, Portland
- Minn A. Oh: Casey Eye Institute, Oregon Health & Science University, Portland
- Susan R. Ostmo: Casey Eye Institute, Oregon Health & Science University, Portland
- Praveer Singh: Ophthalmology, University of Colorado School of Medicine, Aurora
- R. V. Paul Chan: Illinois Eye and Ear Infirmary, University of Illinois at Chicago
- Darius M. Moshfeghi: Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, California
- Parag K. Shah: Pediatric Retina and Ocular Oncology, Aravind Eye Hospital, Coimbatore, India
- Michael F. Chiang: National Eye Institute, National Institutes of Health, Bethesda, Maryland; National Library of Medicine, National Institutes of Health, Bethesda, Maryland
5
Teo ZL, Jin L, Li S, Miao D, Zhang X, Ng WY, Tan TF, Lee DM, Chua KJ, Heng J, Liu Y, Goh RSM, Ting DSW. Federated machine learning in healthcare: A systematic review on clinical applications and technical architecture. Cell Rep Med 2024;5:101419. PMID: 38340728; PMCID: PMC10897620; DOI: 10.1016/j.xcrm.2024.101419.
Abstract
Federated learning (FL) is a distributed machine learning framework that is gaining traction in view of increasing health data privacy protection needs. Through a systematic review of FL applications in healthcare, we identified relevant articles in scientific, engineering, and medical journals in English published up to August 31st, 2023. Of 22,693 articles under review, 612 were included in the final analysis. The majority are proof-of-concept studies, and only 5.2% report real-life applications of FL. Radiology and internal medicine are the specialties most commonly involved in FL. FL is robust to a variety of machine learning models and data types, with neural networks the most common model and medical imaging the most common data type. We highlight the need to address the barriers to clinical translation and to assess FL's real-world impact in this new digital, data-driven healthcare scene.
Affiliation(s)
- Zhen Ling Teo: Singapore National Eye Centre, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore
- Liyuan Jin: Singapore Eye Research Institute, Singapore, Singapore; Duke-NUS Medical School, Singapore, Singapore
- Siqi Li: Singapore Eye Research Institute, Singapore, Singapore; Duke-NUS Medical School, Singapore, Singapore
- Di Miao: Singapore Eye Research Institute, Singapore, Singapore; Duke-NUS Medical School, Singapore, Singapore
- Xiaoman Zhang: Singapore Eye Research Institute, Singapore, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Wei Yan Ng: Singapore National Eye Centre, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore
- Ting Fang Tan: Singapore National Eye Centre, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore
- Deborah Meixuan Lee: Singapore Eye Research Institute, Singapore, Singapore; Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore, Singapore
- Kai Jie Chua: Singapore National Eye Centre, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore
- John Heng: Singapore National Eye Centre, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore
- Yong Liu: Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Rick Siow Mong Goh: Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Daniel Shu Wei Ting: Singapore National Eye Centre, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore; Duke-NUS Medical School, Singapore, Singapore
6
Li D, Ran AR, Cheung CY, Prince JL. Deep learning in optical coherence tomography: Where are the gaps? Clin Exp Ophthalmol 2023;51:853-863. PMID: 37245525; PMCID: PMC10825778; DOI: 10.1111/ceo.14258.
Abstract
Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides rapid, high-resolution, cross-sectional morphology of the macular area and optic nerve head for the diagnosis and management of different eye diseases. However, interpreting OCT images requires expertise in both OCT imaging and eye diseases, since many factors, such as artefacts and concomitant diseases, can affect the accuracy of quantitative measurements made by post-processing algorithms. Currently, there is growing interest in applying deep learning (DL) methods to analyse OCT images automatically. This review summarises the trends in DL-based OCT image analysis in ophthalmology, discusses the current gaps, and provides potential research directions. DL in OCT analysis shows promising performance in several tasks: (1) layer and feature segmentation and quantification; (2) disease classification; (3) disease progression and prognosis; and (4) referral triage level prediction. Different studies and trends in the development of DL-based OCT image analysis are described, and the following challenges are identified: (1) public OCT data are scarce and scattered; (2) models show performance discrepancies in real-world settings; (3) models lack transparency; (4) societal acceptance and regulatory standards are lacking; and (5) OCT is still not widely available in underprivileged areas. More work is needed to tackle these challenges and gaps before DL is further applied to OCT image analysis for clinical use.
Affiliation(s)
- Dawei Li: College of Future Technology, Peking University, Beijing, China
- An Ran Ran: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Carol Y. Cheung: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jerry L. Prince: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, USA
7
Gholami S, Lim JI, Leng T, Ong SSY, Thompson AC, Alam MN. Federated learning for diagnosis of age-related macular degeneration. Front Med (Lausanne) 2023;10:1259017. PMID: 37901412; PMCID: PMC10613107; DOI: 10.3389/fmed.2023.1259017. Open access.
Abstract
This paper presents a federated learning (FL) approach to training deep learning models that classify age-related macular degeneration (AMD) from optical coherence tomography image data. We employ residual network and vision transformer encoders for the normal vs. AMD binary classification, integrating four unique domain adaptation techniques to address domain shift caused by heterogeneous data distributions across institutions. Experimental results indicate that FL strategies can achieve performance competitive with centralized models even though each local model has access to only a portion of the training data. Notably, the Adaptive Personalization FL strategy stood out in our evaluations, consistently delivering high performance across all tests thanks to its additional local model. Furthermore, the study provides valuable insights into the efficacy of simpler architectures in image classification tasks with both encoders, particularly in scenarios where data privacy and decentralization are critical. It suggests future exploration of deeper models and other FL strategies for a more nuanced understanding of these models' performance. Data and code are available at https://github.com/QIAIUNCC/FL_UNCC_QIAI.
Affiliation(s)
- Sina Gholami: Department of Electrical Engineering, University of North Carolina at Charlotte, Charlotte, NC, United States
- Jennifer I. Lim: Department of Ophthalmology and Visual Science, University of Illinois at Chicago, Chicago, IL, United States
- Theodore Leng: Department of Ophthalmology, School of Medicine, Stanford University, Stanford, CA, United States
- Sally Shin Yee Ong: Department of Surgical Ophthalmology, Atrium-Health Wake Forest Baptist, Winston-Salem, NC, United States
- Atalie Carina Thompson: Department of Surgical Ophthalmology, Atrium-Health Wake Forest Baptist, Winston-Salem, NC, United States
- Minhaj Nur Alam: Department of Electrical Engineering, University of North Carolina at Charlotte, Charlotte, NC, United States
8
Tan TF, Thirunavukarasu AJ, Jin L, Lim J, Poh S, Teo ZL, Ang M, Chan RVP, Ong J, Turner A, Karlström J, Wong TY, Stern J, Ting DSW. Artificial intelligence and digital health in global eye health: opportunities and challenges. Lancet Glob Health 2023;11:e1432-e1443. PMID: 37591589; DOI: 10.1016/s2214-109x(23)00323-6.
Abstract
Global eye health is defined as the degree to which vision, ocular health, and function are maximised worldwide, thereby optimising overall wellbeing and quality of life. Improving eye health is a global priority as a key to unlocking human potential by reducing the morbidity burden of disease, increasing productivity, and supporting access to education. Although extraordinary progress fuelled by global eye health initiatives has been made over the last decade, there remain substantial challenges impeding further progress. The accelerated development of digital health and artificial intelligence (AI) applications provides an opportunity to transform eye health, from facilitating and increasing access to eye care to supporting clinical decision making with an objective, data-driven approach. Here, we explore the opportunities and challenges presented by digital health and AI in global eye health and describe how these technologies could be leveraged to improve global eye health. AI, telehealth, and emerging technologies have great potential, but require specific work to overcome barriers to implementation. We suggest that a global digital eye health task force could facilitate coordination of funding, infrastructural development, and democratisation of AI and digital health to drive progress forwards in this domain.
Affiliation(s)
- Ting Fang Tan: Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Arun J Thirunavukarasu: Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Corpus Christi College, University of Cambridge, Cambridge, UK; School of Clinical Medicine, University of Cambridge, Cambridge, UK
- Liyuan Jin: Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
- Joshua Lim: Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Stanley Poh: Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Zhen Ling Teo: Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Marcus Ang: Singapore National Eye Centre, Singapore General Hospital, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
- R V Paul Chan: Illinois Eye and Ear Infirmary, University of Illinois College of Medicine, Urbana-Champaign, IL, USA
- Jasmine Ong: Pharmacy Department, Singapore General Hospital, Singapore
- Angus Turner: Lions Eye Institute, University of Western Australia, Nedlands, WA, Australia
- Jonas Karlström: Duke-NUS Medical School, National University of Singapore, Singapore
- Tien Yin Wong: Singapore National Eye Centre, Singapore General Hospital, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China
- Jude Stern: The International Agency for the Prevention of Blindness, London, UK
- Daniel Shu-Wei Ting: Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
9
Matta S, Hassine MB, Lecat C, Borderie L, Guilcher AL, Massin P, Cochener B, Lamard M, Quellec G. Federated Learning for Diabetic Retinopathy Detection in a Multi-center Fundus Screening Network. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-4. PMID: 38082571; DOI: 10.1109/embc40787.2023.10340772.
Abstract
Federated learning (FL) is a machine learning framework that allows remote clients to collaboratively learn a global model while keeping their training data local. It has emerged as an effective tool for data privacy protection and, in the medical field in particular, is gaining relevance for achieving collaborative learning while protecting sensitive data. In this work, we demonstrate the feasibility of FL in the development of a deep learning model for screening diabetic retinopathy (DR) in fundus photographs. To this end, we conduct a simulated FL framework using nearly 700,000 fundus photographs collected from OPHDIAT, a French multi-center screening network for detecting DR. We develop two FL algorithms: 1) a cross-center FL algorithm using data distributed across the OPHDIAT centers and 2) a cross-grader FL algorithm using data distributed across the OPHDIAT graders. We explore and assess different FL strategies and compare them to a conventional learning algorithm, namely centralized learning (CL), where all the data are stored in a centralized repository. For the task of referable DR detection, our simulated FL algorithms achieved performance similar to CL in terms of area under the ROC curve (AUC): AUC = 0.9482 for CL, 0.9317 for cross-center FL, and 0.9522 for cross-grader FL. Our work indicates that FL is a viable and reliable framework that can be applied in a screening network. Clinical relevance: Given that data sharing is regarded as an essential component of modern medical research, achieving collaborative learning while protecting sensitive data is key.
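The AUC values compared above can be computed without any machine learning library, since AUROC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney U interpretation). The scores below are invented for illustration:

```python
def auroc(pos_scores, neg_scores):
    # AUROC via its rank interpretation: the fraction of (positive, negative)
    # pairs the classifier orders correctly, counting ties as half a win.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

auc = auroc([0.9, 0.8, 0.6], [0.7, 0.2, 0.1])  # ≈ 0.89
```

This quadratic-time form is only for clarity; production metric libraries compute the same quantity from sorted ranks in O(n log n).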
10
Li Y, Yip MYT, Ting DSW, Ang M. Artificial intelligence and digital solutions for myopia. Taiwan J Ophthalmol 2023;13:142-150. PMID: 37484621; PMCID: PMC10361438; DOI: 10.4103/tjo.tjo-d-23-00032. Open access.
Abstract
Myopia as an uncorrected visual impairment is recognized as a global public health issue with an increasing burden on health-care systems. Moreover, high myopia increases one's risk of developing pathologic myopia, which can lead to irreversible visual impairment. Thus, increased resources are needed for the early identification of complications, timely intervention to prevent myopia progression, and treatment of complications. Emerging artificial intelligence (AI) and digital technologies may have the potential to tackle these unmet needs through automated detection for screening and risk stratification, individualized prediction, and prognostication of myopia progression. AI applications in myopia for children and adults have been developed for the detection, diagnosis, and prediction of progression. Novel AI technologies, including multimodal AI, explainable AI, federated learning, automated machine learning, and blockchain, may further improve prediction performance, safety, and accessibility, and address concerns about explainability. Digital technology advancements include digital therapeutics, self-monitoring devices, virtual reality or augmented reality technology, and wearable devices - which provide possible avenues for monitoring myopia progression and control. However, there are challenges in the implementation of these technologies, which include requirements for specific infrastructure and resources, the need to demonstrate clinically acceptable performance, and the safe management of data. Nonetheless, this remains an evolving field with the potential to address the growing global burden of myopia.
Affiliation(s)
- Yong Li
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Department of Ophthalmology and Visual Sciences, Duke-NUS Medical School, National University of Singapore, Singapore
- Michelle Y. T. Yip
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Daniel S. W. Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Department of Ophthalmology and Visual Sciences, Duke-NUS Medical School, National University of Singapore, Singapore
- Marcus Ang
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Department of Ophthalmology and Visual Sciences, Duke-NUS Medical School, National University of Singapore, Singapore
|
11
|
Gupta S, Kumar S, Chang K, Lu C, Singh P, Kalpathy-Cramer J. Collaborative Privacy-preserving Approaches for Distributed Deep Learning Using Multi-Institutional Data. Radiographics 2023; 43:e220107. [PMID: 36862082 PMCID: PMC10091220 DOI: 10.1148/rg.220107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Revised: 08/04/2022] [Accepted: 08/09/2022] [Indexed: 03/03/2023]
Abstract
Deep learning (DL) algorithms have shown remarkable potential in automating various tasks in medical imaging and radiologic reporting. However, models trained on low quantities of data or only using data from a single institution often are not generalizable to other institutions, which may have different patient demographics or data acquisition characteristics. Therefore, training DL algorithms using data from multiple institutions is crucial to improving the robustness and generalizability of clinically useful DL models. In the context of medical data, simply pooling data from each institution to a central location to train a model poses several issues, such as increased risk to patient privacy, increased costs for data storage and transfer, and regulatory challenges. These challenges of centrally hosting data have motivated the development of distributed machine learning techniques and frameworks for collaborative learning that facilitate the training of DL models without the need to explicitly share private medical data. The authors describe several popular methods for collaborative training and review the main considerations for deploying these models. They also highlight publicly available software frameworks for federated learning and showcase several real-world examples of collaborative learning. The authors conclude by discussing some key challenges and future research directions for distributed DL. They aim to introduce clinicians to the benefits, limitations, and risks of using distributed DL for the development of medical artificial intelligence algorithms. ©RSNA, 2023. Quiz questions for this article are available in the supplemental material.
Affiliation(s)
- Ken Chang, Charles Lu, Praveer Singh, Jayashree Kalpathy-Cramer
- From the Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 13th Street, Building 149, Room 2301, Charlestown, MA 02129 (S.G., S.K., K.C., C.L., P.S., J.K.C.); and Indian Institute of Technology Delhi, New Delhi, India (S.G., S.K.)
|
12
|
Nguyen TX, Ran AR, Hu X, Yang D, Jiang M, Dou Q, Cheung CY. Federated Learning in Ocular Imaging: Current Progress and Future Direction. Diagnostics (Basel) 2022; 12:2835. [PMID: 36428895 PMCID: PMC9689273 DOI: 10.3390/diagnostics12112835] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 11/11/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022] Open
Abstract
Advances in deep learning (DL), a branch of artificial intelligence, have made tremendous impacts on the field of ocular imaging over the last few years. Specifically, DL has been utilised to detect and classify various ocular diseases on retinal photographs, optical coherence tomography (OCT) images, and OCT-angiography images. In order to achieve good robustness and generalisability of model performance, DL training strategies traditionally require extensive and diverse training datasets from various sites to be transferred and pooled into a "centralised location". However, such a data transferring process could raise practical concerns related to data security and patient privacy. Federated learning (FL) is a distributed collaborative learning paradigm which enables the coordination of multiple collaborators without the need for sharing confidential data. This distributed training approach has great potential to ensure data privacy among different institutions and reduce the potential risk of data leakage from data pooling or centralisation. This review article aims to introduce the concept of FL, provide current evidence of FL in ocular imaging, and discuss potential challenges as well as future applications.
Affiliation(s)
- Truong X. Nguyen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Xiaoyan Hu
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Meirui Jiang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
|
13
|
Teo ZL, Lee AY, Campbell P, Chan RVP, Ting DSW. Developments in Artificial Intelligence for Ophthalmology: Federated Learning. Asia Pac J Ophthalmol (Phila) 2022; 11:500-502. [PMID: 36417673 DOI: 10.1097/apo.0000000000000582] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Accepted: 10/04/2022] [Indexed: 11/24/2022] Open
Affiliation(s)
- Zhen Ling Teo
- Singapore National Eye Centre, Singapore
- Singapore Eye Research Institute, Singapore
- Aaron Y Lee
- Department of Ophthalmology, Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, WA
- Peter Campbell
- Department of Ophthalmology, Oregon Health and Science University, Portland, OR
- R V Paul Chan
- Department of Ophthalmology, University of Illinois Chicago, Chicago, IL
- Daniel S W Ting
- Singapore National Eye Centre, Singapore
- Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, Singapore
|
14
|
Federated Learning in Ophthalmology: Retinopathy of Prematurity. Ophthalmol Retina 2022; 6:647-649. [PMID: 35933119 DOI: 10.1016/j.oret.2022.03.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Accepted: 03/18/2022] [Indexed: 11/21/2022]
|
15
|
Federated learning for multi-center collaboration in ophthalmology: implications for clinical diagnosis and disease epidemiology. Ophthalmol Retina 2022; 6:650-656. [PMID: 35304305 PMCID: PMC9357070 DOI: 10.1016/j.oret.2022.03.005] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 02/10/2022] [Accepted: 03/04/2022] [Indexed: 11/24/2022]
Abstract
OBJECTIVE OR PURPOSE: To utilize a deep learning (DL) model trained via federated learning (FL), a method of collaborative training without sharing patient data, to delineate institutional differences in clinician diagnostic paradigms and disease epidemiology in retinopathy of prematurity (ROP). DESIGN: Evaluation of a diagnostic test or technology. SUBJECTS, PARTICIPANTS, AND/OR CONTROLS: 5,245 patients with wide-angle retinal imaging from the neonatal intensive care units of 7 institutions as part of the Imaging and Informatics in ROP (i-ROP) study. Images were labeled with the clinical diagnosis of plus disease (plus, pre-plus, no plus) that was documented in the chart, and a reference standard diagnosis (RSD) determined by three image-based ROP graders and the clinical diagnosis. METHODS, INTERVENTION, OR TESTING: Demographics (birthweight [BW], gestational age [GA]) and clinical diagnoses for all eye exams were recorded from each institution. Using an FL approach, a DL model for plus disease classification was trained using only the clinical labels. The three class probabilities were then converted into a vascular severity score (VSS) for each eye exam, as well as an "institutional VSS," in which the average of the VSS values assigned to patients' higher-severity ("worse") eyes at each exam was calculated for each institution. MAIN OUTCOME MEASURES: We compared demographics, clinical diagnosis of plus disease, and institutional VSS between institutions using the McNemar-Bowker test, two-proportion Z test, and one-way ANOVA with post hoc analysis by the Tukey-Kramer test. Single regression analysis was performed to explore the relationship between demographics and VSS. RESULTS: We found that the proportion of patients diagnosed with pre-plus disease varied significantly between institutions (p<0.001). Using the DL-derived VSS trained on data from all institutions using FL, we observed differences in the institutional VSS, as well as in the level of vascular severity diagnosed as no plus (p<0.001), across institutions. A significant, inverse relationship between the institutional VSS and mean GA was found (p=0.049, adjusted R2=0.49). CONCLUSIONS: A DL-derived ROP VSS developed without sharing data between institutions using FL identified differences in the clinical diagnosis of plus disease and overall levels of ROP severity between institutions. FL may represent a method to standardize clinical diagnosis and provide objective measurement of disease for image-based diseases.
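The severity-score construction described in this abstract (collapsing three class probabilities into a scalar, then averaging each patient's worse eye per institution) can be sketched as follows. This is a toy illustration: the abstract does not give the actual i-ROP mapping from probabilities to VSS, so the per-class anchor values, exam data, and function names below are assumptions, with an expected-value mapping used as one plausible construction.

```python
import numpy as np

# Assumed anchor scores for (no plus, pre-plus, plus); the real i-ROP
# probability-to-VSS mapping is not specified in the abstract.
ANCHORS = np.array([1.0, 5.0, 9.0])

def vascular_severity_score(probs):
    """Collapse a 3-class probability vector into a scalar severity score."""
    probs = np.asarray(probs, dtype=float)
    return float(probs @ ANCHORS)

def institutional_vss(exams):
    """Average, over an institution's exams, the higher ('worse') eye's score."""
    worse = [max(vascular_severity_score(left), vascular_severity_score(right))
             for left, right in exams]
    return float(np.mean(worse))

# Each exam: (left-eye probs, right-eye probs) over (no plus, pre-plus, plus).
exams = [
    ([0.9, 0.08, 0.02], [0.7, 0.2, 0.1]),
    ([0.2, 0.5, 0.3], [0.6, 0.3, 0.1]),
]
score = institutional_vss(exams)
```

Comparing such institutional scores, as the paper does, surfaces systematic differences in how sites diagnose plus disease even when all sites are scored by the same federated model.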
|