1
Chen JS, Marra KV, Robles-Holmes HK, Ly KB, Miller J, Wei G, Aguilar E, Bucher F, Ideguchi Y, Coyner AS, Ferrara N, Campbell JP, Friedlander M, Nudleman E. Applications of Deep Learning: Automated Assessment of Vascular Tortuosity in Mouse Models of Oxygen-Induced Retinopathy. Ophthalmol Sci 2024; 4:100338. [PMID: 37869029; PMCID: PMC10585474; DOI: 10.1016/j.xops.2023.100338]
Abstract
Objective To develop a generative adversarial network (GAN) to segment major blood vessels from retinal flat-mount images in oxygen-induced retinopathy (OIR) and to demonstrate the utility of these GAN-generated vessel segmentations in quantifying vascular tortuosity. Design Development and validation of a GAN. Subjects Three datasets of 1084, 50, and 20 flat-mount mouse retina images, with various stains and ages at sacrifice, acquired from previously published manuscripts. Methods Four graders manually segmented major blood vessels from flat-mount images of retinas from OIR mice. Pix2Pix, a high-resolution GAN, was trained on 984 pairs of raw flat-mount images and manual vessel segmentations and then tested on 100 and 50 image pairs from a held-out and external test set, respectively. GAN-generated and manual vessel segmentations were then used as input into a previously published algorithm (iROP-Assist) to generate a vascular cumulative tortuosity index (CTI) for 20 image pairs containing mouse eyes treated with aflibercept versus control. Main Outcome Measures Mean Dice coefficients were used to compare segmentation accuracy between the GAN-generated and manually annotated segmentation maps. For the image pairs treated with aflibercept versus control, mean CTIs were also calculated for both GAN-generated and manual vessel maps. Statistical significance was evaluated using Wilcoxon signed-rank tests (P ≤ 0.05 threshold for significance). Results The Dice coefficient for the GAN-generated versus manual vessel segmentations was 0.75 ± 0.27 and 0.77 ± 0.17 for the held-out test set and external test set, respectively.
The mean CTI generated from the GAN-generated and manual vessel segmentations was 1.12 ± 0.07 versus 1.03 ± 0.02 (P = 0.003) and 1.06 ± 0.04 versus 1.01 ± 0.01 (P < 0.001), respectively, for eyes treated with aflibercept versus control, demonstrating that vascular tortuosity was rescued by aflibercept when quantified by GAN-generated and manual vessel segmentations. Conclusions GANs can be used to accurately generate vessel map segmentations from flat-mount images. These vessel maps may be used to evaluate novel metrics of vascular tortuosity in OIR, such as CTI, and have the potential to accelerate research in treatments for ischemic retinopathies. Financial Disclosures The author(s) have no proprietary or commercial interest in any materials discussed in this article.
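As a companion to the Dice coefficients reported above, here is a minimal sketch of how a Dice score between two binary vessel masks is typically computed (pure Python over flattened 0/1 pixel labels; the function name is illustrative, not taken from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice = 2*|A∩B| / (|A| + |B|) for two binary masks.

    pred and truth are equal-length flattened sequences of 0/1 pixel
    labels. Defined as 1.0 when both masks are empty.
    """
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * intersection / total

# Example: masks that agree on 1 of the pixels each labels as vessel
print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 1]))  # 2*1/(2+2) = 0.5
```

A score of 1.0 means perfect overlap with the manual annotation; the 0.75-0.77 values above indicate substantial but imperfect agreement.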
Affiliation(s)
- Jimmy S. Chen
- Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Kyle V. Marra
- Molecular Medicine, the Scripps Research Institute, San Diego, California
- School of Medicine, University of California San Diego, San Diego, California
- Hailey K. Robles-Holmes
- Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Kristine B. Ly
- College of Optometry, Pacific University, Forest Grove, Oregon
- Joseph Miller
- Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Guoqin Wei
- Molecular Medicine, the Scripps Research Institute, San Diego, California
- Edith Aguilar
- Molecular Medicine, the Scripps Research Institute, San Diego, California
- Felicitas Bucher
- Eye Center, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Yoichi Ideguchi
- Molecular Medicine, the Scripps Research Institute, San Diego, California
- Aaron S. Coyner
- Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Napoleone Ferrara
- Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- J. Peter Campbell
- Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Martin Friedlander
- Molecular Medicine, the Scripps Research Institute, San Diego, California
- Eric Nudleman
- Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
2
Hoyek S, Cruz NFSD, Patel NA, Al-Khersan H, Fan KC, Berrocal AM. Identification of novel biomarkers for retinopathy of prematurity in preterm infants by use of innovative technologies and artificial intelligence. Prog Retin Eye Res 2023; 97:101208. [PMID: 37611892; DOI: 10.1016/j.preteyeres.2023.101208]
Abstract
Retinopathy of prematurity (ROP) is a leading cause of preventable vision loss in preterm infants. While appropriate screening is crucial for early identification and treatment of ROP, current screening guidelines remain limited by inter-examiner variability in screening modalities, the absence of local ROP screening protocols in some settings, a paucity of resources, and the increased survival of younger and smaller infants. This review summarizes the advancements and challenges of current innovative technologies, artificial intelligence (AI), and predictive biomarkers for the diagnosis and management of ROP. We provide a contemporary overview of AI-based models for detection of ROP and its severity, progression, and response to treatment. To address the transition from experimental settings to real-world clinical practice, challenges to the clinical implementation of AI for ROP are reviewed and potential solutions are proposed. The use of optical coherence tomography (OCT) and OCT angiography (OCTA) technology is also explored, enabling evaluation of subclinical ROP characteristics that are often imperceptible on fundus examination. Furthermore, we explore several potential biomarkers that may reduce the need for invasive procedures while enhancing diagnostic accuracy and treatment efficacy. Finally, we emphasize the need for a symbiotic integration of biologic and imaging biomarkers and AI in ROP screening, in which the robustness of biomarkers in early disease detection is complemented by the predictive precision of AI algorithms.
Affiliation(s)
- Sandra Hoyek
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Natasha F S da Cruz
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Nimesh A Patel
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Hasenin Al-Khersan
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Kenneth C Fan
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Audina M Berrocal
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
3
Subramaniam A, Orge F, Douglass M, Can B, Monteoliva G, Fried E, Schbib V, Saidman G, Peña B, Ulacia S, Acevedo P, Rollins AM, Wilson DL. Image harmonization and deep learning automated classification of plus disease in retinopathy of prematurity. J Med Imaging (Bellingham) 2023; 10:061107. [PMID: 37794884; PMCID: PMC10546198; DOI: 10.1117/1.jmi.10.6.061107]
Abstract
Purpose Retinopathy of prematurity (ROP) is a retinal vascular disease affecting premature infants that can culminate in blindness within days if not monitored and treated. A disease stage requiring scrutiny and treatment within ROP is "plus disease," characterized by increased tortuosity and dilation of posterior retinal blood vessels. Monitoring of ROP occurs via routine imaging, typically using expensive instruments ($50,000 to $140,000) that are unavailable in low-resource settings at the point of care. Approach As part of the smartphone-ROP program to enable referrals to expert physicians, fundus images are acquired using smartphone cameras and inexpensive lenses. We developed methods for artificial intelligence determination of plus disease, consisting of a preprocessing pipeline to enhance vessels and harmonize images, followed by deep learning classification. A deep learning binary classifier (plus disease versus no plus disease) was developed using GoogLeNet. Results Vessel contrast was enhanced by 90% after preprocessing, as assessed by the contrast improvement index. In an image quality evaluation, preprocessed and original images were evaluated by pediatric ophthalmologists from the US and South America with years of experience diagnosing ROP and plus disease. All participating ophthalmologists agreed or strongly agreed that vessel visibility was improved with preprocessing. Using images from various smartphones, harmonized via preprocessing (e.g., vessel enhancement and size normalization) and augmented in physically reasonable ways (e.g., image rotation), we achieved an area under the ROC curve of 0.9754 for plus disease on a limited dataset. Conclusions These promising results indicate the potential for developing algorithms and software to facilitate the use of cell phone images for staging of plus disease.
Affiliation(s)
- Ananya Subramaniam
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States
- Faruk Orge
- Case Medical Center University Hospitals, Department of Ophthalmology, Cleveland, Ohio, United States
- Michael Douglass
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States
- Basak Can
- Case Medical Center University Hospitals, Department of Ophthalmology, Cleveland, Ohio, United States
- Evelin Fried
- Hospital Italiano de San Justo Agustin Rocca, Buenos Aires, Argentina
- Vanina Schbib
- Hospital de Niños Sor Maria Ludovica, Buenos Aires, Argentina
- Brenda Peña
- Centro Integral de Salud Visual Daponte, Buenos Aires, Argentina
- Soledad Ulacia
- Ministerio de Salud Argentina, Ministry of Public Works Building, Buenos Aires, Argentina
- Andrew M. Rollins
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States
- David L. Wilson
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States
- Case Western Reserve University, Department of Radiology, Cleveland, Ohio, United States
4
Rao DP, Savoy FM, Tan JZE, Fung BPE, Bopitiya CM, Sivaraman A, Vinekar A. Development and validation of an artificial intelligence based screening tool for detection of retinopathy of prematurity in a South Indian population. Front Pediatr 2023; 11:1197237. [PMID: 37794964; PMCID: PMC10545957; DOI: 10.3389/fped.2023.1197237]
Abstract
Purpose The primary objective of this study was to develop and validate an AI algorithm as a screening tool for the detection of retinopathy of prematurity (ROP). Participants Images were collected from infants enrolled in the KIDROP tele-ROP screening program. Methods We developed a deep learning (DL) algorithm with 227,326 wide-field images from multiple camera systems obtained from the KIDROP tele-ROP screening program in India over an 11-year period. 37,477 temporal retina images were utilized with the dataset split into train (n = 25,982, 69.33%), validation (n = 4,006, 10.69%), and an independent test set (n = 7,489, 19.98%). The algorithm consists of a binary classifier that distinguishes between the presence of ROP (Stages 1-3) and the absence of ROP. The image labels were retrieved from the daily registers of the tele-ROP program. They consist of per-eye diagnoses provided by trained ROP graders based on all images captured during the screening session. Infants requiring treatment and a proportion of those not requiring urgent referral had an additional confirmatory diagnosis from an ROP specialist. Results Of the 7,489 temporal images analyzed in the test set, 2,249 (30.0%) images showed the presence of ROP. The sensitivity and specificity to detect ROP was 91.46% (95% CI: 90.23%-92.59%) and 91.22% (95% CI: 90.42%-91.97%), respectively, while the positive predictive value (PPV) was 81.72% (95% CI: 80.37%-83.00%), negative predictive value (NPV) was 96.14% (95% CI: 95.60%-96.61%) and the AUROC was 0.970. Conclusion The novel ROP screening algorithm demonstrated high sensitivity and specificity in detecting the presence of ROP. A prospective clinical validation in a real-world tele-ROP platform is under consideration. It has the potential to lower the number of screening sessions required to be conducted by a specialist for a high-risk preterm infant thus significantly improving workflow efficiency.
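The sensitivity, specificity, PPV, and NPV reported above all derive from the same 2x2 confusion matrix. A small sketch of those standard definitions (illustrative only, not the authors' code; the counts in the example are round numbers, not the study's):

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening metrics from confusion-matrix counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of diseased eyes flagged
        "specificity": tn / (tn + fp),  # fraction of healthy eyes cleared
        "ppv": tp / (tp + fp),          # how often a positive call is right
        "npv": tn / (tn + fn),          # how often a negative call is right
    }

# Illustrative counts only
m = screening_metrics(tp=90, fp=20, fn=10, tn=180)
print(m["sensitivity"], m["specificity"])  # 0.9 0.9
```

Note how PPV depends on disease prevalence: with 30% of test images showing ROP, the study's ~91% sensitivity and specificity yield a PPV near 82% but an NPV above 96%.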
Affiliation(s)
- Divya Parthasarathy Rao
- Artificial Intelligence Research and Development, Remidio Innovative Solutions Inc., Glen Allen, VA, United States
- Florian M. Savoy
- Artificial Intelligence Research and Development, Medios Technologies Pvt. Ltd., Singapore, Singapore
- Joshua Zhi En Tan
- Artificial Intelligence Research and Development, Medios Technologies Pvt. Ltd., Singapore, Singapore
- Brian Pei-En Fung
- Artificial Intelligence Research and Development, Medios Technologies Pvt. Ltd., Singapore, Singapore
- Chiran Mandula Bopitiya
- Artificial Intelligence Research and Development, Medios Technologies Pvt. Ltd., Singapore, Singapore
- Anand Sivaraman
- Artificial Intelligence Research and Development, Remidio Innovative Solutions Pvt. Ltd., Bangalore, India
- Anand Vinekar
- Department of Pediatric Retina, Narayana Nethralaya Eye Institute, Bangalore, India
5
Xu CL, Adu-Brimpong J, Moshfeghi HP, Rosenblatt TR, Yu MD, Ji MH, Wang SK, Zaidi M, Ghoraba H, Michalak S, Callaway NF, Kumm J, Nudleman E, Wood EH, Patel NA, Stahl A, Lepore D, Moshfeghi DM. Telemedicine retinopathy of prematurity severity score (TeleROP-SS) versus modified activity score (mROP-ActS) retrospective comparison in SUNDROP cohort. Sci Rep 2023; 13:15219. [PMID: 37709791; PMCID: PMC10502047; DOI: 10.1038/s41598-023-42150-w]
Abstract
Identifying and planning treatment for retinopathy of prematurity (ROP) using telemedicine is becoming increasingly ubiquitous, necessitating a grading system to help caretakers of at-risk infants gauge disease severity. The modified ROP Activity Scale (mROP-ActS) factors zone, stage, and plus disease into its scoring system, addressing the need for assessing ROP's totality of binocular burden via indirect ophthalmoscopy. However, there is an unmet need for an alternative score that could facilitate ROP identification and gauge disease improvement or deterioration specifically on photographic telemedicine exams. Here, we propose such a system, the Telemedicine ROP Severity Score (TeleROP-SS), which we compared against the mROP-ActS. In our statistical analysis of 1568 exams, TeleROP-SS returned a score in all instances based on the gradings available from the retrospective SUNDROP cohort, whereas mROP-ActS returned a score in 80.8% of right eyes and 81.1% of left eyes. For treatment-warranted ROP (TW-ROP), TeleROP-SS returned a score in 100% and 95% of right and left eyes, respectively, while mROP-ActS did so in 70% and 63%. The TeleROP-SS score can identify disease improvement or deterioration on telemedicine exams, can distinguish timepoints at which treatments can be given, and has the adaptability to be modified as needed.
Affiliation(s)
- Christine L Xu
- Department of Ophthalmology, Horngren Family Vitreoretinal Center, Byers Eye Institute, Stanford University School of Medicine, 2452 Watson Ct., Rm 2277, Palo Alto, CA, 94303, USA
- Joel Adu-Brimpong
- Department of Ophthalmology, Horngren Family Vitreoretinal Center, Byers Eye Institute, Stanford University School of Medicine, 2452 Watson Ct., Rm 2277, Palo Alto, CA, 94303, USA
- Tatiana R Rosenblatt
- Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Michael D Yu
- Department of Ophthalmology, Horngren Family Vitreoretinal Center, Byers Eye Institute, Stanford University School of Medicine, 2452 Watson Ct., Rm 2277, Palo Alto, CA, 94303, USA
- Marco H Ji
- Department of Ophthalmology, Jones Eye Institute, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Sean K Wang
- Department of Ophthalmology, Horngren Family Vitreoretinal Center, Byers Eye Institute, Stanford University School of Medicine, 2452 Watson Ct., Rm 2277, Palo Alto, CA, 94303, USA
- Moosa Zaidi
- Department of Ophthalmology, Horngren Family Vitreoretinal Center, Byers Eye Institute, Stanford University School of Medicine, 2452 Watson Ct., Rm 2277, Palo Alto, CA, 94303, USA
- Hashem Ghoraba
- Department of Ophthalmology, Horngren Family Vitreoretinal Center, Byers Eye Institute, Stanford University School of Medicine, 2452 Watson Ct., Rm 2277, Palo Alto, CA, 94303, USA
- Suzanne Michalak
- Department of Ophthalmology, Horngren Family Vitreoretinal Center, Byers Eye Institute, Stanford University School of Medicine, 2452 Watson Ct., Rm 2277, Palo Alto, CA, 94303, USA
- Natalia F Callaway
- Department of Ophthalmology, Horngren Family Vitreoretinal Center, Byers Eye Institute, Stanford University School of Medicine, 2452 Watson Ct., Rm 2277, Palo Alto, CA, 94303, USA
- Jochen Kumm
- Department of Ophthalmology, Horngren Family Vitreoretinal Center, Byers Eye Institute, Stanford University School of Medicine, 2452 Watson Ct., Rm 2277, Palo Alto, CA, 94303, USA
- Eric Nudleman
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Nimesh A Patel
- Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Andreas Stahl
- Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
- Domenico Lepore
- Department of Geriatrics and Neuroscience, Catholic University of the Sacred Heart, A. Gemelli Foundation IRCSS, Rome, Italy
- Darius M Moshfeghi
- Department of Ophthalmology, Horngren Family Vitreoretinal Center, Byers Eye Institute, Stanford University School of Medicine, 2452 Watson Ct., Rm 2277, Palo Alto, CA, 94303, USA
6
Ramanathan A, Athikarisamy SE, Lam GC. Artificial intelligence for the diagnosis of retinopathy of prematurity: A systematic review of current algorithms. Eye (Lond) 2023; 37:2518-2526. [PMID: 36577806; PMCID: PMC10397194; DOI: 10.1038/s41433-022-02366-y]
Abstract
BACKGROUND/OBJECTIVES With the increasing survival of premature infants, there is an increased demand to provide adequate retinopathy of prematurity (ROP) services. Wide-field digital retinal imaging (WFDRI) and artificial intelligence (AI) have shown promise in the field of ROP and have the potential to improve diagnostic performance and reduce the workload for screening ophthalmologists. The aim of this review is to systematically review and summarize the diagnostic characteristics of existing deep learning algorithms. SUBJECT/METHODS Two authors independently searched the literature, and studies using a deep learning system from retinal imaging were included. Data were extracted, assessed, and reported using PRISMA guidelines. RESULTS Twenty-seven studies were included in this review. Nineteen studies used AI systems to diagnose ROP, classify the staging of ROP, diagnose the presence of pre-plus or plus disease, or assess the quality of retinal images. The included studies reported a sensitivity of 71%-100%, a specificity of 74%-99%, and an area under the curve of 91%-99% for the primary outcome of the study. AI techniques were comparable to the assessment of ophthalmologists in terms of overall accuracy and sensitivity. Eight studies evaluated vascular severity scores and were able to accurately differentiate severity using an automated classification score. CONCLUSION Artificial intelligence for ROP diagnosis is a growing field, and many potential utilities have already been identified, including detection of plus disease, staging of disease, and a new automated severity score. AI has a role as an adjunct to clinical assessment; however, there is currently insufficient evidence to support its use as a sole diagnostic tool.
Affiliation(s)
- Ashwin Ramanathan
- Department of Paediatrics, Perth Children's Hospital, Perth, Australia
- Sam Ebenezer Athikarisamy
- Department of Neonatology, Perth Children's Hospital, Perth, Australia
- School of Medicine, University of Western Australia, Crawley, Australia
- Geoffrey C Lam
- Department of Ophthalmology, Perth Children's Hospital, Perth, Australia
- Centre for Ophthalmology and Visual Science, University of Western Australia, Crawley, Australia
7
Cole E, Valikodath NG, Al-Khaled T, Bajimaya S, KC S, Chuluunbat T, Munkhuu B, Jonas KE, Chuluunkhuu C, MacKeen LD, Yap V, Hallak J, Ostmo S, Wu WC, Coyner AS, Singh P, Kalpathy-Cramer J, Chiang MF, Campbell JP, Chan RVP. Evaluation of an Artificial Intelligence System for Retinopathy of Prematurity Screening in Nepal and Mongolia. Ophthalmol Sci 2022; 2:100165. [PMID: 36531583; PMCID: PMC9754980; DOI: 10.1016/j.xops.2022.100165]
Abstract
PURPOSE To evaluate the performance of a deep learning (DL) algorithm for retinopathy of prematurity (ROP) screening in Nepal and Mongolia. DESIGN Retrospective analysis of prospectively collected clinical data. PARTICIPANTS Clinical information and fundus images were obtained from infants in 2 ROP screening programs in Nepal and Mongolia. METHODS Fundus images were obtained using the Forus 3nethra neo (Forus Health) in Nepal and the RetCam Portable (Natus Medical, Inc.) in Mongolia. The overall severity of ROP was determined from the medical record using the International Classification of ROP (ICROP). The presence of plus disease was determined independently in each image using a reference standard diagnosis. The Imaging and Informatics for ROP (i-ROP) DL algorithm was trained on images from the RetCam to classify plus disease and to assign a vascular severity score (VSS) from 1 through 9. MAIN OUTCOME MEASURES Area under the receiver operating characteristic curve and area under the precision-recall curve for the presence of plus disease or type 1 ROP and association between VSS and ICROP disease category. RESULTS The prevalence of type 1 ROP was found to be higher in Mongolia (14.0%) than in Nepal (2.2%; P < 0.001) in these data sets. In Mongolia (RetCam images), the area under the receiver operating characteristic curve for examination-level plus disease detection was 0.968, and the area under the precision-recall curve was 0.823. In Nepal (Forus images), these values were 0.999 and 0.993, respectively. The ROP VSS was associated with ICROP classification in both datasets (P < 0.001). At the population level, the median VSS was found to be higher in Mongolia (2.7; interquartile range [IQR], 1.3-5.4) as compared with Nepal (1.9; IQR, 1.2-3.4; P < 0.001).
CONCLUSIONS These data provide preliminary evidence of the effectiveness of the i-ROP DL algorithm for ROP screening in neonatal populations in Nepal and Mongolia using multiple camera systems and are useful for consideration in future clinical implementation of artificial intelligence-based ROP screening in low- and middle-income countries.
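The AUROC values quoted above have a simple probabilistic reading: the chance that a randomly chosen diseased eye receives a higher model score than a randomly chosen healthy one. A brief sketch of that rank-based (Mann-Whitney) computation, with illustrative names and toy scores:

```python
def auroc(pos_scores, neg_scores):
    """AUROC as the Mann-Whitney statistic: P(score_pos > score_neg),
    counting ties as half a win. O(n*m) pairwise loop, fine for small
    examples; real libraries sort once instead."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Perfectly separated toy scores give AUROC = 1.0
print(auroc([0.9, 0.8], [0.2, 0.1]))  # 1.0
```

Unlike AUROC, the area under the precision-recall curve is sensitive to class prevalence, which is why the paper reports both for the 14.0% versus 2.2% type 1 ROP populations.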
Key Words
- Artificial intelligence
- BW, birth weight
- DL, deep learning
- Deep learning
- GA, gestational age
- ICROP, International Classification of Retinopathy of Prematurity
- IQR, interquartile range
- LMIC, low- and middle-income country
- Mongolia
- Nepal
- ROP, retinopathy of prematurity
- RSD, reference standard diagnosis
- Retinopathy of prematurity
- TR, treatment-requiring
- VSS, vascular severity score
- i-ROP, Imaging and Informatics for Retinopathy of Prematurity
Affiliation(s)
- Emily Cole
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Nita G. Valikodath
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Tala Al-Khaled
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Sagun KC
- Helen Keller International, Kathmandu, Nepal
- Bayalag Munkhuu
- National Center for Maternal and Child Health, Ulaanbaatar, Mongolia
- Karyn E. Jonas
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Leslie D. MacKeen
- The Hospital for Sick Children, Toronto, Canada
- Phoenix Technology Group, Pleasanton, California
- Vivien Yap
- Department of Pediatrics, Weill Cornell Medical College, New York, New York
- Joelle Hallak
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Wei-Chi Wu
- Chang Gung Memorial Hospital, Taoyuan, Taiwan, and Chang Gung University, College of Medicine, Taoyuan, Taiwan
- Aaron S. Coyner
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- R. V. Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Correspondence: R. V. Paul Chan, MD, MSc, MBA, Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, 1905 West Taylor Street, Chicago, IL 60612.
8
Campbell JP, Chiang MF, Chen JS, Moshfeghi DM, Nudleman E, Ruambivoonsuk P, Cherwek H, Cheung CY, Singh P, Kalpathy-Cramer J, Ostmo S, Eydelman M, Chan RP, Capone A. Artificial Intelligence for Retinopathy of Prematurity: Validation of a Vascular Severity Scale against International Expert Diagnosis. Ophthalmology 2022; 129:e69-e76. [PMID: 35157950; PMCID: PMC9232863; DOI: 10.1016/j.ophtha.2022.02.008]
Abstract
PURPOSE To validate a vascular severity score as an appropriate output for artificial intelligence (AI) Software as a Medical Device (SaMD) for retinopathy of prematurity (ROP) through comparison with ordinal disease severity labels for stage and plus disease assigned by the International Classification of Retinopathy of Prematurity, Third Edition (ICROP3), committee. DESIGN Validation study of an AI-based ROP vascular severity score. PARTICIPANTS A total of 34 ROP experts from the ICROP3 committee. METHODS Two separate datasets of 30 fundus photographs each for stage (0-5) and plus disease (plus, preplus, neither) were labeled by members of the ICROP3 committee using an open-source platform. Averaging these results produced a continuous label for plus (1-9) and stage (1-3) for each image. Experts were also asked to compare each image to each other in terms of relative severity for plus disease. Each image was also labeled with a vascular severity score from the Imaging and Informatics in ROP deep learning system, which was compared with each grader's diagnostic labels for correlation, as well as the ophthalmoscopic diagnosis of stage. MAIN OUTCOME MEASURES Weighted kappa and Pearson correlation coefficients (CCs) were calculated between each pair of grader classification labels for stage and plus disease. The Elo algorithm was also used to convert pairwise comparisons for each expert into an ordered set of images from least to most severe. RESULTS The mean weighted kappa and CC for all interobserver pairs for plus disease image comparison were 0.67 and 0.88, respectively. The vascular severity score was found to be highly correlated with both the average plus disease classification (CC = 0.90, P < 0.001) and the ophthalmoscopic diagnosis of stage (P < 0.001 by analysis of variance) among all experts. 
CONCLUSIONS The ROP vascular severity score correlates well with the International Classification of Retinopathy of Prematurity committee members' labels for plus disease and stage, which had significant intergrader variability. Generation of a consensus for a validated scoring system for ROP SaMD can facilitate global innovation and regulatory authorization of these technologies.
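The Elo step described in the methods converts many pairwise "which image is more severe" judgments into a single ordering. A minimal sketch of that idea using standard Elo updates (the initial rating, K-factor, and function names are illustrative, not the committee's implementation):

```python
def elo_update(winner, loser, k=32.0):
    """One standard Elo update: the rating of the image judged more
    severe rises, and the other falls, scaled by how surprising the
    outcome was given the current ratings."""
    expected = 1.0 / (1.0 + 10.0 ** ((loser - winner) / 400.0))
    delta = k * (1.0 - expected)
    return winner + delta, loser - delta

def rank_by_severity(n_images, comparisons):
    """comparisons: iterable of (more_severe_idx, less_severe_idx) pairs.
    Returns image indices ordered from least to most severe."""
    ratings = [1000.0] * n_images  # arbitrary common starting rating
    for w, l in comparisons:
        ratings[w], ratings[l] = elo_update(ratings[w], ratings[l])
    return sorted(range(n_images), key=lambda i: ratings[i])

# Image 2 judged more severe than 1 and 0, and 1 more severe than 0:
print(rank_by_severity(3, [(2, 1), (2, 0), (1, 0)]))  # [0, 1, 2]
```

Pairwise comparison sidesteps the intergrader variability of absolute labels: experts often disagree on "preplus versus plus" but agree far more often on which of two images looks worse.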
Collapse
Affiliation(s)
- J. Peter Campbell
- Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, OR
- Jimmy S. Chen
- Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, OR
- Darius M. Moshfeghi
- Byers Eye Institute, Horngren Family Vitreoretinal Center, Department of Ophthalmology, Stanford University, Palo Alto, CA
- Eric Nudleman
- Department of Ophthalmology, University of California, San Diego
- Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, Faculty of Medicine, The Chinese University of Hong Kong
- Praveer Singh
- Department of Radiology, MGH/Harvard Medical School, Charlestown, MA; Massachusetts General Hospital & Brigham and Women's Hospital Center for Clinical Data Science, Boston, MA
- Jayashree Kalpathy-Cramer
- Department of Radiology, MGH/Harvard Medical School, Charlestown, MA; Massachusetts General Hospital & Brigham and Women's Hospital Center for Clinical Data Science, Boston, MA
- Susan Ostmo
- Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, OR
- Malvina Eydelman
- Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD
- R.V. Paul Chan
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL
- Antonio Capone
- Associated Retinal Consultants, Oakland University William Beaumont School of Medicine, Royal Oak, MI, USA
|
9
|
Bai A, Carty C, Dai S. Performance of deep-learning artificial intelligence algorithms in detecting retinopathy of prematurity: A systematic review. SAUDI JOURNAL OF OPHTHALMOLOGY : OFFICIAL JOURNAL OF THE SAUDI OPHTHALMOLOGICAL SOCIETY 2022; 36:296-307. [PMID: 36276252 DOI: 10.4103/sjopt.sjopt_219_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 11/09/2021] [Accepted: 11/12/2021] [Indexed: 11/04/2022]
Abstract
PURPOSE Artificial intelligence (AI) offers considerable promise for retinopathy of prematurity (ROP) screening and diagnosis. The development of deep-learning algorithms to detect the presence of disease may contribute to sufficient screening, early detection, and timely treatment for this preventable blinding disease. This review aimed to systematically examine the literature on AI algorithms for detecting ROP. Specifically, we focused on the performance of deep-learning algorithms, through sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC), for both the detection and grading of ROP. METHODS We searched Medline OVID, PubMed, Web of Science, and Embase for studies published from January 1, 2012, to September 20, 2021. Studies evaluating the diagnostic performance of deep-learning models based on retinal fundus images, with expert ophthalmologists' judgment as the reference standard, were included. Studies that did not investigate the presence or absence of disease were excluded. Risk of bias was assessed using the QUADAS-2 tool. RESULTS Twelve of the 175 studies identified were included. Five studies measured performance in detecting the presence of ROP, and seven studies in detecting plus disease. The average AUROC across 11 studies was 0.98. The average sensitivity and specificity were 95.72% and 98.15%, respectively, for detecting ROP, and 91.13% and 95.92%, respectively, for detecting plus disease. CONCLUSION The diagnostic performance of deep-learning algorithms in published studies was high. Few studies presented externally validated results or compared performance with expert human graders. Large-scale prospective validation alongside robust study design could improve future studies.
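The three metrics pooled by this review are standard and can be computed directly from per-image labels and model scores. The sketch below shows the textbook definitions (it is not code from any reviewed study); the AUROC uses its rank-based Mann-Whitney interpretation:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity for binary labels (1 = disease present)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def auroc(y_true, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    diseased image receives a higher score than a randomly chosen healthy one,
    with ties counted as half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Sensitivity and specificity depend on the operating threshold each study chose, which is one reason AUROC is the more comparable summary across the 11 studies averaged here.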
Affiliation(s)
- Amelia Bai
- Department of Ophthalmology, Queensland Children's Hospital, Brisbane, Australia; Centre for Children's Health Research, Brisbane, Australia; School of Medical Science, Griffith University, Gold Coast, Australia
- Christopher Carty
- Griffith Centre of Biomedical and Rehabilitation Engineering (GCORE), Menzies Health Institute Queensland, Griffith University, Gold Coast, Australia; Department of Orthopaedics, Children's Health Queensland Hospital and Health Service, Queensland Children's Hospital, Brisbane, Australia
- Shuan Dai
- Department of Ophthalmology, Queensland Children's Hospital, Brisbane, Australia; School of Medical Science, Griffith University, Gold Coast, Australia; University of Queensland, Australia
|
10
|
Federated learning for multi-center collaboration in ophthalmology: implications for clinical diagnosis and disease epidemiology. Ophthalmol Retina 2022; 6:650-656. [PMID: 35304305 PMCID: PMC9357070 DOI: 10.1016/j.oret.2022.03.005] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 02/10/2022] [Accepted: 03/04/2022] [Indexed: 11/24/2022]
Abstract
OBJECTIVE OR PURPOSE To utilize a deep learning (DL) model trained via federated learning (FL), a method of collaborative training without sharing patient data, to delineate institutional differences in clinician diagnostic paradigms and disease epidemiology in retinopathy of prematurity (ROP). DESIGN Evaluation of a diagnostic test or technology. SUBJECTS, PARTICIPANTS, AND/OR CONTROLS 5,245 patients with wide-angle retinal imaging from the neonatal intensive care units of 7 institutions as part of the Imaging and Informatics in ROP (i-ROP) study. Images were labeled with the clinical diagnosis of plus disease (plus, pre-plus, no plus) that was documented in the chart, and with a reference standard diagnosis (RSD) determined by three image-based ROP graders and the clinical diagnosis. METHODS, INTERVENTION, OR TESTING Demographics (birthweight [BW], gestational age [GA]) and clinical diagnoses for all eye exams were recorded from each institution. Using an FL approach, a DL model for plus disease classification was trained using only the clinical labels. The three class probabilities were then converted into a vascular severity score (VSS) for each eye exam, as well as an "institutional VSS," calculated for each institution as the average of the VSS values assigned to patients' higher-severity ("worse") eyes at each exam. MAIN OUTCOME MEASURES We compared demographics, clinical diagnosis of plus disease, and institutional VSS between institutions using the McNemar-Bowker test, two-proportion Z test, and one-way ANOVA with post hoc analysis by the Tukey-Kramer test. Single regression analysis was performed to explore the relationship between demographics and VSS. RESULTS We found that the proportion of patients diagnosed with pre-plus disease varied significantly between institutions (p<0.001).
Using the DL-derived VSS trained on data from all institutions via FL, we observed differences in the institutional VSS, as well as in the level of vascular severity diagnosed as no plus (p<0.001), across institutions. A significant inverse relationship between the institutional VSS and mean GA was found (p=0.049, adjusted R2=0.49). CONCLUSIONS A DL-derived ROP VSS developed without sharing data between institutions using FL identified differences in the clinical diagnosis of plus disease and in overall levels of ROP severity between institutions. FL may represent a method to standardize clinical diagnosis and provide objective measurement of disease for image-based diseases.
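The probability-to-score conversion described above can be sketched as a probability-weighted average of per-class anchor values. The anchors used below (1 for no plus, 5 for pre-plus, 9 for plus) are an illustrative assumption, since this abstract does not give the exact mapping:

```python
def vascular_severity_score(p_no, p_pre, p_plus, anchors=(1.0, 5.0, 9.0)):
    """Collapse softmax probabilities for (no plus, pre-plus, plus) into a
    continuous 1-9 score. The anchor values are illustrative assumptions."""
    assert abs(p_no + p_pre + p_plus - 1.0) < 1e-6, "probabilities must sum to 1"
    return anchors[0] * p_no + anchors[1] * p_pre + anchors[2] * p_plus

def institutional_vss(worse_eye_scores):
    """'Institutional VSS': the mean of the worse-eye VSS across an
    institution's exams, as described in the abstract."""
    return sum(worse_eye_scores) / len(worse_eye_scores)
```

A confident "no plus" prediction maps near 1 and a confident "plus" prediction near 9, so the continuous score preserves severity information that the three-way clinical label discards.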
|
11
|
Fadakar K, Mehrabi Bahar M, Riazi-Esfahani H, Azarkish A, Farahani AD, Heidari M, Bazvand F. Intravitreal bevacizumab to treat retinopathy of prematurity in 865 eyes: a study to determine predictors of primary treatment failure and recurrence. Int Ophthalmol 2022; 42:2017-2028. [DOI: 10.1007/s10792-021-02198-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Accepted: 12/18/2021] [Indexed: 10/19/2022]
|
12
|
Repka MX. A Revision of the International Classification of Retinopathy of Prematurity. Ophthalmology 2021; 128:1381-1383. [PMID: 34332760 DOI: 10.1016/j.ophtha.2021.07.014] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 07/09/2021] [Accepted: 07/09/2021] [Indexed: 11/20/2022] Open
|
13
|
Campbell JP, Singh P, Redd TK, Brown JM, Shah PK, Subramanian P, Rajan R, Valikodath N, Cole E, Ostmo S, Chan RVP, Venkatapathy N, Chiang MF, Kalpathy-Cramer J. Applications of Artificial Intelligence for Retinopathy of Prematurity Screening. Pediatrics 2021; 147:e2020016618. [PMID: 33637645 PMCID: PMC7924138 DOI: 10.1542/peds.2020-016618] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 10/29/2020] [Indexed: 11/24/2022] Open
Abstract
OBJECTIVES Childhood blindness from retinopathy of prematurity (ROP) is increasing as a result of improvements in neonatal care worldwide. We evaluate the effectiveness of artificial intelligence (AI)-based screening in an Indian ROP telemedicine program and whether differences in ROP severity between neonatal care units (NCUs) identified by using AI are related to differences in oxygen-titrating capability. METHODS External validation study of an existing AI-based quantitative severity scale for ROP on a data set of images from the Retinopathy of Prematurity Eradication Save Our Sight ROP telemedicine program in India. All images were assigned an ROP severity score (1-9) by using the Imaging and Informatics in Retinopathy of Prematurity Deep Learning system. We calculated the area under the receiver operating characteristic curve and sensitivity and specificity for treatment-requiring retinopathy of prematurity. Using multivariable linear regression, we evaluated the mean and median ROP severity in each NCU as a function of mean birth weight, gestational age, and the presence of oxygen blenders and pulse oxygenation monitors. RESULTS The area under the receiver operating characteristic curve for detection of treatment-requiring retinopathy of prematurity was 0.98, with 100% sensitivity and 78% specificity. We found higher median (interquartile range) ROP severity in NCUs without oxygen blenders and pulse oxygenation monitors, most apparent in bigger infants (>1500 g and 31 weeks' gestation: 2.7 [2.5-3.0] vs 3.1 [2.4-3.8]; P = .007, with adjustment for birth weight and gestational age). CONCLUSIONS Integration of AI into ROP screening programs may lead to improved access to care for secondary prevention of ROP and may facilitate assessment of disease epidemiology and NCU resources.
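The multivariable regression step, modeling mean NCU severity as a function of birth weight, gestational age, and equipment indicators, can be sketched with ordinary least squares. This is a generic illustration under assumed variable names, not the authors' analysis code:

```python
import numpy as np

def fit_severity_model(X, y):
    """Ordinary least squares fit of per-NCU mean severity (y) against
    covariates X (e.g., mean birth weight, mean gestational age, and 0/1
    indicators for oxygen blenders / pulse oxygenation monitors).
    Returns the intercept followed by one coefficient per covariate."""
    X = np.asarray(X, dtype=float)
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return coef
```

The sign and magnitude of each fitted coefficient then indicate how a given NCU characteristic is associated with severity after adjusting for the others.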
Affiliation(s)
- J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute and
- Contributed equally as co-first authors
- Praveer Singh
- Athinoula A. Martinos Center for Biomedical Imaging and Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts
- Contributed equally as co-first authors
- Travis K Redd
- Department of Ophthalmology, Casey Eye Institute and
- James M Brown
- Department of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Parag K Shah
- Pediatric Retina and Ocular Oncology Division, Aravind Eye Hospital, Coimbatore, India
- Prema Subramanian
- Pediatric Retina and Ocular Oncology Division, Aravind Eye Hospital, Coimbatore, India
- Renu Rajan
- Department of Retina and Vitreous, Aravind Eye Hospital, Madurai, India
- Nita Valikodath
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary and University of Illinois at Chicago, Chicago, Illinois
- Emily Cole
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute and
- R V Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary and University of Illinois at Chicago, Chicago, Illinois
- Michael F Chiang
- Department of Ophthalmology, Casey Eye Institute and
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging and Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts
|
14
|
Blair MP, Rodriguez SH, Shapiro MJ. A Potential Solution to Plus Disease Variability. Ophthalmol Retina 2020; 4:1022-1023. [PMID: 33019986 DOI: 10.1016/j.oret.2020.06.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2020] [Accepted: 06/11/2020] [Indexed: 06/11/2023]
Affiliation(s)
- Michael P Blair
- Retina Consultants, Ltd, Des Plaines, Illinois; Department of Ophthalmology, University of Chicago, Chicago, Illinois.
|