1
Kıran Yenice E, Kara C, Erdaş ÇB. Automated detection of type 1 ROP, type 2 ROP and A-ROP based on deep learning. Eye (Lond) 2024; 38:2644-2648. [PMID: 38918566 PMCID: PMC11385231 DOI: 10.1038/s41433-024-03184-0]
Abstract
PURPOSE To provide automatic detection of Type 1 retinopathy of prematurity (ROP), Type 2 ROP, and A-ROP through deep learning (DL)-based analysis of fundus images obtained during clinical examination, using convolutional neural networks. MATERIAL AND METHODS A total of 634 fundus images of 317 premature infants born at 23-34 weeks of gestation were evaluated. After image pre-processing, we extracted a rectangular region of interest (ROI). RegNetY002 was used for algorithm training, and stratified 10-fold cross-validation was applied during training to evaluate and standardize the model. Performance was reported as accuracy and specificity and described by the receiver operating characteristic (ROC) curve and area under the curve (AUC). RESULTS The model achieved 0.98 accuracy and 0.98 specificity in detecting Type 2 ROP versus Type 1 ROP and A-ROP. In the analysis of ROI regions, it achieved 0.90 accuracy and 0.95 specificity in detecting Stage 2 ROP versus Stage 3 ROP, and 0.91 accuracy and 0.92 specificity in detecting A-ROP versus Type 1 ROP. The AUC scores were 0.98 for Type 2 ROP versus Type 1 ROP and A-ROP, 0.85 for Stage 2 ROP versus Stage 3 ROP, and 0.91 for A-ROP versus Type 1 ROP. CONCLUSION Our study demonstrated that ROP types can be distinguished with high accuracy and specificity by DL-based analysis of fundus images. Integrating DL-based artificial intelligence algorithms into clinical practice may reduce the workload of ophthalmologists and support decision-making in the management of ROP.
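The stratified 10-fold cross-validation protocol the authors describe can be sketched in a few lines. This is a generic illustration, not the paper's code; the label names are hypothetical:

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=10, seed=0):
    """Split sample indices into k folds that preserve class proportions."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        rng.shuffle(indices)
        # Deal each class's samples round-robin across the folds
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)
    return folds

# Each fold serves once as the validation set; the rest form the training set.
labels = ["type1"] * 40 + ["type2"] * 40 + ["arop"] * 20
folds = stratified_kfold(labels, k=10)
for val_fold in folds:
    held_out = set(val_fold)
    train = [i for i in range(len(labels)) if i not in held_out]
    # train and evaluate the model on this split here
```

Because every fold mirrors the overall class balance, rare classes such as A-ROP are represented in each validation split.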
Affiliation(s)
- Eşay Kıran Yenice
  - Department of Ophthalmology, University of Health Sciences, Etlik Zübeyde Hanım Maternity and Women's Health Teaching and Research Hospital, Ankara, Turkey
- Caner Kara
  - Department of Ophthalmology, Etlik City Hospital, Ankara, Turkey
2
Wu MN, He K, Yu YB, Zheng B, Zhu SJ, Hong XQ, Xi WQ, Zhang Z. Intelligent diagnostic model for pterygium by combining attention mechanism and MobileNetV2. Int J Ophthalmol 2024; 17:1184-1192. [PMID: 39026919 PMCID: PMC11246929 DOI: 10.18240/ijo.2024.07.02]
Abstract
AIM To evaluate the application of an intelligent diagnostic model for pterygium. METHODS For intelligent diagnosis of pterygium, the attention mechanisms SENet, ECANet, CBAM, and Self-Attention were each fused with the lightweight MobileNetV2 structure to construct a three-class model. The study used 1220 anterior segment images, covering three pterygium categories, provided by the Eye Hospital of Nanjing Medical University. Conventional classification models (VGG16, ResNet50, MobileNetV2, and EfficientNetB7) were trained on the same dataset for comparison. Model performance was evaluated on 470 anterior segment test images in terms of accuracy, Kappa value, test time, sensitivity, specificity, area under the curve (AUC), and visual heat maps. RESULTS The MobileNetV2+Self-Attention model, with a model size of 281 MB, reached an accuracy of 92.77% and a Kappa value of 88.92%. Test time was 9 ms/image on the server and 138 ms/image on a local computer. Sensitivity, specificity, and AUC were 99.47%, 100%, and 100%, respectively, for normal anterior segment images; 88.30%, 95.32%, and 96.70% for images in the observation period; and 88.18%, 94.44%, and 97.30% for images in the surgery period. CONCLUSION The developed model is lightweight and can be used not only for detection but also for assessing the severity of pterygium.
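One of the fused attention mechanisms, SENet-style squeeze-and-excitation, is a small computation at its core: pool each channel, pass the pooled vector through a two-layer bottleneck, and rescale the channels by the sigmoid output. A NumPy sketch with illustrative shapes (not the paper's implementation, and the weights here are random stand-ins):

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Channel attention in the SENet style: global-average-pool each
    channel, run a two-layer bottleneck, rescale channels by the result."""
    # feature_map has shape (H, W, C)
    squeeze = feature_map.mean(axis=(0, 1))        # (C,) global average pool
    hidden = np.maximum(0.0, squeeze @ w1)         # ReLU bottleneck, (C // r,)
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid gate, (C,)
    return feature_map * scale                     # broadcast over H and W

rng = np.random.default_rng(0)
C, r = 8, 2                                        # channels, reduction ratio
fmap = rng.standard_normal((4, 4, C))
w1 = rng.standard_normal((C, C // r))
w2 = rng.standard_normal((C // r, C))
out = squeeze_excite(fmap, w1, w2)
```

The gate lies in (0, 1), so the block can only attenuate channels, which is what lets the network emphasize lesion-bearing feature maps.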
Affiliation(s)
- Mao-Nian Wu
  - School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang Province, China
  - Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou 313000, Zhejiang Province, China
- Kai He
  - School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang Province, China
  - School of Mathematical Information, Shaoxing University, Shaoxing 312000, Zhejiang Province, China
- Yi-Bei Yu
  - School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang Province, China
- Bo Zheng
  - School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang Province, China
  - Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou 313000, Zhejiang Province, China
- Shao-Jun Zhu
  - School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang Province, China
  - Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou 313000, Zhejiang Province, China
- Xiang-Qian Hong
  - Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China
- Wen-Qun Xi
  - Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China
- Zhe Zhang
  - Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China
3
Coyner AS, Young BK, Ostmo SR, Grigorian F, Ells A, Hubbard B, Rodriguez SH, Rishi P, Miller AM, Bhatt AR, Agarwal-Sinha S, Sears J, Chan RVP, Chiang MF, Kalpathy-Cramer J, Binenbaum G, Campbell JP. Use of an Artificial Intelligence-Generated Vascular Severity Score Improved Plus Disease Diagnosis in Retinopathy of Prematurity. Ophthalmology 2024:S0161-6420(24)00339-7. [PMID: 38866367 DOI: 10.1016/j.ophtha.2024.06.006]
Abstract
PURPOSE To evaluate whether providing clinicians with an artificial intelligence (AI)-based vascular severity score (VSS) improves consistency in the diagnosis of plus disease in retinopathy of prematurity (ROP). DESIGN Multireader diagnostic accuracy imaging study. PARTICIPANTS Eleven ROP experts, 9 of whom had been in practice for 10 years or more. METHODS RetCam (Natus Medical Incorporated) fundus images were obtained from premature infants during routine ROP screening as part of the Imaging and Informatics in ROP study between January 2012 and July 2020. From all available examinations, a subset of 150 eye examinations from 110 infants were selected for grading. An AI-based VSS was assigned to each set of images using the i-ROP DL system (Siloam Vision). The clinicians were asked to diagnose plus disease for each examination and to assign an estimated VSS (range, 1-9) at baseline, and then again 1 month later with AI-based VSS assistance. A reference standard diagnosis (RSD) was assigned to each eye examination from the Imaging and Informatics in ROP study based on 3 masked expert labels and the ophthalmoscopic diagnosis. MAIN OUTCOME MEASURES Mean linearly weighted κ value for plus disease diagnosis compared with RSD. Area under the receiver operating characteristic curve (AUC) and area under the precision-recall curve (AUPR) for labels 1 through 9 compared with RSD for plus disease. RESULTS Expert agreement improved significantly, from substantial (κ value, 0.69 [0.59, 0.75]) to near perfect (κ value, 0.81 [0.71, 0.86]), when AI-based VSS was integrated. Additionally, a significant improvement in plus disease discrimination was achieved as measured by mean AUC (from 0.94 [95% confidence interval (CI), 0.92-0.96] to 0.98 [95% CI, 0.96-0.99]; difference, 0.04 [95% CI, 0.01-0.06]) and AUPR (from 0.86 [95% CI, 0.81-0.90] to 0.95 [95% CI, 0.91-0.97]; difference, 0.09 [95% CI, 0.03-0.14]). CONCLUSIONS Providing ROP clinicians with an AI-based measurement of vascular severity in ROP was associated with both improved plus disease diagnosis and improved continuous severity labeling as compared with an RSD for plus disease. If implemented in practice, AI-based VSS could reduce interobserver variability and could standardize treatment for infants with ROP. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
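The linearly weighted κ used as the main outcome measure can be computed directly from two graders' ordinal label vectors; a minimal sketch, assuming labels coded 0..n-1 (illustrative data, not the study's):

```python
import numpy as np

def linearly_weighted_kappa(a, b, n_levels):
    """Cohen's kappa with linear weights for ordinal labels in [0, n_levels)."""
    a, b = np.asarray(a), np.asarray(b)
    observed = np.zeros((n_levels, n_levels))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()
    # Chance agreement from the marginal label distributions
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    levels = np.arange(n_levels)
    # Linear penalty grows with the distance between the two labels
    weights = np.abs(levels[:, None] - levels[None, :]) / (n_levels - 1)
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Perfect agreement between two graders yields kappa = 1.0
print(linearly_weighted_kappa([0, 1, 2, 1], [0, 1, 2, 1], 3))
```

Unlike unweighted κ, near-miss disagreements (e.g., VSS 4 vs 5) are penalized less than distant ones, which suits a 1-9 severity scale.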
Affiliation(s)
- Aaron S Coyner
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Benjamin K Young
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Susan R Ostmo
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Florin Grigorian
  - Arkansas Children's Hospital, University of Arkansas for Medical Sciences, Little Rock, Arkansas
- Anna Ells
  - Calgary Retina Consultants, University of Calgary, Calgary, Alberta, Canada
- Baker Hubbard
  - Emory Eye Center, Emory University School of Medicine, Atlanta, Georgia
- Sarah H Rodriguez
  - Department of Ophthalmology and Visual Science, University of Chicago, Chicago, Illinois
- Pukhraj Rishi
  - Truhlsen Eye Institute, University of Nebraska Medical Centre, Omaha, Nebraska
- Aaron M Miller
  - Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, Texas
- Amit R Bhatt
  - Department of Ophthalmology, Texas Children's Hospital, Houston, Texas
- Jonathan Sears
  - Cole Eye Institute, The Cleveland Clinic, Cleveland, Ohio
- R V Paul Chan
  - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Michael F Chiang
  - National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Jayashree Kalpathy-Cramer
  - National Eye Institute, National Institutes of Health, Bethesda, Maryland
  - Department of Ophthalmology, University of Colorado School of Medicine, Aurora, Colorado
- Gil Binenbaum
  - Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- J Peter Campbell
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
4
Coyner AS, Murickan T, Oh MA, Young BK, Ostmo SR, Singh P, Chan RVP, Moshfeghi DM, Shah PK, Venkatapathy N, Chiang MF, Kalpathy-Cramer J, Campbell JP. Multinational External Validation of Autonomous Retinopathy of Prematurity Screening. JAMA Ophthalmol 2024; 142:327-335. [PMID: 38451496 PMCID: PMC10921347 DOI: 10.1001/jamaophthalmol.2024.0045]
Abstract
Importance Retinopathy of prematurity (ROP) is a leading cause of blindness in children, with significant disparities in outcomes between high-income and low-income countries, due in part to insufficient access to ROP screening. Objective To evaluate how well autonomous artificial intelligence (AI)-based ROP screening can detect more-than-mild ROP (mtmROP) and type 1 ROP. Design, Setting, and Participants This diagnostic study evaluated the performance of an AI algorithm, trained and calibrated using 2530 examinations from 843 infants in the Imaging and Informatics in Retinopathy of Prematurity (i-ROP) study, on 2 external datasets (6245 examinations from 1545 infants in the Stanford University Network for Diagnosis of ROP [SUNDROP] and 5635 examinations from 2699 infants in the Aravind Eye Care Systems [AECS] telemedicine programs). Data were taken from 11 and 48 neonatal care units in the US and India, respectively. Data were collected from January 2012 to July 2021 and analyzed from July to December 2023. Exposures An image-processing pipeline was created using deep learning to autonomously identify mtmROP and type 1 ROP in eye examinations performed via telemedicine. Main Outcomes and Measures The area under the receiver operating characteristic curve (AUROC) as well as sensitivity and specificity for detection of mtmROP and type 1 ROP at the eye-examination and patient levels. Results The prevalence of mtmROP and type 1 ROP was 5.9% (91 of 1545) and 1.2% (18 of 1545), respectively, in the SUNDROP dataset and 6.2% (168 of 2699) and 2.5% (68 of 2699) in the AECS dataset. Examination-level AUROCs for mtmROP and type 1 ROP were 0.896 and 0.985, respectively, in the SUNDROP dataset and 0.920 and 0.982 in the AECS dataset. At the examination level, detection sensitivity was high (SUNDROP: mtmROP, 83.5%; 95% CI, 76.6-87.7; type 1 ROP, 82.2%; 95% CI, 81.2-83.1; AECS: mtmROP, 80.8%; 95% CI, 76.2-84.9; type 1 ROP, 87.8%; 95% CI, 86.8-88.7). At the patient level, all infants who developed type 1 ROP screened positive (SUNDROP: 100%; 95% CI, 81.4-100; AECS: 100%; 95% CI, 94.7-100) prior to diagnosis. Conclusions and Relevance Where and when ROP telemedicine programs can be implemented, autonomous ROP screening may be an effective force multiplier for secondary prevention of ROP.
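The examination-level AUROC reported above has a simple rank interpretation: the probability that a randomly chosen positive examination receives a higher score than a randomly chosen negative one, with ties counted as half. A self-contained sketch (generic illustration, not the study's evaluation code):

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs where the positive outscores the negative,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A model that ranks every diseased eye above every healthy eye scores 1.0
print(auroc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))
```

The pairwise form is O(P*N) but needs no thresholds or curve construction, which makes it handy for sanity-checking a reported AUROC on small validation sets.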
Affiliation(s)
- Aaron S. Coyner
  - Casey Eye Institute, Oregon Health & Science University, Portland
- Tom Murickan
  - Casey Eye Institute, Oregon Health & Science University, Portland
- Minn A. Oh
  - Casey Eye Institute, Oregon Health & Science University, Portland
- Susan R. Ostmo
  - Casey Eye Institute, Oregon Health & Science University, Portland
- Praveer Singh
  - Ophthalmology, University of Colorado School of Medicine, Aurora
- R. V. Paul Chan
  - Illinois Eye and Ear Infirmary, University of Illinois at Chicago
- Darius M. Moshfeghi
  - Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, California
- Parag K. Shah
  - Pediatric Retina and Ocular Oncology, Aravind Eye Hospital, Coimbatore, India
- Michael F. Chiang
  - National Eye Institute, National Institutes of Health, Bethesda, Maryland
  - National Library of Medicine, National Institutes of Health, Bethesda, Maryland
5
Hoyek S, Cruz NFSD, Patel NA, Al-Khersan H, Fan KC, Berrocal AM. Identification of novel biomarkers for retinopathy of prematurity in preterm infants by use of innovative technologies and artificial intelligence. Prog Retin Eye Res 2023; 97:101208. [PMID: 37611892 DOI: 10.1016/j.preteyeres.2023.101208]
Abstract
Retinopathy of prematurity (ROP) is a leading cause of preventable vision loss in preterm infants. While appropriate screening is crucial for early identification and treatment of ROP, current screening guidelines remain limited by inter-examiner variability in screening modalities, the absence of local ROP screening protocols in some settings, a paucity of resources, and the increased survival of younger and smaller infants. This review summarizes the advancements and challenges of current innovative technologies, artificial intelligence (AI), and predictive biomarkers for the diagnosis and management of ROP. We provide a contemporary overview of AI-based models for detecting ROP and assessing its severity, progression, and response to treatment. To address the transition from experimental settings to real-world clinical practice, challenges to the clinical implementation of AI for ROP are reviewed and potential solutions are proposed. The use of optical coherence tomography (OCT) and OCT angiography (OCTA) is also explored, offering evaluation of subclinical ROP characteristics that are often imperceptible on fundus examination. Furthermore, we explore several potential biomarkers that could reduce the need for invasive procedures, enhance diagnostic accuracy, and improve treatment efficacy. Finally, we emphasize the need for a symbiotic integration of biologic biomarkers, imaging biomarkers, and AI in ROP screening, where the robustness of biomarkers in early disease detection is complemented by the predictive precision of AI algorithms.
Affiliation(s)
- Sandra Hoyek
  - Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Natasha F S da Cruz
  - Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Nimesh A Patel
  - Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Hasenin Al-Khersan
  - Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Kenneth C Fan
  - Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Audina M Berrocal
  - Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
6
Li L, Lin D, Lin Z, Li M, Lian Z, Zhao L, Wu X, Liu L, Liu J, Wei X, Luo M, Zeng D, Yan A, Iao WC, Shang Y, Xu F, Xiang W, He M, Fu Z, Wang X, Deng Y, Fan X, Ye Z, Wei M, Zhang J, Liu B, Li J, Ding X, Lin H. DeepQuality improves infant retinopathy screening. NPJ Digit Med 2023; 6:192. [PMID: 37845275 PMCID: PMC10579317 DOI: 10.1038/s41746-023-00943-3]
Abstract
Image quality variation is a prominent cause of performance degradation for intelligent disease diagnostic models in clinical applications. Quality issues are particularly prominent in infantile fundus photography due to poor patient cooperation, which poses a high risk of misdiagnosis. Here, we developed a deep learning-based image quality assessment and enhancement system (DeepQuality) for infantile fundus images to improve infant retinopathy screening. DeepQuality can accurately detect various quality defects concerning integrity, illumination, and clarity, with area under the curve (AUC) values ranging from 0.933 to 0.995, and can comprehensively score the overall quality of each fundus photograph. By analyzing 2,015,758 infantile fundus photographs from real-world settings using DeepQuality, we found that 58.3% had varying degrees of quality defects, with large variations across regions and hospital categories. Additionally, DeepQuality provides quality enhancement based on the results of its quality assessment. After enhancement, clinicians' diagnostic performance for retinopathy of prematurity (ROP) improved significantly. Moreover, integrating DeepQuality with AI diagnostic models effectively improves model performance for detecting ROP. This study may serve as an important reference for the future development of other image-based intelligent disease screening systems.
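DeepQuality's defect detectors are learned, but the clarity axis has a classical hand-crafted analogue: the variance of the image Laplacian, which drops toward zero as an image blurs. A hypothetical NumPy proxy for intuition only, not part of the DeepQuality system:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian response: a classical sharpness
    proxy. Low values suggest a blurred, low-clarity photograph."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(0)
detailed = rng.random((64, 64))   # image with high-frequency detail
flat = np.zeros((64, 64))         # featureless image: no detail at all
```

A screening pipeline could threshold such a score to flag images for recapture before they ever reach a diagnostic model, which is the same gating role DeepQuality plays with learned features.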
Affiliation(s)
- Longhui Li
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Duoru Lin
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Zhenzhe Lin
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Mingyuan Li
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Zhangkai Lian
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Lanqin Zhao
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xiaohang Wu
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Lixue Liu
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Jiali Liu
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xiaoyue Wei
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Mingjie Luo
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Danqi Zeng
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Anqi Yan
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Wai Cheng Iao
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Yuanjun Shang
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Fabao Xu
  - Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
- Wei Xiang
  - Department of Clinical Laboratory Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Muchen He
  - Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Zhe Fu
  - Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Xueyu Wang
  - Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yaru Deng
  - Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Xinyan Fan
  - Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Zhijun Ye
  - Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Meirong Wei
  - Department of Ophthalmology, Maternal and Children's Hospital, Liuzhou, Guangxi, China
- Jianping Zhang
  - Department of Ophthalmology, Maternal and Children's Hospital, Liuzhou, Guangxi, China
- Baohai Liu
  - Department of Ophthalmology, Maternal and Children's Hospital, Linyi, Shandong, China
- Jianqiao Li
  - Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
- Xiaoyan Ding
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Haotian Lin
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  - Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China
  - Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
7
Ramanathan A, Athikarisamy SE, Lam GC. Artificial intelligence for the diagnosis of retinopathy of prematurity: A systematic review of current algorithms. Eye (Lond) 2023; 37:2518-2526. [PMID: 36577806 PMCID: PMC10397194 DOI: 10.1038/s41433-022-02366-y]
Abstract
BACKGROUND/OBJECTIVES With the increasing survival of premature infants, there is an increased demand for adequate retinopathy of prematurity (ROP) services. Wide-field digital retinal imaging (WFDRI) and artificial intelligence (AI) have shown promise in the field of ROP and have the potential to improve diagnostic performance and reduce the workload of screening ophthalmologists. The aim of this review is to systematically review and summarize the diagnostic characteristics of existing deep learning algorithms. SUBJECTS/METHODS Two authors independently searched the literature, and studies using a deep learning system on retinal imaging were included. Data were extracted, assessed, and reported following PRISMA guidelines. RESULTS Twenty-seven studies were included in this review. Nineteen used AI systems to diagnose ROP, classify its staging, diagnose the presence of pre-plus or plus disease, or assess the quality of retinal images. The included studies reported a sensitivity of 71%-100%, specificity of 74%-99%, and area under the curve of 91%-99% for the primary outcome of the study. AI techniques were comparable to assessment by ophthalmologists in overall accuracy and sensitivity. Eight studies evaluated vascular severity scores and were able to accurately differentiate severity using an automated classification score. CONCLUSION Artificial intelligence for ROP diagnosis is a growing field, and many potential utilities have already been identified, including detection of plus disease, staging of disease, and a new automated severity score. AI has a role as an adjunct to clinical assessment; however, there is currently insufficient evidence to support its use as a sole diagnostic tool.
Affiliation(s)
- Ashwin Ramanathan
  - Department of Paediatrics, Perth Children's Hospital, Perth, Australia
- Sam Ebenezer Athikarisamy
  - Department of Neonatology, Perth Children's Hospital, Perth, Australia
  - School of Medicine, University of Western Australia, Crawley, Australia
- Geoffrey C Lam
  - Department of Ophthalmology, Perth Children's Hospital, Perth, Australia
  - Centre for Ophthalmology and Visual Science, University of Western Australia, Crawley, Australia
8
Shah S, Slaney E, VerHage E, Chen J, Dias R, Abdelmalik B, Weaver A, Neu J. Application of Artificial Intelligence in the Early Detection of Retinopathy of Prematurity: Review of the Literature. Neonatology 2023; 120:558-565. [PMID: 37490881 DOI: 10.1159/000531441]
Abstract
Retinopathy of prematurity (ROP) is a potentially blinding disease in premature neonates that requires a skilled workforce for diagnosis, monitoring, and treatment. Artificial intelligence is a valuable tool that clinicians employ to reduce the screening burden on ophthalmologists and neonatologists and improve the detection of treatment-requiring ROP. Neural networks such as convolutional neural networks and deep learning (DL) systems are used to calculate a vascular severity score (VSS), an important component of various risk models. These DL systems have been validated in various studies, which are reviewed here. Most importantly, we discuss a promising study that validated a DL system that could predict the development of ROP despite a lack of clinical evidence of disease on the first retinal examination. Additionally, there is promise in utilizing these systems through telemedicine in more rural and resource-limited areas. This review highlights the value of these DL systems in early ROP diagnosis.
Affiliation(s)
- Shivani Shah
  - College of Medicine, University of Florida, Gainesville, Florida, USA
- Elizabeth Slaney
  - College of Medicine, University of Florida, Gainesville, Florida, USA
- Erik VerHage
  - Department of Pediatrics, University of Florida, Gainesville, Florida, USA
- Jinghua Chen
  - Department of Ophthalmology, University of Florida, Gainesville, Florida, USA
- Raquel Dias
  - Department of Microbiology and Cell Science, University of Florida, Gainesville, Florida, USA
- Bishoy Abdelmalik
  - College of Medicine, University of Florida, Gainesville, Florida, USA
- Alex Weaver
  - College of Medicine, University of Florida, Gainesville, Florida, USA
- Josef Neu
  - Department of Pediatrics, University of Florida, Gainesville, Florida, USA
9
Mellak Y, Achim A, Ward A, Nicholson L, Descombes X. A machine learning framework for the quantification of experimental uveitis in murine OCT. Biomed Opt Express 2023; 14:3413-3432. [PMID: 37497491 PMCID: PMC10368067 DOI: 10.1364/boe.489271]
Abstract
This paper presents methods for the detection and assessment of non-infectious uveitis, a leading cause of vision loss in working-age adults. In the first part, we propose a classification model that can accurately predict the presence of uveitis and differentiate between stages of the disease using optical coherence tomography (OCT) images. We utilize the Grad-CAM visualization technique to elucidate the decision-making process of the classifier and gain deeper insight into its results. In the second part, we apply and compare three methods for the detection of detached particles in the retina that are indicative of uveitis: a fully supervised detection method, a marked point process (MPP) technique, and a weakly supervised segmentation that produces per-pixel masks as output. The segmentation model is used as the backbone for a fully automated pipeline that can segment small uveitis particles in two-dimensional (2-D) slices of the retina, reconstruct the volume, and produce centroids as a point distribution in space. The number of particles in each retina is used to grade the disease, and point-process analysis of the centroids in three dimensions (3-D) reveals clustering patterns in the distribution of particles on the retina.
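Counting particles and extracting centroids from a per-pixel mask reduces to connected-component labelling. A flood-fill sketch for one 2-D slice (the pipeline's segmentation itself is learned; this labelling step is only an illustration):

```python
import numpy as np

def label_particles(mask):
    """4-connected component labelling of a binary mask via iterative
    flood fill; returns (particle count, list of (row, col) centroids)."""
    mask = mask.astype(bool).copy()
    centroids = []
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c]:
                # New particle: flood-fill to collect all of its pixels
                stack, pixels = [(r, c)], []
                mask[r, c] = False
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and mask[ny, nx]:
                            mask[ny, nx] = False
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return len(centroids), centroids
```

Stacking the per-slice centroids with their slice index gives the 3-D point set on which the paper's clustering analysis operates.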
Affiliation(s)
- Youness Mellak
- Université Côte d’Azur, INRIA, CNRS, I3S, Sophia Antipolis, France
- Alin Achim
- University of Bristol, Bristol, United Kingdom
- Amy Ward
- University of Bristol, Bristol, United Kingdom
- Xavier Descombes
- Université Côte d’Azur, INRIA, CNRS, I3S, Sophia Antipolis, France
10
Coyner AS, Singh P, Brown JM, Ostmo S, Chan RP, Chiang MF, Kalpathy-Cramer J, Campbell JP. Association of Biomarker-Based Artificial Intelligence With Risk of Racial Bias in Retinal Images. JAMA Ophthalmol 2023; 141:543-552. [PMID: 37140902] [PMCID: PMC10160994] [DOI: 10.1001/jamaophthalmol.2023.1310]
Abstract
Importance Although race is a social construct, it is associated with variations in skin and retinal pigmentation. Image-based medical artificial intelligence (AI) algorithms that use images of these organs have the potential to learn features associated with self-reported race (SRR), which increases the risk of racially biased performance in diagnostic tasks; understanding whether this information can be removed, without affecting the performance of AI algorithms, is critical in reducing the risk of racial bias in medical AI. Objective To evaluate whether converting color fundus photographs to retinal vessel maps (RVMs) of infants screened for retinopathy of prematurity (ROP) removes the risk for racial bias. Design, Setting, and Participants The retinal fundus images (RFIs) of neonates with parent-reported Black or White race were collected for this study. A u-net, a convolutional neural network (CNN) that provides precise segmentation for biomedical images, was used to segment the major arteries and veins in RFIs into grayscale RVMs, which were subsequently thresholded, binarized, and/or skeletonized. CNNs were trained with patients' SRR labels on color RFIs, raw RVMs, and thresholded, binarized, or skeletonized RVMs. Study data were analyzed from July 1 to September 28, 2021. Main Outcomes and Measures Area under the precision-recall curve (AUC-PR) and area under the receiver operating characteristic curve (AUROC) at both the image and eye level for classification of SRR. Results A total of 4095 RFIs were collected from 245 neonates with parent-reported Black (94 [38.4%]; mean [SD] age, 27.2 [2.3] weeks; 55 majority sex [58.5%]) or White (151 [61.6%]; mean [SD] age, 27.6 [2.3] weeks, 80 majority sex [53.0%]) race. CNNs inferred SRR from RFIs nearly perfectly (image-level AUC-PR, 0.999; 95% CI, 0.999-1.000; infant-level AUC-PR, 1.000; 95% CI, 0.999-1.000). 
Raw RVMs were nearly as informative as color RFIs (image-level AUC-PR, 0.938; 95% CI, 0.926-0.950; infant-level AUC-PR, 0.995; 95% CI, 0.992-0.998). Ultimately, CNNs were able to learn whether RFIs or RVMs were from Black or White infants regardless of whether images contained color, vessel segmentation brightness differences were nullified, or vessel segmentation widths were uniform. Conclusions and Relevance Results of this diagnostic study suggest that it can be very challenging to remove information relevant to SRR from fundus photographs. As a result, AI algorithms trained on fundus photographs have the potential for biased performance in practice, even if based on biomarkers rather than raw images. Regardless of the methodology used for training AI, evaluating performance in relevant subpopulations is critical.
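Area under the precision-recall curve, the study's headline metric, can be computed without any libraries; below is a minimal sketch of the standard average-precision estimator of AUC-PR (illustrative only, not the authors' evaluation code).

```python
def average_precision(y_true, y_score):
    """Average precision: precision at each positive hit, averaged over all
    positives -- a common step-wise estimator of the area under the
    precision-recall curve."""
    pairs = sorted(zip(y_score, y_true), key=lambda p: p[0], reverse=True)
    n_pos = sum(y_true)
    tp, ap = 0, 0.0
    for rank, (_, label) in enumerate(pairs, start=1):
        if label:
            tp += 1
            ap += tp / rank  # precision at this recall step
    return ap / n_pos
```

A perfect ranking (every positive scored above every negative) yields 1.0, which is why near-1.0 AUC-PR values here indicate that self-reported race remains almost fully recoverable from the images.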
Affiliation(s)
- Aaron S. Coyner
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- Praveer Singh
- Radiology, MGH/Harvard Medical School, Charlestown, Massachusetts
- MGH & BWH Center for Clinical Data Science, Boston, Massachusetts
- James M. Brown
- School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- R.V. Paul Chan
- Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago
- Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Jayashree Kalpathy-Cramer
- Radiology, MGH/Harvard Medical School, Charlestown, Massachusetts
- MGH & BWH Center for Clinical Data Science, Boston, Massachusetts
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
11
Jayanna S, Padhi TR, Nedhina EK, Agarwal K, Jalali S. Color fundus imaging in retinopathy of prematurity screening: Present and future. Indian J Ophthalmol 2023; 71:1777-1782. [PMID: 37203030] [PMCID: PMC10391467] [DOI: 10.4103/ijo.ijo_2913_22]
Abstract
The advent of pediatric handheld fundus cameras such as the RetCam, 3netra Forus, and Phoenix ICON pediatric retinal camera has aided effective screening of retinopathy of prematurity (ROP), especially in countries with a limited number of trained specialists. The recent arrival of smartphone-based cameras has made pediatric fundus photography even more affordable and portable. Future advances such as ultra-wide-field fundus cameras, trans-pars-planar illumination pediatric fundus cameras, artificial intelligence and deep learning algorithms, and handheld SS-OCTA can enable more accurate imaging and documentation. This article summarizes existing and upcoming imaging modalities in detail, including their features, advantages, challenges, and effectiveness, which can help in implementing telescreening as a standard ROP screening protocol across developing as well as developed countries.
Affiliation(s)
- Sushma Jayanna
- Srimati Kanuri Santhamma Center for Vitreoretinal Diseases, Kallam Anji Reddy Campus, LV Prasad Eye Institute, Hyderabad, Telangana; Newborn Eye Health Alliance (NEHA), L. V. Prasad Eye Institute Network; Child Sight Institute, L. V. Prasad Eye Institute Network, Bhubaneshwar, Odisha, India
- Tapas R Padhi
- Department of Vitreo Retina, Mithun Tulsi Chanrai Campus, LV Prasad Eye Institute, Bhubaneshwar, Odisha, India
- E K Nedhina
- Department of Vitreo Retina, Nethra Jyothi Advanced Eye Care, Taliparamba, Kannur, Kerala, India
- Komal Agarwal
- Srimati Kanuri Santhamma Center for Vitreoretinal Diseases, Kallam Anji Reddy Campus, LV Prasad Eye Institute, Hyderabad, Telangana; Newborn Eye Health Alliance (NEHA), L. V. Prasad Eye Institute Network; Child Sight Institute, L. V. Prasad Eye Institute Network, Bhubaneshwar, Odisha, India
- Subhadra Jalali
- Srimati Kanuri Santhamma Center for Vitreoretinal Diseases, Kallam Anji Reddy Campus, LV Prasad Eye Institute, Hyderabad, Telangana; Newborn Eye Health Alliance (NEHA), L. V. Prasad Eye Institute Network; Child Sight Institute, L. V. Prasad Eye Institute Network, Bhubaneshwar, Odisha, India
12
GabROP: Gabor Wavelets-Based CAD for Retinopathy of Prematurity Diagnosis via Convolutional Neural Networks. Diagnostics (Basel) 2023; 13:171. [PMID: 36672981] [PMCID: PMC9857608] [DOI: 10.3390/diagnostics13020171]
Abstract
One of the most serious and dangerous ocular problems in premature infants is retinopathy of prematurity (ROP), a proliferative vascular disease. Ophthalmologists can use automatic computer-assisted diagnostic (CAD) tools to help them make a safe, accurate, and low-cost diagnosis of ROP. All previous CAD tools for ROP diagnosis use the original fundus images. Unfortunately, learning a discriminative representation from ROP-related fundus images is difficult. Textural analysis techniques, such as Gabor wavelets (GW), can expose significant texture information that helps artificial intelligence (AI) based models improve diagnostic accuracy. In this paper, an effective and automated CAD tool, namely GabROP, based on GW and multiple deep learning (DL) models is proposed. Initially, GabROP analyzes fundus images using GW and generates several sets of GW images. Next, these sets of images are used to train three convolutional neural network (CNN) models independently; the original fundus images are also used to train these networks. Using the discrete wavelet transform (DWT), texture features retrieved from each CNN trained on the various sets of GW images are combined to create a textural-spectral-temporal representation. Afterward, for each CNN, these features are concatenated with spatial deep features obtained from the original fundus images. Finally, the concatenated features of all three CNNs are fused using the discrete cosine transform (DCT) to reduce the feature dimensionality produced by the fusion process. The results show that GabROP is accurate and efficient for ophthalmologists, and its effectiveness is compared with recently developed ROP diagnostic techniques. Given GabROP's superior performance compared with competing tools, ophthalmologists may be able to identify ROP more reliably and precisely, which could reduce diagnostic effort and examination time.
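As a concrete illustration of the Gabor wavelet analysis the pipeline starts from, the snippet below builds the real part of a 2-D Gabor kernel from its standard definition; the parameter values are arbitrary examples, and an actual system would convolve each fundus image with a bank of such kernels at several orientations and scales to produce the GW image sets.

```python
import math

def gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0, gamma=0.5):
    """Real part of a 2-D Gabor filter: a Gaussian envelope modulating a
    cosine carrier oriented at angle `theta` (parameters are illustrative)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            row.append(envelope * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel
```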
13
Eilts SK, Pfeil JM, Poschkamp B, Krohne TU, Eter N, Barth T, Guthoff R, Lagrèze W, Grundel M, Bründer MC, Busch M, Kalpathy-Cramer J, Chiang MF, Chan RVP, Coyner AS, Ostmo S, Campbell JP, Stahl A. Assessment of Retinopathy of Prematurity Regression and Reactivation Using an Artificial Intelligence-Based Vascular Severity Score. JAMA Netw Open 2023; 6:e2251512. [PMID: 36656578] [PMCID: PMC9857423] [DOI: 10.1001/jamanetworkopen.2022.51512]
Abstract
IMPORTANCE One of the biggest challenges when using anti-vascular endothelial growth factor (VEGF) agents to treat retinopathy of prematurity (ROP) is the need to perform long-term follow-up examinations to identify eyes at risk of ROP reactivation requiring retreatment. OBJECTIVE To evaluate whether an artificial intelligence (AI)-based vascular severity score (VSS) can be used to analyze ROP regression and reactivation after anti-VEGF treatment and potentially identify eyes at risk of ROP reactivation requiring retreatment. DESIGN, SETTING, AND PARTICIPANTS This prognostic study was a secondary analysis of posterior pole fundus images collected during the multicenter, double-blind, investigator-initiated Comparing Alternative Ranibizumab Dosages for Safety and Efficacy in Retinopathy of Prematurity (CARE-ROP) randomized clinical trial, which compared 2 different doses of ranibizumab (0.12 mg vs 0.20 mg) for the treatment of ROP. The CARE-ROP trial screened and enrolled infants between September 5, 2014, and July 14, 2016. A total of 1046 wide-angle fundus images obtained from 19 infants at predefined study time points were analyzed. The analyses of VSS were performed between January 20, 2021, and November 18, 2022. INTERVENTIONS An AI-based algorithm assigned a VSS between 1 (normal) and 9 (most severe) to fundus images. MAIN OUTCOMES AND MEASURES Analysis of VSS in infants with ROP over time and VSS comparisons between the 2 treatment groups (0.12 mg vs 0.20 mg of ranibizumab) and between infants who did and did not receive retreatment for ROP reactivation. RESULTS Among 19 infants with ROP in the CARE-ROP randomized clinical trial, the median (range) postmenstrual age at first treatment was 36.4 (34.7-39.7) weeks; 10 infants (52.6%) were male, and 18 (94.7%) were White. The mean (SD) VSS was 6.7 (1.9) at baseline and significantly decreased to 2.7 (1.9) at week 1 (P < .001) and 2.9 (1.3) at week 4 (P < .001). 
The mean (SD) VSS of infants with ROP reactivation requiring retreatment was 6.5 (1.9) at the time of retreatment, which was significantly higher than the VSS at week 4 (P < .001). No significant difference was found in VSS between the 2 treatment groups, but the change in VSS between baseline and week 1 was higher for infants who later required retreatment (mean [SD], 7.8 [1.3] at baseline vs 1.7 [0.7] at week 1) vs infants who did not (mean [SD], 6.4 [1.9] at baseline vs 3.0 [2.0] at week 1). In eyes requiring retreatment, higher baseline VSS was correlated with earlier time of retreatment (Pearson r = -0.9997; P < .001). CONCLUSIONS AND RELEVANCE In this study, VSS decreased after ranibizumab treatment, consistent with clinical disease regression. In cases of ROP reactivation requiring retreatment, VSS increased again to values comparable with baseline values. In addition, a greater change in VSS during the first week after initial treatment was found to be associated with a higher risk of later ROP reactivation, and high baseline VSS was correlated with earlier retreatment. These findings may have implications for monitoring ROP regression and reactivation after anti-VEGF treatment.
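The baseline-VSS-versus-retreatment-time relationship reported above is a plain Pearson correlation; a minimal sketch follows, with made-up example numbers standing in for the trial data.

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: higher baseline VSS, earlier retreatment week,
# giving a strongly negative r as in the study's reported direction.
baseline_vss = [8.9, 7.8, 6.5, 5.9]
retreatment_week = [8, 10, 13, 15]
```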
Affiliation(s)
- Sonja K. Eilts
- Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
- Johanna M. Pfeil
- Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
- Broder Poschkamp
- Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
- Tim U. Krohne
- Department of Ophthalmology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Nicole Eter
- Department of Ophthalmology, University of Muenster Medical Center, Muenster, Germany
- Teresa Barth
- Department of Ophthalmology, University of Regensburg, Regensburg, Germany
- Rainer Guthoff
- Department of Ophthalmology, Faculty of Medicine, University of Düsseldorf, Düsseldorf, Germany
- Wolf Lagrèze
- Eye Center, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Milena Grundel
- Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
- Martin Busch
- Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
- Jayashree Kalpathy-Cramer
- Center for Clinical Data Science, Massachusetts General Hospital, Brigham and Women’s Hospital, Boston
- Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- R. V. Paul Chan
- Department of Ophthalmology, University of Illinois Chicago, Chicago
- Aaron S. Coyner
- Casey Eye Institute, Oregon Health & Science University, Portland
- Susan Ostmo
- Casey Eye Institute, Oregon Health & Science University, Portland
- Andreas Stahl
- Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
14
Cole E, Valikodath NG, Al-Khaled T, Bajimaya S, KC S, Chuluunbat T, Munkhuu B, Jonas KE, Chuluunkhuu C, MacKeen LD, Yap V, Hallak J, Ostmo S, Wu WC, Coyner AS, Singh P, Kalpathy-Cramer J, Chiang MF, Campbell JP, Chan RVP. Evaluation of an Artificial Intelligence System for Retinopathy of Prematurity Screening in Nepal and Mongolia. Ophthalmol Sci 2022; 2:100165. [PMID: 36531583] [PMCID: PMC9754980] [DOI: 10.1016/j.xops.2022.100165]
Abstract
PURPOSE To evaluate the performance of a deep learning (DL) algorithm for retinopathy of prematurity (ROP) screening in Nepal and Mongolia. DESIGN Retrospective analysis of prospectively collected clinical data. PARTICIPANTS Clinical information and fundus images were obtained from infants in 2 ROP screening programs in Nepal and Mongolia. METHODS Fundus images were obtained using the Forus 3nethra neo (Forus Health) in Nepal and the RetCam Portable (Natus Medical, Inc.) in Mongolia. The overall severity of ROP was determined from the medical record using the International Classification of ROP (ICROP). The presence of plus disease was determined independently in each image using a reference standard diagnosis. The Imaging and Informatics for ROP (i-ROP) DL algorithm was trained on images from the RetCam to classify plus disease and to assign a vascular severity score (VSS) from 1 through 9. MAIN OUTCOME MEASURES Area under the receiver operating characteristic curve and area under the precision-recall curve for the presence of plus disease or type 1 ROP and association between VSS and ICROP disease category. RESULTS The prevalence of type 1 ROP was found to be higher in Mongolia (14.0%) than in Nepal (2.2%; P < 0.001) in these data sets. In Mongolia (RetCam images), the area under the receiver operating characteristic curve for examination-level plus disease detection was 0.968, and the area under the precision-recall curve was 0.823. In Nepal (Forus images), these values were 0.999 and 0.993, respectively. The ROP VSS was associated with ICROP classification in both data sets (P < 0.001). At the population level, the median VSS was found to be higher in Mongolia (2.7; interquartile range [IQR], 1.3-5.4) than in Nepal (1.9; IQR, 1.2-3.4; P < 0.001).
CONCLUSIONS These data provide preliminary evidence of the effectiveness of the i-ROP DL algorithm for ROP screening in neonatal populations in Nepal and Mongolia using multiple camera systems and are useful for consideration in future clinical implementation of artificial intelligence-based ROP screening in low- and middle-income countries.
Key Words
- Artificial intelligence
- BW, birth weight
- DL, deep learning
- Deep learning
- GA, gestational age
- ICROP, International Classification of Retinopathy of Prematurity
- IQR, interquartile range
- LMIC, low- and middle-income country
- Mongolia
- Nepal
- ROP, retinopathy of prematurity
- RSD, reference standard diagnosis
- Retinopathy of prematurity
- TR, treatment-requiring
- VSS, vascular severity score
- i-ROP, Imaging and Informatics for Retinopathy of Prematurity
Affiliation(s)
- Emily Cole
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Nita G. Valikodath
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Tala Al-Khaled
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Sagun KC
- Helen Keller International, Kathmandu, Nepal
- Bayalag Munkhuu
- National Center for Maternal and Child Health, Ulaanbaatar, Mongolia
- Karyn E. Jonas
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Leslie D. MacKeen
- The Hospital for Sick Children, Toronto, Canada
- Phoenix Technology Group, Pleasanton, California
- Vivien Yap
- Department of Pediatrics, Weill Cornell Medical College, New York, New York
- Joelle Hallak
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Wei-Chi Wu
- Chang Gung Memorial Hospital, Taoyuan, Taiwan, and Chang Gung University, College of Medicine, Taoyuan, Taiwan
- Aaron S. Coyner
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- R. V. Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Correspondence: R. V. Paul Chan, MD, MSc, MBA, Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, 1905 West Taylor Street, Chicago, IL 60612.
15
Wakabayashi T, Patel SN, Campbell JP, Chang EY, Nudleman ED, Yonekawa Y. Advances in retinopathy of prematurity imaging. Saudi J Ophthalmol 2022; 36:243-250. [PMID: 36276248] [PMCID: PMC9583355] [DOI: 10.4103/sjopt.sjopt_20_22]
Abstract
Retinopathy of prematurity (ROP) remains the leading cause of childhood blindness worldwide. Recent advances in ROP imaging have significantly improved our understanding of the pathogenesis and pathophysiological course of ROP, including the acute phase, regression, reactivation, and late complications known as adult ROP. Recent progress includes various contact and noncontact wide-field fundus imaging devices, smartphone-based fundus photography, wide-field fluorescein angiography, handheld optical coherence tomography (OCT) devices for wide-field en face OCT images, and OCT angiography. Images taken by these devices were incorporated into the recently updated guidelines for ROP, the International Classification of Retinopathy of Prematurity, Third Edition (ICROP3). ROP imaging has also allowed the real-world adoption of telemedicine- and artificial intelligence (AI)-based screening. A recent study demonstrated proof of concept that AI has high diagnostic performance for the detection of ROP in real-world screening. Here, we summarize the recent advances in ROP imaging and their application to the screening, diagnosis, and management of ROP.
Affiliation(s)
- Taku Wakabayashi
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Samir N. Patel
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- J. P. Campbell
- Department of Ophthalmology, Oregon Health and Science University, Portland, Oregon, USA
- Eric D. Nudleman
- Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, California, USA
- Yoshihiro Yonekawa
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania, USA. Address for correspondence: Dr. Yoshihiro Yonekawa, Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania, USA. E-mail:
16
Coyner AS, Oh MA, Shah PK, Singh P, Ostmo S, Valikodath NG, Cole E, Al-Khaled T, Bajimaya S, K.C. S, Chuluunbat T, Munkhuu B, Subramanian P, Venkatapathy N, Jonas KE, Hallak JA, Chan RP, Chiang MF, Kalpathy-Cramer J, Campbell JP. External Validation of a Retinopathy of Prematurity Screening Model Using Artificial Intelligence in 3 Low- and Middle-Income Populations. JAMA Ophthalmol 2022; 140:791-798. [PMID: 35797036] [PMCID: PMC9264225] [DOI: 10.1001/jamaophthalmol.2022.2135]
Abstract
Importance Retinopathy of prematurity (ROP) is a leading cause of preventable blindness that disproportionately affects children born in low- and middle-income countries (LMICs). In-person and telemedical screening examinations can reduce this risk but are challenging to implement in LMICs owing to the multitude of at-risk infants and lack of trained ophthalmologists. Objective To implement an ROP risk model using retinal images from a single baseline examination to identify infants who will develop treatment-requiring (TR)-ROP in LMIC telemedicine programs. Design, Setting, and Participants In this diagnostic study conducted from February 1, 2019, to June 30, 2021, retinal fundus images were collected from infants as part of an Indian ROP telemedicine screening program. An artificial intelligence (AI)-derived vascular severity score (VSS) was obtained from images from the first examination after 30 weeks' postmenstrual age. Using 5-fold cross-validation, logistic regression models were trained on 2 variables (gestational age and VSS) for prediction of TR-ROP. The model was externally validated on test data sets from India, Nepal, and Mongolia. Data were analyzed from October 20, 2021, to April 20, 2022. Main Outcomes and Measures Primary outcome measures included sensitivity, specificity, positive predictive value, and negative predictive value for predictions of future occurrences of TR-ROP; the number of weeks before clinical diagnosis when a prediction was made; and the potential reduction in number of examinations required. Results A total of 3760 infants (median [IQR] postmenstrual age, 37 [5] weeks; 1950 male infants [51.9%]) were included in the study. 
The diagnostic model had a sensitivity and specificity, respectively, for each of the data sets as follows: India, 100.0% (95% CI, 87.2%-100.0%) and 63.3% (95% CI, 59.7%-66.8%); Nepal, 100.0% (95% CI, 54.1%-100.0%) and 77.8% (95% CI, 72.9%-82.2%); and Mongolia, 100.0% (95% CI, 93.3%-100.0%) and 45.8% (95% CI, 39.7%-52.1%). With the AI model, infants with TR-ROP were identified a median (IQR) of 2.0 (0-11) weeks before TR-ROP diagnosis in India, 0.5 (0-2.0) weeks before TR-ROP diagnosis in Nepal, and 0 (0-5.0) weeks before TR-ROP diagnosis in Mongolia. If low-risk infants were never screened again, the population could be effectively screened with 45.0% (India, 664/1476), 38.4% (Nepal, 151/393), and 51.3% (Mongolia, 266/519) fewer examinations required. Conclusions and Relevance Results of this diagnostic study suggest that there were 2 advantages to implementation of this risk model: (1) the number of examinations for low-risk infants could be reduced without missing cases of TR-ROP, and (2) high-risk infants could be identified and closely monitored before development of TR-ROP.
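The risk model described above is a two-variable logistic regression on gestational age and VSS. The sketch below shows the general shape of such a model; the coefficients and the screening threshold are hypothetical placeholders (the fitted parameters are not given in this abstract), and a deployed version would tune the threshold for near-perfect sensitivity, as the study reports.

```python
import math

def trop_risk(gestational_age_weeks, vss, b0=4.0, b_ga=-0.35, b_vss=0.55):
    """Probability of developing treatment-requiring ROP under a logistic
    model on gestational age and vascular severity score. The coefficient
    values here are hypothetical placeholders, not the study's fit."""
    z = b0 + b_ga * gestational_age_weeks + b_vss * vss
    return 1.0 / (1.0 + math.exp(-z))

def needs_follow_up(gestational_age_weeks, vss, threshold=0.05):
    """Operate the model as a screening rule: a deliberately low threshold
    favours sensitivity, so treatment-requiring cases are not released."""
    return trop_risk(gestational_age_weeks, vss) >= threshold
```

Infants below the threshold at the baseline examination are the ones whose follow-up examinations the study estimates could be reduced.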
Affiliation(s)
- Aaron S. Coyner
- Casey Eye Institute, Oregon Health & Science University, Portland
- Minn A. Oh
- Casey Eye Institute, Oregon Health & Science University, Portland
- Parag K. Shah
- Pediatric Retina and Ocular Oncology Division, Aravind Eye Hospital, Coimbatore, India
- Praveer Singh
- Massachusetts General Hospital and Brigham and Women’s Hospital Center for Clinical Data Science, Boston, Massachusetts
- Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, Massachusetts
- Susan Ostmo
- Casey Eye Institute, Oregon Health & Science University, Portland
- Nita G. Valikodath
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago
- Emily Cole
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago
- Tala Al-Khaled
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago
- Sagun K.C.
- Helen Keller International, Kathmandu, Nepal
- Bayalag Munkhuu
- National Center for Maternal and Child Health, Ulaanbaatar, Mongolia
- Prema Subramanian
- Pediatric Retina and Ocular Oncology Division, Aravind Eye Hospital, Coimbatore, India
- Karyn E. Jonas
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago
- Joelle A. Hallak
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago
- R.V. Paul Chan
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago
- Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Jayashree Kalpathy-Cramer
- Massachusetts General Hospital and Brigham and Women’s Hospital Center for Clinical Data Science, Boston, Massachusetts
- Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, Massachusetts
17
Campbell JP, Chiang MF, Chen JS, Moshfeghi DM, Nudleman E, Ruambivoonsuk P, Cherwek H, Cheung CY, Singh P, Kalpathy-Cramer J, Ostmo S, Eydelman M, Chan RP, Capone A. Artificial Intelligence for Retinopathy of Prematurity: Validation of a Vascular Severity Scale against International Expert Diagnosis. Ophthalmology 2022; 129:e69-e76. [PMID: 35157950] [PMCID: PMC9232863] [DOI: 10.1016/j.ophtha.2022.02.008]
Abstract
PURPOSE To validate a vascular severity score as an appropriate output for artificial intelligence (AI) Software as a Medical Device (SaMD) for retinopathy of prematurity (ROP) through comparison with ordinal disease severity labels for stage and plus disease assigned by the International Classification of Retinopathy of Prematurity, Third Edition (ICROP3), committee. DESIGN Validation study of an AI-based ROP vascular severity score. PARTICIPANTS A total of 34 ROP experts from the ICROP3 committee. METHODS Two separate datasets of 30 fundus photographs each for stage (0-5) and plus disease (plus, preplus, neither) were labeled by members of the ICROP3 committee using an open-source platform. Averaging these results produced a continuous label for plus (1-9) and stage (1-3) for each image. Experts were also asked to compare each image to each other in terms of relative severity for plus disease. Each image was also labeled with a vascular severity score from the Imaging and Informatics in ROP deep learning system, which was compared with each grader's diagnostic labels for correlation, as well as the ophthalmoscopic diagnosis of stage. MAIN OUTCOME MEASURES Weighted kappa and Pearson correlation coefficients (CCs) were calculated between each pair of grader classification labels for stage and plus disease. The Elo algorithm was also used to convert pairwise comparisons for each expert into an ordered set of images from least to most severe. RESULTS The mean weighted kappa and CC for all interobserver pairs for plus disease image comparison were 0.67 and 0.88, respectively. The vascular severity score was found to be highly correlated with both the average plus disease classification (CC = 0.90, P < 0.001) and the ophthalmoscopic diagnosis of stage (P < 0.001 by analysis of variance) among all experts. 
CONCLUSIONS The ROP vascular severity score correlates well with the International Classification of Retinopathy of Prematurity committee members' labels for plus disease and stage, which showed significant intergrader variability. Generating a consensus on a validated scoring system for ROP SaMD can facilitate global innovation and regulatory authorization of these technologies.
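The Elo step referenced above, turning each expert's pairwise "which image is more severe" judgments into an ordered list, can be sketched as follows; the K-factor and starting rating are conventional Elo defaults, not values taken from the study.

```python
def elo_order(n_images, comparisons, k=32.0, start=1500.0):
    """Order images from least to most severe given pairwise judgments.
    `comparisons` is a list of (winner, loser) index pairs, where the
    winner is the image judged more severe."""
    rating = [start] * n_images
    for winner, loser in comparisons:
        # Expected score of the winner under the Elo logistic model.
        expected = 1.0 / (1.0 + 10 ** ((rating[loser] - rating[winner]) / 400))
        rating[winner] += k * (1.0 - expected)
        rating[loser] -= k * (1.0 - expected)
    return sorted(range(n_images), key=lambda i: rating[i])
```

Because Elo only needs relative judgments, it sidesteps the intergrader variability in absolute labels that the study reports.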
Affiliation(s)
- J. Peter Campbell
- Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, OR
| | | | - Jimmy S. Chen
- Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, OR
- Darius M. Moshfeghi
- Byers Eye Institute, Horngren Family Vitreoretinal Center, Department of Ophthalmology, Stanford University, Palo Alto, CA
- Eric Nudleman
- Department of Ophthalmology, University of California, San Diego
- Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, Faculty of Medicine, The Chinese University of Hong Kong
- Praveer Singh
- Department of Radiology, MGH/Harvard Medical School, Charlestown, MA; Massachusetts General Hospital & Brigham and Women’s Hospital Center for Clinical Data Science, Boston, MA
- Jayashree Kalpathy-Cramer
- Department of Radiology, MGH/Harvard Medical School, Charlestown, MA; Massachusetts General Hospital & Brigham and Women’s Hospital Center for Clinical Data Science, Boston, MA
- Susan Ostmo
- Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, OR
- Malvina Eydelman
- Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, Maryland
- R.V. Paul Chan
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL
- Antonio Capone
- Associated Retinal Consultants, Oakland University William Beaumont School of Medicine, Royal Oak, Michigan, USA
18
Bai A, Carty C, Dai S. Performance of deep-learning artificial intelligence algorithms in detecting retinopathy of prematurity: A systematic review. Saudi J Ophthalmol 2022; 36:296-307. [PMID: 36276252 DOI: 10.4103/sjopt.sjopt_219_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 11/09/2021] [Accepted: 11/12/2021] [Indexed: 11/04/2022]
Abstract
PURPOSE Artificial intelligence (AI) offers considerable promise for retinopathy of prematurity (ROP) screening and diagnosis. The development of deep-learning algorithms to detect the presence of disease may contribute to sufficient screening, early detection, and timely treatment for this preventable blinding disease. This review aimed to systematically examine the literature on AI algorithms for detecting ROP. Specifically, we focused on the performance of deep-learning algorithms through sensitivity, specificity, and area under the receiver operating curve (AUROC) for both the detection and grade of ROP. METHODS We searched Medline OVID, PubMed, Web of Science, and Embase for studies published from January 1, 2012, to September 20, 2021. Studies evaluating the diagnostic performance of deep-learning models based on retinal fundus images with expert ophthalmologists' judgment as the reference standard were included. Studies that did not investigate the presence or absence of disease were excluded. Risk of bias was assessed using the QUADAS-2 tool. RESULTS Twelve of the 175 studies identified were included. Five studies measured the performance of detecting the presence of ROP and seven studies determined the presence of plus disease. The average AUROC across 11 studies was 0.98. The average sensitivity and specificity for detecting ROP were 95.72% and 98.15%, respectively, and for detecting plus disease were 91.13% and 95.92%, respectively. CONCLUSION The diagnostic performance of deep-learning algorithms in published studies was high. Few studies presented externally validated results or compared performance to expert human graders. Large-scale prospective validation alongside robust study design could improve future studies.
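The pooled metrics this review reports reduce to simple confusion-matrix ratios; a brief sketch with illustrative counts (chosen only to mirror the ~95%/98% figures, not taken from the review's data):

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
# Counts below are illustrative, not study data.

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # proportion of true ROP cases flagged
    specificity = tn / (tn + fp)  # proportion of disease-free eyes cleared
    return sensitivity, specificity

# e.g. 95 of 100 ROP cases detected, 98 of 100 unaffected eyes cleared
se, sp = sens_spec(tp=95, fn=5, tn=98, fp=2)
```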
Affiliation(s)
- Amelia Bai
- Department of Ophthalmology, Queensland Children's Hospital, Brisbane, Australia; Centre for Children's Health Research, Brisbane, Australia; School of Medical Science, Griffith University, Gold Coast, Australia
- Christopher Carty
- Griffith Centre of Biomedical and Rehabilitation Engineering (GCORE), Menzies Health Institute Queensland, Griffith University, Gold Coast, Australia; Department of Orthopaedics, Children's Health Queensland Hospital and Health Service, Queensland Children's Hospital, Brisbane, Australia
- Shuan Dai
- Department of Ophthalmology, Queensland Children's Hospital, Brisbane, Australia; School of Medical Science, Griffith University, Gold Coast, Australia; University of Queensland, Australia
19
Morrison SL, Dukhovny D, Chan RP, Chiang MF, Campbell JP. Cost-effectiveness of Artificial Intelligence-Based Retinopathy of Prematurity Screening. JAMA Ophthalmol 2022; 140:401-409. [PMID: 35297945 PMCID: PMC8931675 DOI: 10.1001/jamaophthalmol.2022.0223] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Accepted: 01/20/2022] [Indexed: 11/14/2022]
Abstract
Importance Artificial intelligence (AI)-based retinopathy of prematurity (ROP) screening may improve ROP care, but its cost-effectiveness is unknown. Objective To evaluate the relative cost-effectiveness of autonomous and assistive AI-based ROP screening compared with telemedicine and ophthalmoscopic screening over a range of estimated probabilities, costs, and outcomes. Design, Setting, and Participants A cost-effectiveness analysis of AI ROP screening compared with ophthalmoscopy and telemedicine via economic modeling was conducted. Decision trees were created and analyzed to model the outcomes and costs of 4 possible ROP screening strategies: ophthalmoscopy, telemedicine, assistive AI with telemedicine review, and autonomous AI with only positive screen results reviewed. A theoretical cohort of infants requiring ROP screening in the United States each year was analyzed. Main Outcomes and Measures Screening and treatment costs were based on Current Procedural Terminology codes and included estimated opportunity costs for physicians. Outcomes were based on the Early Treatment of ROP study, defined as timely treatment, late treatment, or correctly untreated. Incremental cost-effectiveness ratios were calculated at a willingness-to-pay threshold of $100 000. One-way and probabilistic sensitivity analyses were performed comparing AI strategies with telemedicine and ophthalmoscopy to evaluate cost-effectiveness across a range of assumptions. In a secondary analysis, the modeling was repeated assuming a higher sensitivity for detection of severe ROP using AI compared with ophthalmoscopy. Results This theoretical cohort included 52 000 infants born at 30 weeks' gestation or earlier or weighing 1500 g or less at birth. Autonomous AI was as effective as and less costly than any other screening strategy.
AI-based ROP screening was cost-effective up to $7 for assistive and $34 for autonomous screening compared with telemedicine and $64 and $91 compared with ophthalmoscopy in the primary analysis. In the probabilistic sensitivity analysis, autonomous AI screening was more than 60% likely to be cost-effective at all willingness-to-pay levels vs other modalities. In a second simulated cohort with 99% sensitivity for AI, the number of late treatments for ROP decreased from 265 when ROP screening was performed with ophthalmoscopy to 40 using autonomous AI. Conclusions and Relevance AI-based screening for ROP may be more cost-effective than telemedicine and ophthalmoscopy, depending on the added cost of AI and the relative performance of AI vs human examiners detecting severe ROP. As AI-based screening for ROP is commercialized, care must be taken to price the technology appropriately to ensure its benefits are fully realized.
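The core comparison in this kind of analysis is the incremental cost-effectiveness ratio (ICER) against a willingness-to-pay threshold; a minimal sketch, with all costs and effect sizes invented for illustration (only the $100 000 threshold comes from the abstract):

```python
# ICER = (cost_new - cost_old) / (effect_new - effect_old): extra dollars
# spent per extra unit of health effect gained. Numbers are illustrative,
# not the study's model inputs.

def icer(cost_new, cost_old, effect_new, effect_old):
    return (cost_new - cost_old) / (effect_new - effect_old)

WTP = 100_000  # willingness-to-pay threshold ($ per effect unit)

# Hypothetical strategy: $500 more per infant for 0.01 extra effect units
ratio = icer(cost_new=10_500, cost_old=10_000, effect_new=1.01, effect_old=1.00)
cost_effective = ratio <= WTP
```

A strategy that is both cheaper and at least as effective (as autonomous AI was in the primary analysis) dominates outright, so no ICER comparison is needed for it.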
Affiliation(s)
- Steven L. Morrison
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- Dmitry Dukhovny
- Department of Pediatrics, Oregon Health & Science University, Portland
- R.V. Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago
- Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
20
Sen P, Bamel A. Commentary: Deep learning in retinopathy of prematurity: Where do we stand? Indian J Ophthalmol 2022; 70:1279. [PMID: 35326033 PMCID: PMC9240484 DOI: 10.4103/ijo.ijo_3036_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Affiliation(s)
- Parveen Sen
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, Tamil Nadu, India
- Arjun Bamel
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, Tamil Nadu, India
21
Navarro-Blanco C, Pastora-Salvador N, Sánchez-Ramos C, Peralta-Calvo J. Assessment of non-expert ophthalmologists in the analysis of retinopathy of prematurity. An Pediatr (Engl Ed) 2022; 96:147-148. [DOI: 10.1016/j.anpede.2020.10.015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Accepted: 10/31/2020] [Indexed: 11/27/2022] Open
22
Wang Z, Keane PA, Chiang M, Cheung CY, Wong TY, Ting DSW. Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
23
Coyner AS, Chen JS, Singh P, Schelonka RL, Jordan BK, McEvoy CT, Anderson JE, Chan RVP, Sonmez K, Erdogmus D, Chiang MF, Kalpathy-Cramer J, Campbell JP. Single-Examination Risk Prediction of Severe Retinopathy of Prematurity. Pediatrics 2021; 148:183427. [PMID: 34814160 PMCID: PMC8919718 DOI: 10.1542/peds.2021-051772] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 07/30/2021] [Indexed: 11/24/2022] Open
Abstract
BACKGROUND AND OBJECTIVES Retinopathy of prematurity (ROP) is a leading cause of childhood blindness. Screening and treatment reduce this risk, but require multiple examinations of infants, most of whom will not develop severe disease. Previous work has suggested that artificial intelligence may be able to detect incident severe disease (treatment-requiring retinopathy of prematurity [TR-ROP]) before clinical diagnosis. We aimed to build a risk model that combined artificial intelligence with clinical demographics to reduce the number of examinations without missing cases of TR-ROP. METHODS Infants undergoing routine ROP screening examinations (1579 total eyes, 190 with TR-ROP) were recruited from 8 North American study centers. A vascular severity score (VSS) was derived from retinal fundus images obtained at 32 to 33 weeks' postmenstrual age. Seven ElasticNet logistic regression models were trained on all combinations of birth weight, gestational age, and VSS. The area under the precision-recall curve was used to identify the highest-performing model. RESULTS The gestational age + VSS model had the highest performance (mean ± SD area under the precision-recall curve: 0.35 ± 0.11). On 2 different test data sets (n = 444 and n = 132), sensitivity was 100% (positive predictive value: 28.1% and 22.6%) and specificity was 48.9% and 80.8% (negative predictive value: 100.0%). CONCLUSIONS Using a single examination, this model identified all infants who developed TR-ROP, on average, >1 month before diagnosis with moderate to high specificity. This approach could lead to earlier identification of incident severe ROP, reducing late diagnosis and treatment while simultaneously reducing the number of ROP examinations and unnecessary physiologic stress for low-risk infants.
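The model-selection criterion reported here, area under the precision-recall curve, can be estimated as average precision; a self-contained sketch with toy scores and labels (the study itself trained ElasticNet logistic regression on real clinical data, none of which appears below):

```python
# Average precision (a standard estimator of area under the precision-recall
# curve): mean of the precision evaluated at each true positive's rank.
# Assumes at least one positive label; data below are toy values.

def average_precision(labels, scores):
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    tp, ap = 0, 0.0
    for i, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            ap += tp / i  # precision at this recall point
    return ap / tp

labels = [1, 1, 0, 0]          # 1 = treatment-requiring ROP (hypothetical)
scores = [0.9, 0.8, 0.3, 0.1]  # model scores; positives ranked on top
ap = average_precision(labels, scores)
```

Average precision is preferred over AUROC for rare outcomes such as TR-ROP because it is insensitive to the large pool of true negatives.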
Affiliation(s)
- Aaron S Coyner
- Ophthalmology, Oregon Health & Science University, Portland, OR; Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR
- Jimmy S Chen
- Ophthalmology, Oregon Health & Science University, Portland, OR
- Praveer Singh
- Radiology, MGH/Harvard Medical School, Charlestown, MA; MGH & BWH Center for Clinical Data Science, Boston, MA
- Brian K Jordan
- Pediatrics, Oregon Health & Science University, Portland, OR
- Cindy T McEvoy
- Pediatrics, Oregon Health & Science University, Portland, OR
- RV Paul Chan
- Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL
- Kemal Sonmez
- Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR
- Deniz Erdogmus
- Electrical and Computer Engineering, Northeastern University, Boston, MA
- Michael F Chiang
- National Eye Institute, National Institutes of Health, Bethesda, MD
- Jayashree Kalpathy-Cramer
- Radiology, MGH/Harvard Medical School, Charlestown, MA; MGH & BWH Center for Clinical Data Science, Boston, MA
24
Updates in deep learning research in ophthalmology. Clin Sci (Lond) 2021; 135:2357-2376. [PMID: 34661658 DOI: 10.1042/cs20210207] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 09/14/2021] [Accepted: 09/29/2021] [Indexed: 12/13/2022]
Abstract
Ophthalmology has been one of the early adopters of artificial intelligence (AI) within the medical field. Deep learning (DL), in particular, has garnered significant attention due to the availability of large amounts of data and digitized ocular images. Currently, AI in Ophthalmology is mainly focused on improving disease classification and supporting decision-making when treating ophthalmic diseases such as diabetic retinopathy, age-related macular degeneration (AMD), glaucoma and retinopathy of prematurity (ROP). However, most of the DL systems (DLSs) developed thus far remain in the research stage and only a handful are able to achieve clinical translation. This phenomenon is due to a combination of factors including concerns over security and privacy, poor generalizability, trust and explainability issues, unfavorable end-user perceptions and uncertain economic value. Overcoming this challenge would require a combination of approaches. Firstly, emerging techniques such as federated learning (FL), generative adversarial networks (GANs), autonomous AI and blockchain will play an increasingly critical role in enhancing privacy, collaboration and DLS performance. Next, compliance with reporting and regulatory guidelines, such as CONSORT-AI and STARD-AI, will be required in order to improve transparency, minimize abuse and ensure reproducibility. Thirdly, frameworks will be required to obtain patient consent, perform ethical assessment and evaluate end-user perception. Lastly, proper health economic assessment (HEA) must be performed to provide financial visibility during the early phases of DLS development. This is necessary to manage resources prudently and guide the development of DLSs.
25
Campbell JP, Mathenge C, Cherwek H, Balaskas K, Pasquale LR, Keane PA, Chiang MF. Artificial Intelligence to Reduce Ocular Health Disparities: Moving From Concept to Implementation. Transl Vis Sci Technol 2021; 10:19. [PMID: 34003953 PMCID: PMC7991919 DOI: 10.1167/tvst.10.3.19] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022] Open
Affiliation(s)
- John P Campbell
- Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA
- Ciku Mathenge
- Rwanda International Institute of Ophthalmology, Kigali, Rwanda
- Konstantinos Balaskas
- Institute of Ophthalmology, University College London, London, UK; Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Louis R Pasquale
- Eye and Vision Research Institute, New York Eye and Ear Infirmary at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Pearse A Keane
- Institute of Ophthalmology, University College London, London, UK; Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Michael F Chiang
- Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA; National Eye Institute, National Institutes of Health, Bethesda, MD
26
A deep learning framework for the detection of Plus disease in retinal fundus images of preterm infants. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.02.005] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
27
Campbell JP, Singh P, Redd TK, Brown JM, Shah PK, Subramanian P, Rajan R, Valikodath N, Cole E, Ostmo S, Chan RVP, Venkatapathy N, Chiang MF, Kalpathy-Cramer J. Applications of Artificial Intelligence for Retinopathy of Prematurity Screening. Pediatrics 2021; 147:e2020016618. [PMID: 33637645 PMCID: PMC7924138 DOI: 10.1542/peds.2020-016618] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 10/29/2020] [Indexed: 11/24/2022] Open
Abstract
OBJECTIVES Childhood blindness from retinopathy of prematurity (ROP) is increasing as a result of improvements in neonatal care worldwide. We evaluate the effectiveness of artificial intelligence (AI)-based screening in an Indian ROP telemedicine program and whether differences in ROP severity between neonatal care units (NCUs) identified by using AI are related to differences in oxygen-titrating capability. METHODS External validation study of an existing AI-based quantitative severity scale for ROP on a data set of images from the Retinopathy of Prematurity Eradication Save Our Sight ROP telemedicine program in India. All images were assigned an ROP severity score (1-9) by using the Imaging and Informatics in Retinopathy of Prematurity Deep Learning system. We calculated the area under the receiver operating characteristic curve and sensitivity and specificity for treatment-requiring retinopathy of prematurity. Using multivariable linear regression, we evaluated the mean and median ROP severity in each NCU as a function of mean birth weight, gestational age, and the presence of oxygen blenders and pulse oxygenation monitors. RESULTS The area under the receiver operating characteristic curve for detection of treatment-requiring retinopathy of prematurity was 0.98, with 100% sensitivity and 78% specificity. We found higher median (interquartile range) ROP severity in NCUs without oxygen blenders and pulse oxygenation monitors, most apparent in bigger infants (>1500 g and 31 weeks' gestation: 2.7 [2.5-3.0] vs 3.1 [2.4-3.8]; P = .007, with adjustment for birth weight and gestational age). CONCLUSIONS Integration of AI into ROP screening programs may lead to improved access to care for secondary prevention of ROP and may facilitate assessment of disease epidemiology and NCU resources.
Affiliation(s)
- J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute and
- Contributed equally as co-first authors
- Praveer Singh
- Athinoula A. Martinos Center for Biomedical Imaging and Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts
- Contributed equally as co-first authors
- Travis K Redd
- Department of Ophthalmology, Casey Eye Institute and
- James M Brown
- Department of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Parag K Shah
- Pediatric Retina and Ocular Oncology Division, Aravind Eye Hospital, Coimbatore, India
- Prema Subramanian
- Pediatric Retina and Ocular Oncology Division, Aravind Eye Hospital, Coimbatore, India
- Renu Rajan
- Department of Retina and Vitreous, Aravind Eye Hospital, Madurai, India
- Nita Valikodath
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary and University of Illinois at Chicago, Chicago, Illinois
- Emily Cole
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute and
- R V Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary and University of Illinois at Chicago, Chicago, Illinois
- Michael F Chiang
- Department of Ophthalmology, Casey Eye Institute and
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging and Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts
28
Arima M, Fujii Y, Sonoda KH. Translational Research in Retinopathy of Prematurity: From Bedside to Bench and Back Again. J Clin Med 2021; 10:331. [PMID: 33477419 PMCID: PMC7830975 DOI: 10.3390/jcm10020331] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 01/09/2021] [Accepted: 01/15/2021] [Indexed: 12/11/2022] Open
Abstract
Retinopathy of prematurity (ROP), a vascular proliferative disease affecting preterm infants, is a leading cause of childhood blindness. Various studies have investigated the pathogenesis of ROP. Clinical experience indicates that oxygen levels are strongly correlated with ROP development, which led to the development of oxygen-induced retinopathy (OIR) as an animal model of ROP. OIR has been used extensively to investigate the molecular mechanisms underlying ROP and to evaluate the efficacy of new drug candidates. Large clinical trials have demonstrated the efficacy of anti-vascular endothelial growth factor (VEGF) agents to treat ROP, and anti-VEGF therapy is presently becoming the first-line treatment worldwide. Anti-VEGF therapy has advantages over conventional treatments, including being minimally invasive with a low risk of refractive error. However, long-term safety concerns and the risk of late recurrence limit this treatment. There is an unmet medical need for novel ROP therapies that are both safe and minimally invasive. The recent progress in biotechnology has contributed greatly to translational research. In this review, we outline how basic ROP research has evolved with clinical experience and the subsequent emergence of new drugs. We discuss previous and ongoing trials and present the candidate molecules expected to become novel targets.
Affiliation(s)
- Mitsuru Arima
- Department of Ophthalmology, Graduate School of Medical Sciences, Kyushu University, Fukuoka 8128582, Japan; (Y.F.); (K.-H.S.)
- Center for Clinical and Translational Research, Kyushu University Hospital, 3-1-1 Maidashi, Higashi-ku, Fukuoka 8128582, Japan
- Yuya Fujii
- Department of Ophthalmology, Graduate School of Medical Sciences, Kyushu University, Fukuoka 8128582, Japan; (Y.F.); (K.-H.S.)
- Koh-Hei Sonoda
- Department of Ophthalmology, Graduate School of Medical Sciences, Kyushu University, Fukuoka 8128582, Japan; (Y.F.); (K.-H.S.)
29
Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_200-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
30
Cole E, Valikodath NG, Maa A, Chan RVP, Chiang MF, Lee AY, Tu DC, Hwang TS. Bringing Ophthalmic Graduate Medical Education into the 2020s with Information Technology. Ophthalmology 2020; 128:349-353. [PMID: 33358411 DOI: 10.1016/j.ophtha.2020.11.019] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Revised: 11/12/2020] [Accepted: 11/16/2020] [Indexed: 10/22/2022] Open
31
Navarro-Blanco C, Pastora-Salvador N, Sánchez-Ramos C, Peralta-Calvo J. [Assessment of non-expert ophthalmologists in the analysis of retinopathy of prematurity]. An Pediatr (Barc) 2020; 96:S1695-4033(20)30479-3. [PMID: 33342689 DOI: 10.1016/j.anpedi.2020.10.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Revised: 10/24/2020] [Accepted: 10/31/2020] [Indexed: 11/24/2022] Open
Affiliation(s)
| | | | - Celia Sánchez-Ramos
- Grupo de Neuro-Computación y Neuro-Robótica, Universidad Complutense de Madrid, Madrid, España
| | - Jesús Peralta-Calvo
- Departamento de Oftalmología Infantil, Hospital Universitario La Paz, Madrid, España
| |