1
Issa M, Sukkarieh G, Gallardo M, Sarbout I, Bonnin S, Tadayoni R, Milea D. Applications of artificial intelligence to inherited retinal diseases: A systematic review. Surv Ophthalmol 2025; 70:255-264. [PMID: 39566565 DOI: 10.1016/j.survophthal.2024.11.007] [Received: 05/29/2024] [Revised: 11/07/2024] [Accepted: 11/13/2024] [Indexed: 11/22/2024]
Abstract
Artificial intelligence (AI)-based methods have been extensively used for the detection and management of various common retinal conditions, but their targeted development for inherited retinal diseases (IRD) is still nascent. Given the limited availability of retinal subspecialists, genetic testing, and genetic counseling, there is a pressing need for accurate and accessible diagnostic methods. The currently available AI studies, aimed at detection, classification, and prediction of IRD, remain mainly retrospective and include relatively small numbers of patients because of the scarcity of these diseases. We summarize the latest findings and clinical implications of machine-learning algorithms in IRD, highlighting the achievements and challenges of AI in assisting ophthalmologists in clinical practice.
Affiliation(s)
- Ilias Sarbout
  - Rothschild Foundation Hospital, Paris, France; Sorbonne University, France.
- Ramin Tadayoni
  - Rothschild Foundation Hospital, Paris, France; Ophthalmology Department, Université Paris Cité, AP-HP, Hôpital Lariboisière, Paris, France
- Dan Milea
  - Rothschild Foundation Hospital, Paris, France; Singapore Eye Research Institute, Singapore; Copenhagen University, Denmark; Angers University Hospital, Angers, France; Duke-NUS Medical School, Singapore.

2
Woof WA, de Guimarães TA, Al-Khuzaei S, Daich Varela M, Sen S, Bagga P, Mendes B, Shah M, Burke P, Parry D, Lin S, Naik G, Ghoshal B, Liefers BJ, Fu DJ, Georgiou M, Nguyen Q, Sousa da Silva A, Liu Y, Fujinami-Yokokawa Y, Sumodhee D, Patel P, Furman J, Moghul I, Moosajee M, Sallum J, De Silva SR, Lorenz B, Holz FG, Fujinami K, Webster AR, Mahroo OA, Downes SM, Madhusudhan S, Balaskas K, Michaelides M, Pontikos N. Quantification of Fundus Autofluorescence Features in a Molecularly Characterized Cohort of >3500 Patients with Inherited Retinal Disease from the United Kingdom. Ophthalmol Sci 2025; 5:100652. [PMID: 39896422 PMCID: PMC11782848 DOI: 10.1016/j.xops.2024.100652] [Received: 08/02/2024] [Revised: 10/29/2024] [Accepted: 11/04/2024] [Indexed: 02/04/2025]
Abstract
Purpose To quantify relevant fundus autofluorescence (FAF) features cross-sectionally and longitudinally in a large cohort of patients with inherited retinal diseases (IRDs). Design Retrospective study of imaging data. Participants Patients with a clinical and molecularly confirmed diagnosis of IRD who have undergone 55° FAF imaging at Moorfields Eye Hospital (MEH) and the Royal Liverpool Hospital between 2004 and 2019. Methods Five FAF features of interest were defined: vessels, optic disc, perimacular ring of increased signal (ring), relative hypo-autofluorescence (hypo-AF), and hyper-autofluorescence (hyper-AF). Features were manually annotated by 6 graders in a subset of patients based on a defined grading protocol to produce segmentation masks to train an artificial intelligence model, AIRDetect, which was then applied to the entire imaging data set. Main Outcome Measures Quantitative FAF features, including area and vessel metrics, were analyzed cross-sectionally by gene and age, and longitudinally. AIRDetect feature segmentation and detection were validated with Dice score and precision/recall, respectively. Results A total of 45,749 FAF images from 3,606 patients with IRD from MEH covering 170 genes were automatically segmented using AIRDetect. Model-grader Dice scores for the disc, hypo-AF, hyper-AF, ring, and vessels were, respectively, 0.86, 0.72, 0.69, 0.68, and 0.65. Across patients at presentation, the 5 genes with the largest hypo-AF areas were CHM, ABCC6, RDH12, ABCA4, and RPE65, with mean per-patient areas of 43.72, 29.57, 20.07, 19.65, and 16.92 mm2, respectively. The 5 genes with the largest hyper-AF areas were BEST1, CDH23, NR2E3, MYO7A, and RDH12, with mean areas of 0.50, 0.47, 0.44, 0.38, and 0.33 mm2, respectively. The 5 genes with the largest ring areas were NR2E3, CDH23, CRX, EYS, and PDE6B, with mean areas of 3.60, 2.90, 2.89, 2.56, and 2.20 mm2, respectively.
Vessel density was found to be highest in EFEMP1, BEST1, TIMP3, RS1, and PRPH2 (11.0%, 10.4%, 10.1%, 10.1%, 9.2%) and was lower in retinitis pigmentosa (RP) and Leber congenital amaurosis genes. Longitudinal analysis of decreasing ring area in 4 RP genes (RPGR, USH2A, RHO, and EYS) found EYS to be the fastest progressor at -0.178 mm2/year. Conclusions We have conducted the first large-scale cross-sectional and longitudinal quantitative analysis of FAF features across a diverse range of IRDs using a novel AI approach. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
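AIRDetect's segmentations were validated against the graders with the Dice score, a standard overlap measure between a predicted mask and a reference mask. A minimal sketch of that computation; the function name and the toy masks are illustrative, not taken from the study:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 4x4 masks standing in for a model segmentation and a grader's.
model = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
grader = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_score(model, grader), 3))  # 2*3 / (4+3) -> 0.857
```

In practice such a score would be computed per feature (disc, ring, hypo-AF, hyper-AF, vessels) and per image, then aggregated, which is consistent with the per-feature Dice figures reported above.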
Affiliation(s)
- William A. Woof
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Thales A.C. de Guimarães
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Saoud Al-Khuzaei
  - Nuffield Laboratory of Ophthalmology, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, United Kingdom
  - Oxford Eye Hospital, John Radcliffe Hospital, Oxford, United Kingdom
- Malena Daich Varela
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Sagnik Sen
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Pallavi Bagga
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Bernardo Mendes
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Mital Shah
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Paula Burke
  - St Paul’s Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, United Kingdom
- David Parry
  - St Paul’s Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, United Kingdom
- Siying Lin
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Gunjan Naik
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Biraja Ghoshal
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Bart J. Liefers
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
  - Department of Ophthalmology and Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Dun Jack Fu
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Michalis Georgiou
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Quang Nguyen
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Yichen Liu
  - University College London Institute of Ophthalmology, London, United Kingdom
- Yu Fujinami-Yokokawa
  - Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan
- Dayyanah Sumodhee
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Praveen Patel
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Jennifer Furman
  - University College London Institute of Ophthalmology, London, United Kingdom
- Ismail Moghul
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Mariya Moosajee
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Juliana Sallum
  - Department of Ophthalmology and Visual Sciences, Escola Paulista de Medicina, Federal University of Sao Paulo, Brazil
- Samantha R. De Silva
  - Nuffield Laboratory of Ophthalmology, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, United Kingdom
  - Oxford Eye Hospital, John Radcliffe Hospital, Oxford, United Kingdom
- Birgit Lorenz
  - Transmit Centre of Translational Ophthalmology, Justus-Liebig-University Giessen, Germany
- Frank G. Holz
  - Department of Ophthalmology, University Hospital Bonn, Bonn, Germany
- Kaoru Fujinami
  - Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan
- Andrew R. Webster
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Omar A. Mahroo
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Susan M. Downes
  - Nuffield Laboratory of Ophthalmology, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, United Kingdom
  - Oxford Eye Hospital, John Radcliffe Hospital, Oxford, United Kingdom
- Savita Madhusudhan
  - St Paul’s Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, United Kingdom
- Konstantinos Balaskas
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Michel Michaelides
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Nikolas Pontikos
  - University College London Institute of Ophthalmology, London, United Kingdom
  - Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom

3
Kominami T, Ueno S, Ota J, Inooka T, Oda M, Mori K, Nishiguchi KM. Classification of fundus autofluorescence images based on macular function in retinitis pigmentosa using convolutional neural networks. Jpn J Ophthalmol 2025; 69:236-244. [PMID: 39937339 PMCID: PMC12003438 DOI: 10.1007/s10384-025-01163-w] [Received: 02/23/2024] [Accepted: 12/07/2024] [Indexed: 02/13/2025]
Abstract
PURPOSE To determine whether convolutional neural networks (CNNs) can classify the severity of central vision loss using fundus autofluorescence (FAF) images and color fundus images in retinitis pigmentosa (RP), and to evaluate the utility of those images for severity classification. STUDY DESIGN Retrospective observational study. METHODS Medical charts of patients with RP who visited Nagoya University Hospital were reviewed. Eyes with atypical RP or previous surgery were excluded. The mild group comprised patients with a mean deviation value of > -10 decibels, and the severe group those with < -20 decibels, on the Humphrey field analyzer 10-2 program. CNN models were created by transfer learning of VGG16 pretrained on ImageNet to classify patients as either mild or severe, using FAF images or color fundus images. RESULTS Overall, 165 patients were included in this study; 80 patients were classified into the severe group and 85 into the mild group. The test data comprised 40 patients in each group, and the images of the remaining patients were used as training data, with data augmentation by rotation and flipping. The highest accuracies of the CNN models when using color fundus and FAF images were 63.75% and 87.50%, respectively. CONCLUSION Using FAF images may enable accurate assessment of central visual function in RP. FAF images may carry more information for evaluating central visual function than color fundus images.
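The training pipeline above fine-tunes a pretrained VGG16; that step is framework-specific and omitted here, but the augmentation it describes (rotation and flipping) can be sketched framework-free. The `augment` helper and the toy array are illustrative assumptions, not the authors' code:

```python
import numpy as np

def augment(img: np.ndarray):
    """Yield the 8 rotation/flip variants of an image (the dihedral
    group): the rotation-and-flipping augmentation described above."""
    for k in range(4):            # 0, 90, 180, 270 degree rotations
        rot = np.rot90(img, k)
        yield rot
        yield np.fliplr(rot)      # each rotation, mirrored as well

faf = np.arange(9).reshape(3, 3)  # stand-in for a (tiny) FAF image
variants = list(augment(faf))
unique = {v.tobytes() for v in variants}
print(len(variants), len(unique))  # 8 variants, all distinct for this array
```

Each training image thus contributes up to eight distinct samples, which matters when, as here, only a few dozen patients per class are available.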
Affiliation(s)
- Taro Kominami
  - Department of Ophthalmology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, 466-8550, Japan.
- Shinji Ueno
  - Department of Ophthalmology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, 466-8550, Japan
  - Department of Ophthalmology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
- Junya Ota
  - Department of Ophthalmology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, 466-8550, Japan
- Taiga Inooka
  - Department of Ophthalmology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, 466-8550, Japan
- Masahiro Oda
  - Graduate School of Informatics, Nagoya University, Nagoya, Japan
  - Information Technology Center, Nagoya University, Nagoya, Japan
- Kensaku Mori
  - Graduate School of Informatics, Nagoya University, Nagoya, Japan
  - Information Technology Center, Nagoya University, Nagoya, Japan
  - Research Center for Medical Bigdata, National Institute of Informatics, Nagoya, Japan
- Koji M Nishiguchi
  - Department of Ophthalmology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, 466-8550, Japan

4
Yeo EYH, Kominami T, Tan TE, Babu L, Ong KGS, Tan W, Bylstra YM, Jain K, Tang RWC, Farooqui SZ, Kam SPR, Chan CM, Mathur RS, Jamuar SS, Lim WK, Nishiguchi K, Fenner BJ. Phenotypic Distinctions Between EYS- and USH2A-Associated Retinitis Pigmentosa in an Asian Population. Transl Vis Sci Technol 2025; 14:16. [PMID: 39932467 DOI: 10.1167/tvst.14.2.16] [Indexed: 02/14/2025]
Abstract
Purpose This study compares the clinical characteristics of retinitis pigmentosa (RP) associated with mutations in the EYS and USH2A genes in a Southeast Asian cohort. Methods Prospective single-center study of families with EYS- or USH2A-associated RP seen at the Singapore National Eye Centre. Comprehensive ophthalmic evaluations, multimodal imaging, genetic testing, and longitudinal follow-up identified clinically useful features differentiating the two genotypes. Results A total of 300 families with RP were enrolled, with EYS- and USH2A-associated RP accounting for 24.7% of all probands and 50.7% of solved or likely solved cases. USH2A cases were predominantly nonsyndromic RP (75%). EYS-associated RP was more severe in functional and structural outcomes, and patients were more myopic than those with USH2A-associated RP (SE -3.31 vs. -0.69; P < 0.0001). EYS RP displayed peripapillary nasal sparing on autofluorescence imaging more frequently than USH2A (57.6% vs. 26.7%; P = 0.006), whereas USH2A cases more often had a parafoveal ring (73.3% vs. 30.3%; P = 0.0002). Multiple logistic regression identified diagnostic features that distinguished EYS from USH2A with 83.2% accuracy, validated in a second, unrelated clinical cohort. Conclusions EYS- and USH2A-associated RP have overlapping clinical presentations but can often be distinguished by a constellation of phenotypic features, including disease onset and severity, refractive error, and fundus autofluorescence. These features may support a more effective diagnostic strategy for these common forms of RP. Translational Relevance Distinct clinical features differentiating EYS- and USH2A-associated RP provide valuable diagnostic tools that may inform personalized management and facilitate targeted interventions in clinical practice.
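Multiple logistic regression over a handful of phenotypic features, as used above, can be sketched on synthetic data. The feature distributions below only loosely echo the reported figures (SE means, nasal-sparing and parafoveal-ring rates), and the plain gradient-descent fit is an illustrative stand-in for the study's actual model, not a reproduction of it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for three discriminating features: spherical
# equivalent (more myopic in EYS), peripapillary nasal sparing (more
# common in EYS), and a parafoveal ring (more common in USH2A).
eys = np.column_stack([rng.normal(-3.3, 2.0, n),
                       rng.random(n) < 0.58,
                       rng.random(n) < 0.30]).astype(float)
ush2a = np.column_stack([rng.normal(-0.7, 2.0, n),
                         rng.random(n) < 0.27,
                         rng.random(n) < 0.73]).astype(float)
X = np.vstack([eys, ush2a])
y = np.array([0.0] * n + [1.0] * n)          # 1 = USH2A

# Plain logistic regression fitted by batch gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(USH2A)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * (p - y).mean()

acc = ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")       # roughly 0.8 on this toy data
```

With feature gaps of this size, a linear classifier lands in the same accuracy range as the 83.2% reported, which is why a simple, interpretable model suffices for this kind of genotype triage.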
Affiliation(s)
- Taro Kominami
  - Department of Ophthalmology, Nagoya University Hospital, Japan
- Tien-En Tan
  - Singapore National Eye Centre, Singapore
  - Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Graduate Medical School, Singapore
- Weilun Tan
  - Singapore National Eye Centre, Singapore
- Yasmin M Bylstra
  - Institute for Precision Medicine, Duke-NUS Graduate Medical School, Singapore
- Saadia Z Farooqui
  - Singapore National Eye Centre, Singapore
  - Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Graduate Medical School, Singapore
- Sylvia P R Kam
  - Genetics Service, Department of Paediatric Medicine, KK Women's and Children's Hospital, Singapore
- Choi-Mun Chan
  - Singapore National Eye Centre, Singapore
  - Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Graduate Medical School, Singapore
- Ranjana S Mathur
  - Singapore National Eye Centre, Singapore
  - Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Graduate Medical School, Singapore
- Saumya S Jamuar
  - Institute for Precision Medicine, Duke-NUS Graduate Medical School, Singapore
  - Genetics Service, Department of Paediatric Medicine, KK Women's and Children's Hospital, Singapore
- Weng Khong Lim
  - Institute for Precision Medicine, Duke-NUS Graduate Medical School, Singapore
  - Genome Institute of Singapore, Singapore
- Koji Nishiguchi
  - Department of Ophthalmology, Nagoya University Hospital, Japan
- Beau J Fenner
  - Singapore National Eye Centre, Singapore
  - Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Graduate Medical School, Singapore

5
Acero Ruge LM, Vásquez Lesmes DA, Hernández Rincón EH, Avella Pérez LP. [Artificial intelligence for the comprehensive approach to orphan/rare diseases: A scoping review]. Semergen 2024; 51:102434. [PMID: 39733637 DOI: 10.1016/j.semerg.2024.102434] [Received: 08/06/2024] [Revised: 11/05/2024] [Accepted: 11/12/2024] [Indexed: 12/31/2024]
Abstract
INTRODUCTION Orphan diseases (OD) are individually rare but collectively common, presenting challenges such as late diagnosis, disease progression, and limited therapeutic options. Recently, artificial intelligence (AI) has gained interest in research on these diseases. OBJECTIVE To synthesize the available evidence on the use of AI in the comprehensive approach to orphan diseases. METHODS A scoping review was conducted in PubMed, Bireme, and Scopus covering 2019 to 2024. RESULTS Fifty-six articles were identified, of which 21.4% were experimental studies; 28 documents did not specify an OD, and 8 focused primarily on genetic diseases; 53.57% focused on diagnosis, and 36 different algorithms were identified. CONCLUSIONS The evidence shows AI algorithms being developed across different clinical settings, confirming their potential benefits for time to diagnosis, therapeutic options, and awareness among health professionals.
Affiliation(s)
- L M Acero Ruge
  - Medicina Familiar y Comunitaria, Universidad de La Sabana, Facultad de Medicina, Chía, Colombia
- D A Vásquez Lesmes
  - Medicina Familiar y Comunitaria, Universidad de La Sabana, Facultad de Medicina, Chía, Colombia
- E H Hernández Rincón
  - Departamento de Medicina Familiar y Salud Pública, Facultad de Medicina, Universidad de La Sabana, Chía, Colombia.

6
Pennesi ME, Wang YZ, Birch DG. Deep learning aided measurement of outer retinal layer metrics as biomarkers for inherited retinal degenerations: opportunities and challenges. Curr Opin Ophthalmol 2024; 35:447-454. [PMID: 39259656 DOI: 10.1097/icu.0000000000001088] [Indexed: 09/13/2024]
Abstract
PURPOSE OF REVIEW To summarize currently available retinal imaging and visual function testing methods for assessing inherited retinal degenerations (IRDs), with emphasis on the application of deep learning (DL) approaches to assist in the determination of structural biomarkers for IRDs. RECENT FINDINGS Recent work spans clinical trials for IRDs, the search for effective biomarkers to serve as trial endpoints, and DL applications in processing retinal images to detect disease-related structural changes. SUMMARY Assessing photoreceptor loss is a direct way to evaluate IRDs. Outer retinal layer structures, including the outer nuclear layer, ellipsoid zone, photoreceptor outer segment, and RPE, are potential structural biomarkers for IRDs. More work may be needed on the relationship between structure and function.
Affiliation(s)
- Mark E Pennesi
  - Retina Foundation of the Southwest, Dallas, Texas
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Yi-Zhong Wang
  - Retina Foundation of the Southwest, Dallas, Texas
  - Department of Ophthalmology, University of Texas Southwestern Medical Center at Dallas, Dallas, Texas, USA
- David G Birch
  - Retina Foundation of the Southwest, Dallas, Texas
  - Department of Ophthalmology, University of Texas Southwestern Medical Center at Dallas, Dallas, Texas, USA

7
Lee BJH, Sun CZY, Ong CJT, Jain K, Tan TE, Chan CM, Mathur RS, Tang RWC, Bylstra Y, Kam SPR, Lim WK, Fenner BJ. Utility of multimodal imaging in the clinical diagnosis of inherited retinal degenerations. Taiwan J Ophthalmol 2024; 14:486-496. [PMID: 39803408 PMCID: PMC11717338 DOI: 10.4103/tjo.tjo-d-24-00066] [Received: 06/14/2024] [Accepted: 08/25/2024] [Indexed: 01/16/2025]
Abstract
Inherited retinal degeneration (IRD) is a heterogeneous group of genetic disorders of variable onset and severity, with vision loss being a common endpoint in most cases. More than 50 distinct IRD phenotypes and over 280 causative genes have been described. Establishing a clinical phenotype for patients with IRD is particularly challenging due to clinical variability even among patients with similar genotypes. Clinical phenotyping provides a foundation for understanding disease progression and informing subsequent genetic investigations. Establishing a clear clinical phenotype for IRD cases is required to corroborate the data obtained from exome and genome sequencing, which often yields numerous variants in genes associated with IRD. In the current work, we review the use of contemporary retinal imaging modalities, including ultra-widefield and autofluorescence imaging, optical coherence tomography, and multispectral imaging, in the diagnosis of IRD.
Affiliation(s)
- Brian J. H. Lee
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Christopher Z. Y. Sun
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Clinical Academic Program, Duke-NUS Graduate Medical School, Singapore
- Charles J. T. Ong
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Clinical Academic Program, Duke-NUS Graduate Medical School, Singapore
- Tien-En Tan
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Clinical Academic Program, Duke-NUS Graduate Medical School, Singapore
- Choi Mun Chan
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Clinical Academic Program, Duke-NUS Graduate Medical School, Singapore
- Ranjana S. Mathur
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Clinical Academic Program, Duke-NUS Graduate Medical School, Singapore
- Rachael W. C. Tang
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Yasmin Bylstra
  - SingHealth-Duke-NUS Genomic Medicine Centre, Institute of Precision Medicine, Singapore
- Sylvia P. R. Kam
  - Department of Paediatrics, KK Women’s and Children’s Hospital, Singapore
- Weng Khong Lim
  - SingHealth-Duke-NUS Genomic Medicine Centre, Institute of Precision Medicine, Singapore
  - SingHealth Duke-NUS Genomic Medicine Centre, Singapore
  - Cancer and Stem Cell Biology Program, Duke-NUS Medical School, Singapore
  - Genome Institute of Singapore, Agency for Science, Technology and Research, Singapore
- Beau J. Fenner
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Clinical Academic Program, Duke-NUS Graduate Medical School, Singapore

8
Zhang H, Zhang K, Wang J, Yu S, Li Z, Yin S, Zhu J, Wei W. Quickly diagnosing Bietti crystalline dystrophy with deep learning. iScience 2024; 27:110579. [PMID: 39220263 PMCID: PMC11365386 DOI: 10.1016/j.isci.2024.110579] [Received: 03/30/2024] [Revised: 06/18/2024] [Accepted: 07/22/2024] [Indexed: 09/04/2024]
Abstract
Bietti crystalline dystrophy (BCD) is an autosomal recessive inherited retinal disease (IRD) whose early and precise diagnosis remains challenging. This study aimed to diagnose BCD and classify its clinical stage from ultra-wide-field (UWF) color fundus photographs (CFPs) via deep learning (DL). All CFPs were labeled as BCD, retinitis pigmentosa (RP), or normal, and the BCD patients were further divided into three stages. The DL models ResNeXt, Wide ResNet, and ResNeSt were developed, and model performance was evaluated using accuracy and confusion matrices. Diagnostic interpretability was then verified with heatmaps. The models achieved good classification results. Our study established the largest BCD database in a Chinese population. We developed a rapid diagnostic method for BCD and evaluated the potential efficacy of an automatic diagnosis and grading DL algorithm based on UWF fundus photography in a Chinese cohort of BCD patients.
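The accuracy and confusion-matrix evaluation used above is straightforward to compute for a three-class problem (BCD, RP, normal). A toy sketch with invented labels and predictions, not the study's data:

```python
import numpy as np

CLASSES = ["BCD", "RP", "normal"]   # the three labels used above

def confusion_matrix(true, pred, k=len(CLASSES)):
    """Rows are the true class, columns the predicted class."""
    m = np.zeros((k, k), dtype=int)
    for t, p in zip(true, pred):
        m[t, p] += 1
    return m

# Invented predictions for ten images (indices into CLASSES).
true = [0, 0, 0, 1, 1, 2, 2, 2, 2, 1]
pred = [0, 0, 1, 1, 1, 2, 2, 0, 2, 1]
m = confusion_matrix(true, pred)
accuracy = np.trace(m) / m.sum()              # 8 of 10 correct -> 0.8
per_class_recall = np.diag(m) / m.sum(axis=1)
print(m)
print(f"accuracy {accuracy:.2f}, per-class recall {per_class_recall}")
```

The off-diagonal cells show which classes are confused with each other, which is exactly the information a single accuracy figure hides; heatmaps then address where in the image the model looked.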
Affiliation(s)
- Haihan Zhang
  - Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Kai Zhang
  - Chongqing Chang’an Industrial Group Co. Ltd, Chongqing, China
- Jinyuan Wang
  - Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
  - School of Clinical Medicine, Tsinghua University, Beijing, China
- Shicheng Yu
  - Department of Ophthalmology, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
- Zhixi Li
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou 510060, China
  - Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Shiyi Yin
  - Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jingyuan Zhu
  - Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Wenbin Wei
  - Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China

9
Woof W, de Guimarães TAC, Al-Khuzaei S, Daich Varela M, Sen S, Bagga P, Mendes B, Shah M, Burke P, Parry D, Lin S, Naik G, Ghoshal B, Liefers B, Fu DJ, Georgiou M, Nguyen Q, da Silva AS, Liu Y, Fujinami-Yokokawa Y, Sumodhee D, Patel P, Furman J, Moghul I, Moosajee M, Sallum J, De Silva SR, Lorenz B, Holz F, Fujinami K, Webster AR, Mahroo O, Downes SM, Madhusudhan S, Balaskas K, Michaelides M, Pontikos N. Quantification of Fundus Autofluorescence Features in a Molecularly Characterized Cohort of More Than 3500 Inherited Retinal Disease Patients from the United Kingdom. medRxiv [Preprint] 2024:2024.03.24.24304809. [PMID: 38585957 PMCID: PMC10996753 DOI: 10.1101/2024.03.24.24304809] [Indexed: 04/09/2024]
Abstract
Purpose To quantify relevant fundus autofluorescence (FAF) image features cross-sectionally and longitudinally in a large cohort of patients with inherited retinal disease (IRD). Design Retrospective study of imaging data (55-degree blue-FAF on Heidelberg Spectralis) from patients. Participants Patients with a clinical and molecularly confirmed diagnosis of IRD who underwent 55-degree FAF imaging at Moorfields Eye Hospital (MEH) and the Royal Liverpool Hospital (RLH) between 2004 and 2019. Methods Five FAF features of interest were defined: vessels, optic disc, perimacular ring of increased signal (ring), relative hypo-autofluorescence (hypo-AF) and hyper-autofluorescence (hyper-AF). Features were manually annotated by six graders in a subset of patients according to a defined grading protocol, producing segmentation masks used to train an AI model, AIRDetect, which was then applied to the entire MEH imaging dataset. Main Outcome Measures Quantitative FAF imaging features, including area in mm² and vessel metrics, were analysed cross-sectionally by gene and age, and longitudinally to determine rate of progression. AIRDetect feature segmentation and detection were validated with Dice score and precision/recall, respectively. Results A total of 45,749 FAF images from 3,606 IRD patients from MEH, covering 170 genes, were automatically segmented using AIRDetect. Model-grader Dice scores for disc, hypo-AF, hyper-AF, ring and vessels were 0.86, 0.72, 0.69, 0.68 and 0.65, respectively. The five genes with the largest hypo-AF areas were CHM, ABCC6, ABCA4, RDH12, and RPE65, with mean per-patient areas of 41.5, 30.0, 21.9, 21.4, and 15.1 mm². The five genes with the largest hyper-AF areas were BEST1, CDH23, RDH12, MYO7A, and NR2E3, with mean areas of 0.49, 0.45, 0.44, 0.39, and 0.34 mm², respectively. The five genes with the largest ring areas were CDH23, NR2E3, CRX, EYS, and MYO7A, with mean areas of 3.63, 3.32, 2.84, 2.39, and 2.16 mm².
Vessel density was highest in EFEMP1, BEST1, TIMP3, RS1, and PRPH2 (10.6%, 10.3%, 9.8%, 9.7%, 8.9%) and lower in retinitis pigmentosa (RP) and Leber congenital amaurosis genes. Longitudinal analysis of decreasing ring area in four RP genes (RPGR, USH2A, RHO, EYS) found EYS to be the fastest progressor, at -0.18 mm²/year. Conclusions We have conducted the first large-scale cross-sectional and longitudinal quantitative analysis of FAF features across a diverse range of IRDs using a novel AI approach.
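For readers unfamiliar with the validation metric used above, the Dice score between a model mask and a grader mask can be sketched in a few lines of pure Python (the toy masks below are illustrative, not the study's data, and this is not the AIRDetect implementation):

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks (flattened 0/1 lists):
    2*|A∩B| / (|A| + |B|). 1.0 = perfect overlap, 0.0 = none."""
    intersection = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Toy 8-pixel model mask vs. grader mask
pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 1, 1, 0, 0]
print(dice_score(pred, truth))  # 0.75
```

A Dice score of 0.86 for the disc thus means the model and grader masks overlap substantially more than they disagree.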
Collapse
|
10
|
Kang D, Wu H, Yuan L, Shi Y, Jin K, Grzybowski A. A Beginner's Guide to Artificial Intelligence for Ophthalmologists. Ophthalmol Ther 2024; 13:1841-1855. [PMID: 38734807 PMCID: PMC11178755 DOI: 10.1007/s40123-024-00958-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2024] [Accepted: 04/22/2024] [Indexed: 05/13/2024] Open
Abstract
The integration of artificial intelligence (AI) in ophthalmology has promoted the development of the discipline, offering opportunities for enhancing diagnostic accuracy, patient care, and treatment outcomes. This paper aims to provide a foundational understanding of AI applications in ophthalmology, with a focus on interpreting studies related to AI-driven diagnostics. The core of our discussion is to explore various AI methods, including deep learning (DL) frameworks for detecting and quantifying ophthalmic features in imaging data, as well as using transfer learning for effective model training in limited datasets. The paper highlights the importance of high-quality, diverse datasets for training AI models and the need for transparent reporting of methodologies to ensure reproducibility and reliability in AI studies. Furthermore, we address the clinical implications of AI diagnostics, emphasizing the balance between minimizing false negatives to avoid missed diagnoses and reducing false positives to prevent unnecessary interventions. The paper also discusses the ethical considerations and potential biases in AI models, underscoring the importance of continuous monitoring and improvement of AI systems in clinical settings. In conclusion, this paper serves as a primer for ophthalmologists seeking to understand the basics of AI in their field, guiding them through the critical aspects of interpreting AI studies and the practical considerations for integrating AI into clinical practice.
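The balance the authors describe — minimizing false negatives versus reducing false positives — can be made concrete with a small sketch: sensitivity (avoiding missed diagnoses) and specificity (avoiding false alarms) pull in opposite directions as the decision threshold shifts. The labels and scores below are illustrative toy values:

```python
def confusion_counts(labels, scores, threshold):
    """Count TP, FP, TN, FN for binary labels given a score threshold."""
    tp = fp = tn = fn = 0
    for y, s in zip(labels, scores):
        pred = 1 if s >= threshold else 0
        if pred == 1 and y == 1:
            tp += 1
        elif pred == 1 and y == 0:
            fp += 1
        elif pred == 0 and y == 0:
            tn += 1
        else:
            fn += 1
    return tp, fp, tn, fn

def sensitivity_specificity(labels, scores, threshold):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp, fp, tn, fn = confusion_counts(labels, scores, threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative labels (1 = disease) and model scores
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.1]

# A strict threshold misses a case (lower sensitivity, perfect specificity)...
sens, spec = sensitivity_specificity(labels, scores, 0.65)
print(round(sens, 2), round(spec, 2))  # 0.67 1.0
# ...while a lenient threshold catches all cases but raises a false alarm.
sens, spec = sensitivity_specificity(labels, scores, 0.35)
print(round(sens, 2), round(spec, 2))  # 1.0 0.67
```

Which operating point is appropriate depends on the clinical cost of a missed diagnosis relative to an unnecessary intervention.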
Collapse
Affiliation(s)
- Daohuan Kang
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
| | - Hongkang Wu
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
| | - Lu Yuan
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
| | - Yu Shi
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Zhejiang University School of Medicine, Hangzhou, China
| | - Kai Jin
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China.
| | - Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
| |
Collapse
|
11
|
Musleh AM, AlRyalat SA, Abid MN, Salem Y, Hamila HM, Sallam AB. Diagnostic accuracy of artificial intelligence in detecting retinitis pigmentosa: A systematic review and meta-analysis. Surv Ophthalmol 2024; 69:411-417. [PMID: 38042377 DOI: 10.1016/j.survophthal.2023.11.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2023] [Revised: 11/20/2023] [Accepted: 11/27/2023] [Indexed: 12/04/2023]
Abstract
Retinitis pigmentosa (RP) is often undetected in its early stages. Artificial intelligence (AI) has emerged as a promising tool in medical diagnostics. We therefore conducted a systematic review and meta-analysis to evaluate the diagnostic accuracy of AI in detecting RP using various ophthalmic images. We conducted a systematic search of the PubMed, Scopus, and Web of Science databases on December 31, 2022. We included English-language studies that used any ophthalmic imaging modality, such as OCT or fundus photography, used any AI technology, had at least one expert in ophthalmology as a reference standard, and proposed an AI algorithm able to distinguish between images with and without retinitis pigmentosa features. We considered sensitivity, specificity, and area under the curve (AUC) the main measures of accuracy. We included a total of 14 studies in the qualitative analysis and 10 studies in the quantitative analysis. In total, the studies included in the meta-analysis dealt with 920,162 images. Overall, AI showed excellent performance in detecting RP, with pooled sensitivity and specificity of 0.985 (95% CI: 0.948-0.996) and 0.993 (95% CI: 0.982-0.997), respectively. The area under the receiver operating characteristic curve (AUROC), using a random-effects model, was 0.999 (95% CI: 0.998-1.000; P < 0.001). The Zhou and Dendukuri I² test revealed a low level of heterogeneity between the studies, with I² = 19.94% for sensitivity and I² = 21.07% for specificity. The bivariate I² (20.33%) also suggested a low degree of heterogeneity. We found evidence supporting the accuracy of AI in the detection of RP, and the level of heterogeneity between the studies was low.
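The I² statistic quoted above can be illustrated with a simplified univariate fixed-effect sketch of Cochran's Q (the paper itself uses the bivariate Zhou–Dendukuri approach, which this toy example does not reproduce; the study effects and variances below are invented for illustration):

```python
def cochran_q(effects, variances):
    """Cochran's Q: weighted squared deviation of study effects from the
    fixed-effect (inverse-variance) pooled estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    return q, pooled

def i_squared(effects, variances):
    """I²: percentage of total variability attributable to between-study
    heterogeneity rather than chance (0% = homogeneous)."""
    q, _ = cochran_q(effects, variances)
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0

# Identical study effects -> no heterogeneity
print(i_squared([2.0, 2.0, 2.0], [0.1, 0.1, 0.1]))    # 0.0
# Widely discrepant effects -> nearly all variability is heterogeneity
print(round(i_squared([1.0, 3.0], [0.04, 0.04]), 1))  # 98.0
```

An I² around 20%, as reported above, therefore indicates that most of the between-study variability is consistent with chance alone.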
Collapse
Affiliation(s)
| | - Saif Aldeen AlRyalat
- Department of Ophthalmology, The University of Jordan, Amman, Jordan; Department of Ophthalmology, Houston Methodist Hospital, Houston, TX, USA.
| | - Mohammad Naim Abid
- Marka Specialty Hospital, Amman, Jordan; Valley Retina Institute, P.A., McAllen, TX, USA
| | - Yahia Salem
- Faculty of Medicine, The University of Jordan, Amman, Jordan
| | | | - Ahmed B Sallam
- Harvey and Bernice Jones Eye Institute at the University of Arkansas for Medical Sciences (UAMS), Little Rock, AR, USA
| |
Collapse
|
12
|
Wang WC, Huang CH, Chung HH, Chen PL, Hu FR, Yang CH, Yang CM, Lin CW, Hsu CC, Chen TC. Metabolomics facilitates differential diagnosis in common inherited retinal degenerations by exploring their profiles of serum metabolites. Nat Commun 2024; 15:3562. [PMID: 38670966 PMCID: PMC11053129 DOI: 10.1038/s41467-024-47911-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Accepted: 04/16/2024] [Indexed: 04/28/2024] Open
Abstract
The diagnosis of inherited retinal degeneration (IRD) is challenging owing to its phenotypic and genotypic complexity. Clinical information is important before a genetic diagnosis is made. Metabolomics studies the entire picture of bioproducts, which are determined by genetic codes and biological reactions. We demonstrated that common IRD diagnoses, including retinitis pigmentosa (RP), cone-rod dystrophy (CRD), Stargardt disease (STGD), and Bietti's crystalline dystrophy (BCD), could be differentiated based on their metabolite heatmaps. In every IRD except BCD, hundreds of metabolites distinguishing patients from the control group were identified in volcano plots and considered potential diagnostic markers. The phenotypes of CRD and STGD overlapped but could be differentiated by their metabolomic features with the assistance of a machine learning model, with 100% accuracy. Moreover, EYS-associated, USH2A-associated, and other RP, which share considerably similar clinical findings, could also be diagnosed using the machine learning model, with 85.7% accuracy. Further study is needed to validate the results in an external dataset. By incorporating mass spectrometry and machine learning, our study proposes a metabolomics-based diagnostic workflow for the clinical and molecular diagnosis of IRD.
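The idea of separating disease groups by their metabolomic profiles can be illustrated, in spirit, with a minimal nearest-centroid classifier over metabolite intensity vectors. The feature values and group labels below are entirely hypothetical, and the study's actual model is not specified in this abstract:

```python
def centroid(rows):
    """Mean feature vector of a list of equal-length vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest_centroid_predict(train, sample):
    """Assign `sample` to the class whose centroid is nearest by squared
    Euclidean distance. train: {label: list of feature vectors}."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = {label: centroid(rows) for label, rows in train.items()}
    return min(centroids, key=lambda label: dist2(centroids[label], sample))

# Entirely hypothetical metabolite-intensity features for two IRD groups
train = {
    "CRD":  [[1.0, 5.0], [1.2, 4.8], [0.9, 5.1]],
    "STGD": [[4.0, 1.0], [4.2, 1.1], [3.9, 0.8]],
}
print(nearest_centroid_predict(train, [4.1, 0.9]))  # STGD
```

Real metabolomic data would involve hundreds of features and require cross-validation, but the geometry — grouping patients by proximity in metabolite space — is the same.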
Collapse
Affiliation(s)
- Wei-Chieh Wang
- Department of Chemistry, National Taiwan University, Taipei, Taiwan
| | - Chu-Hsuan Huang
- Department of Ophthalmology, Cathay General Hospital, Taipei, Taiwan
- School of Medicine, National Tsing Hua University, Hsinchu, Taiwan
| | | | - Pei-Lung Chen
- Graduate Institute of Medical Genomics and Proteomics, College of Medicine, National Taiwan University, Taipei, Taiwan
- Department of Medical Genetics, National Taiwan University Hospital, Taipei, Taiwan
| | - Fung-Rong Hu
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
- Department of Ophthalmology, College of Medicine, National Taiwan University, Taipei, Taiwan
| | - Chang-Hao Yang
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
- Department of Ophthalmology, College of Medicine, National Taiwan University, Taipei, Taiwan
| | - Chung-May Yang
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
- Department of Ophthalmology, College of Medicine, National Taiwan University, Taipei, Taiwan
| | - Chao-Wen Lin
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
| | - Cheng-Chih Hsu
- Department of Chemistry, National Taiwan University, Taipei, Taiwan.
- Leeuwenhoek Laboratories Co. Ltd, Taipei, Taiwan.
| | - Ta-Ching Chen
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan.
- Center of Frontier Medicine, National Taiwan University Hospital, Taipei, Taiwan.
- Research Center for Developmental Biology and Regenerative Medicine, National Taiwan University, Taipei, Taiwan.
- Department of Medical Research, National Taiwan University Hospital, Taipei, Taiwan.
| |
Collapse
|
13
|
Lee DK, Choi YJ, Lee SJ, Kang HG, Park YR. Development of a deep learning model to distinguish the cause of optic disc atrophy using retinal fundus photography. Sci Rep 2024; 14:5079. [PMID: 38429319 PMCID: PMC10907364 DOI: 10.1038/s41598-024-55054-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2023] [Accepted: 02/20/2024] [Indexed: 03/03/2024] Open
Abstract
The differential diagnosis of optic atrophy can be challenging and requires expensive, time-consuming ancillary testing to determine the cause. Although Leber's hereditary optic neuropathy (LHON) and optic neuritis (ON) are both clinically significant causes of optic atrophy, both are relatively rare in the general population, which limits the size of available imaging datasets. This study therefore aimed to develop a deep learning (DL) model based on small datasets that could distinguish the cause of optic disc atrophy using only fundus photography. We retrospectively reviewed fundus photographs of 120 normal eyes, 30 eyes (15 patients) with genetically confirmed LHON, and 30 eyes (26 patients) with ON. Images were split into a training dataset and a test dataset and used for model training with ResNet-18. To visualize the critical regions in retinal photographs that are highly associated with disease prediction, Gradient-weighted Class Activation Mapping (Grad-CAM) was used to generate image-level attention heat maps and to enhance the interpretability of the DL system. In the 3-class classification of normal, LHON, and ON, the area under the receiver operating characteristic curve (AUROC) was 1.0 for normal, 0.988 for LHON, and 0.990 for ON, clearly differentiating each class from the others with an overall accuracy of 0.93. Specifically, when distinguishing between normal and disease cases, the precision, recall, and F1 scores were a perfect 1.0. Furthermore, in differentiating LHON from other conditions, ON from others, and LHON from ON, we consistently observed precision, recall, and F1 scores of 0.8. Model performance was maintained when only the 10% of image pixels identified as important by Grad-CAM were preserved and the rest masked, followed by retraining and evaluation.
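The per-class AUROC figures above each reduce, for a one-vs-rest comparison, to the probability that a randomly chosen positive image is scored above a randomly chosen negative one. A minimal pure-Python sketch via the Mann–Whitney statistic (the labels and scores below are illustrative, not the study's data):

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney statistic: the probability that a random
    positive case is scored above a random negative case (ties = 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative one-vs-rest labels (1 = LHON, 0 = other) and model scores
labels = [1, 1, 1, 0, 0, 0]
scores = [0.95, 0.80, 0.60, 0.70, 0.30, 0.20]
print(round(auroc(labels, scores), 3))  # 0.889
```

An AUROC of 0.988, as reported for LHON, thus means a LHON image almost always outranks a non-LHON image in the model's scoring.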
Collapse
Affiliation(s)
- Dong Kyu Lee
- Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Republic of Korea
| | - Young Jo Choi
- Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Republic of Korea
| | - Seung Jae Lee
- Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Republic of Korea
| | - Hyun Goo Kang
- Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Republic of Korea.
| | - Yu Rang Park
- Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Republic of Korea.
| |
Collapse
|
14
|
Eckardt F, Mittas R, Horlava N, Schiefelbein J, Asani B, Michalakis S, Gerhardt M, Priglinger C, Keeser D, Koutsouleris N, Priglinger S, Theis F, Peng T, Schworm B. Deep Learning-Based Retinal Layer Segmentation in Optical Coherence Tomography Scans of Patients with Inherited Retinal Diseases. Klin Monbl Augenheilkd 2024. [PMID: 38086412 DOI: 10.1055/a-2227-3742] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2024]
Abstract
BACKGROUND In optical coherence tomography (OCT) scans of patients with inherited retinal diseases (IRDs), measurement of the thickness of the outer nuclear layer (ONL) is well established as a surrogate marker for photoreceptor preservation. Current automatic segmentation tools fail at OCT segmentation in IRDs, and manual segmentation is time-consuming. METHODS AND MATERIAL Patients with IRD and an available OCT scan were screened for the present study. Additionally, OCT scans of patients without retinal disease were included to provide training data for artificial intelligence (AI). We trained a U-Net-based model on healthy patients and applied a domain adaptation technique to the IRD patients' scans. RESULTS We established an AI-based image segmentation algorithm that reliably segments the ONL in OCT scans of IRD patients. In a test dataset, the Dice score of the algorithm was 98.7%. Furthermore, we generated thickness maps of the full retina and of the ONL for each patient. CONCLUSION Accurate segmentation of anatomical layers on OCT scans plays a crucial role in predictive models linking retinal structure to visual function. Our algorithm for segmentation of OCT images could provide the basis for further studies on IRDs.
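The thickness maps described in the results can be sketched as a per-column pixel count over a binary layer mask, scaled by the axial pixel size. This is an illustrative simplification, not the authors' pipeline; the resolution value below is a hypothetical placeholder, since the actual scale depends on the OCT device:

```python
def thickness_profile(mask, axial_res_um=3.9):
    """Per-A-scan layer thickness (in µm) from a binary segmentation mask.

    mask: 2-D list of 0/1 values, rows = depth pixels, columns = A-scans.
    axial_res_um: hypothetical axial size of one pixel; device-dependent.
    """
    n_cols = len(mask[0])
    return [sum(row[c] for row in mask) * axial_res_um for c in range(n_cols)]

# Toy 4x3 B-scan mask: the segmented layer spans 2, 3 and 1 pixels
# in columns 0, 1 and 2 respectively.
mask = [
    [0, 1, 0],
    [1, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
]
print([round(t, 1) for t in thickness_profile(mask)])  # [7.8, 11.7, 3.9]
```

Repeating this over every B-scan in a volume yields the 2-D thickness map used to relate retinal structure to visual function.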
Collapse
Affiliation(s)
- Franziska Eckardt
- Department of Ophthalmology, LMU University Hospital, LMU Munich, Munich, Germany
| | - Robin Mittas
- Institute for Computational Biology, Helmholtz Munich, Munich, Germany
| | - Nastassya Horlava
- Institute for Computational Biology, Helmholtz Munich, Munich, Germany
| | | | - Ben Asani
- Department of Ophthalmology, LMU University Hospital, LMU Munich, Munich, Germany
| | - Stylianos Michalakis
- Department of Ophthalmology, LMU University Hospital, LMU Munich, Munich, Germany
| | - Maximilian Gerhardt
- Department of Ophthalmology, LMU University Hospital, LMU Munich, Munich, Germany
| | - Claudia Priglinger
- Department of Ophthalmology, LMU University Hospital, LMU Munich, Munich, Germany
| | - Daniel Keeser
- Department of Psychiatry und Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany
| | - Nikolaos Koutsouleris
- Department of Psychiatry und Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany
| | - Siegfried Priglinger
- Department of Ophthalmology, LMU University Hospital, LMU Munich, Munich, Germany
| | - Fabian Theis
- Institute for Computational Biology, Helmholtz Munich, Munich, Germany
| | - Tingying Peng
- Institute for Computational Biology, Helmholtz Munich, Munich, Germany
| | - Benedikt Schworm
- Department of Ophthalmology, LMU University Hospital, LMU Munich, Munich, Germany
| |
Collapse
|
15
|
Liu TYA, Ling C, Hahn L, Jones CK, Boon CJ, Singh MS. Prediction of visual impairment in retinitis pigmentosa using deep learning and multimodal fundus images. Br J Ophthalmol 2023; 107:1484-1489. [PMID: 35896367 PMCID: PMC10579177 DOI: 10.1136/bjo-2021-320897] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2021] [Accepted: 06/25/2022] [Indexed: 11/03/2022]
Abstract
BACKGROUND The efficiency of clinical trials for retinitis pigmentosa (RP) treatment is limited by the screening burden and lack of reliable surrogate markers for functional end points. Automated methods to determine visual acuity (VA) may help address these challenges. We aimed to determine if VA could be estimated using confocal scanning laser ophthalmoscopy (cSLO) imaging and deep learning (DL). METHODS Snellen corrected VA and cSLO imaging were obtained retrospectively. The Johns Hopkins University (JHU) dataset was used for 10-fold cross-validation and internal testing. The Amsterdam University Medical Centers (AUMC) dataset was used for external independent testing. Both datasets had the same exclusion criteria: visually significant media opacities and images not centred on the central macula. The JHU dataset included patients with RP with and without molecular confirmation. The AUMC dataset only included molecularly confirmed patients with RP. Using transfer learning, three versions of the ResNet-152 neural network were trained: infrared (IR), optical coherence tomography (OCT) and combined image (CI). RESULTS In internal testing (JHU dataset, 2569 images, 462 eyes, 231 patients), the area under the curve (AUC) for the binary classification task of distinguishing between Snellen VA 20/40 or better and worse than Snellen VA 20/40 was 0.83, 0.87 and 0.85 for IR, OCT and CI, respectively. In external testing (AUMC dataset, 349 images, 166 eyes, 83 patients), the AUC was 0.78, 0.87 and 0.85 for IR, OCT and CI, respectively. CONCLUSIONS Our algorithm showed robust performance in predicting visual impairment in patients with RP, thus providing proof-of-concept for predicting structure-function correlation based solely on cSLO imaging in patients with RP.
Collapse
Affiliation(s)
- Tin Yan Alvin Liu
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, Maryland, USA
| | - Carlthan Ling
- Department of Ophthalmology, University of Maryland Medical System, Baltimore, Maryland, USA
| | - Leo Hahn
- Department of Ophthalmology, Amsterdam UMC Locatie AMC, Amsterdam, The Netherlands
| | - Craig K Jones
- Malone Center for Engineering in Healthcare, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA
| | - Camiel Jf Boon
- Department of Ophthalmology, Amsterdam UMC Locatie AMC, Amsterdam, The Netherlands
- Department of Ophthalmology, Leiden University Medical Center, Leiden, The Netherlands
| | - Mandeep S Singh
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, Maryland, USA
- Department of Genetic Medicine, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
| |
Collapse
|
16
|
Chou YB, Kale AU, Lanzetta P, Aslam T, Barratt J, Danese C, Eldem B, Eter N, Gale R, Korobelnik JF, Kozak I, Li X, Li X, Loewenstein A, Ruamviboonsuk P, Sakamoto T, Ting DS, van Wijngaarden P, Waldstein SM, Wong D, Wu L, Zapata MA, Zarranz-Ventura J. Current status and practical considerations of artificial intelligence use in screening and diagnosing retinal diseases: Vision Academy retinal expert consensus. Curr Opin Ophthalmol 2023; 34:403-413. [PMID: 37326222 PMCID: PMC10399944 DOI: 10.1097/icu.0000000000000979] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
PURPOSE OF REVIEW The application of artificial intelligence (AI) technologies in screening and diagnosing retinal diseases may play an important role in telemedicine and has potential to shape modern healthcare ecosystems, including within ophthalmology. RECENT FINDINGS In this article, we examine the latest publications relevant to AI in retinal disease and discuss the currently available algorithms. We summarize four key requirements underlining the successful application of AI algorithms in real-world practice: processing massive data; practicability of an AI model in ophthalmology; policy compliance and the regulatory environment; and balancing profit and cost when developing and maintaining AI models. SUMMARY The Vision Academy recognizes the advantages and disadvantages of AI-based technologies and gives insightful recommendations for future directions.
Collapse
Affiliation(s)
- Yu-Bai Chou
- Department of Ophthalmology, Taipei Veterans General Hospital
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Aditya U. Kale
- Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
| | - Paolo Lanzetta
- Department of Medicine – Ophthalmology, University of Udine
- Istituto Europeo di Microchirurgia Oculare, Udine, Italy
| | - Tariq Aslam
- Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, University of Manchester School of Health Sciences, Manchester, UK
| | - Jane Barratt
- International Federation on Ageing, Toronto, Canada
| | - Carla Danese
- Department of Medicine – Ophthalmology, University of Udine
- Department of Ophthalmology, AP-HP Hôpital Lariboisière, Université Paris Cité, Paris, France
| | - Bora Eldem
- Department of Ophthalmology, Hacettepe University, Ankara, Turkey
| | - Nicole Eter
- Department of Ophthalmology, University of Münster Medical Center, Münster, Germany
| | - Richard Gale
- Department of Ophthalmology, York Teaching Hospital NHS Foundation Trust, York, UK
| | - Jean-François Korobelnik
- Service d’ophtalmologie, CHU Bordeaux
- University of Bordeaux, INSERM, BPH, UMR1219, F-33000 Bordeaux, France
| | - Igor Kozak
- Moorfields Eye Hospital Centre, Abu Dhabi, UAE
| | - Xiaorong Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin
| | - Xiaoxin Li
- Xiamen Eye Center, Xiamen University, Xiamen, China
| | - Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Paisan Ruamviboonsuk
- Department of Ophthalmology, College of Medicine, Rangsit University, Rajavithi Hospital, Bangkok, Thailand
| | - Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University, Kagoshima, Japan
| | - Daniel S.W. Ting
- Singapore National Eye Center, Duke-NUS Medical School, Singapore
| | - Peter van Wijngaarden
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
| | | | - David Wong
- Unity Health Toronto – St. Michael's Hospital, University of Toronto, Toronto, Canada
| | - Lihteh Wu
- Macula, Vitreous and Retina Associates of Costa Rica, San José, Costa Rica
| | | | | |
Collapse
|
17
|
Muchuchuti S, Viriri S. Retinal Disease Detection Using Deep Learning Techniques: A Comprehensive Review. J Imaging 2023; 9:84. [PMID: 37103235 PMCID: PMC10145952 DOI: 10.3390/jimaging9040084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Revised: 04/02/2023] [Accepted: 04/07/2023] [Indexed: 04/28/2023] Open
Abstract
Millions of people worldwide are affected by retinal abnormalities. Early detection and treatment of these abnormalities could arrest further progression, saving multitudes from avoidable blindness. Manual disease detection is time-consuming, tedious and lacks repeatability. There have been efforts to automate ocular disease detection, riding on the successes of the application of Deep Convolutional Neural Networks (DCNNs) and vision transformers (ViTs) for Computer-Aided Diagnosis (CAD). These models have performed well; however, challenges remain owing to the complex nature of retinal lesions. This work reviews the most common retinal pathologies, provides an overview of prevalent imaging modalities and presents a critical evaluation of current deep-learning research for the detection and grading of glaucoma, diabetic retinopathy, age-related macular degeneration and multiple retinal diseases. The work concludes that CAD, through deep learning, will increasingly be vital as an assistive technology. As future work, there is a need to explore the potential impact of using ensemble CNN architectures in multiclass, multilabel tasks. Efforts should also be expended on improving model explainability to win the trust of clinicians and patients.
Collapse
Affiliation(s)
| | - Serestina Viriri
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4001, South Africa
| |
Collapse
|
18
|
Charng J, Alam K, Swartz G, Kugelman J, Alonso-Caneiro D, Mackey DA, Chen FK. Deep learning: applications in retinal and optic nerve diseases. Clin Exp Optom 2022:1-10. [PMID: 35999058 DOI: 10.1080/08164622.2022.2111201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022] Open
Abstract
Deep learning (DL) represents a paradigm-shifting, burgeoning field of research with emerging clinical applications in optometry. Unlike traditional programming, which relies on human-set rules, DL works by exposing the algorithm to a large amount of annotated data and allowing the software to develop its own set of rules (i.e. learn) by adjusting the parameters inside the model (network) during a training process, in order to complete the task on its own. One major limitation of traditional programming is that complex tasks may require an extensive set of rules to complete the assignment accurately. Additionally, traditional programming can be susceptible to human bias arising from programmer experience. With the dramatic increase in the amount and complexity of clinical data, DL has been used to automate data analysis and thus to assist clinicians in patient management. This review presents the latest advances in DL for managing posterior eye diseases, as well as DL-based solutions for patients with vision loss.
Collapse
Affiliation(s)
- Jason Charng
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
| | - Khyber Alam
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
| | - Gavin Swartz
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
| | - Jason Kugelman
- School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
| | - David Alonso-Caneiro
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
| | - David A Mackey
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
| | - Fred K Chen
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Department of Ophthalmology, Royal Perth Hospital, Perth, Western Australia, Australia
| |
Collapse
|
19
|
Crincoli E, Zhao Z, Querques G, Sacconi R, Carlà MM, Giannuzzi F, Ferrara S, Ribarich N, L'Abbate G, Rizzo S, Souied EH, Miere A. Deep learning to distinguish Best vitelliform macular dystrophy (BVMD) from adult-onset vitelliform macular degeneration (AVMD). Sci Rep 2022; 12:12745. [PMID: 35882966 PMCID: PMC9325755 DOI: 10.1038/s41598-022-16980-z] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2022] [Accepted: 07/19/2022] [Indexed: 11/17/2022] Open
Abstract
Initial stages of Best vitelliform macular dystrophy (BVMD) and adult vitelliform macular dystrophy (AVMD) harbor similar blue autofluorescence (BAF) and optical coherence tomography (OCT) features. Nevertheless, BVMD is characterized by a worse final-stage visual acuity (VA) and an earlier onset of critical VA loss. Currently, differential diagnosis requires an invasive and time-consuming process including genetic testing, electrooculography (EOG), full-field electroretinography (ERG), and visual field testing. The aim of our study was to automatically classify OCT and BAF images from stage II BVMD and AVMD eyes using a deep learning algorithm, and to identify an image processing method to facilitate human-based clinical diagnosis from non-invasive tests like BAF and OCT without the use of machine-learning technology. After the application of a customized image processing method, OCT images were characterized by a dark appearance of the vitelliform deposit in the case of BVMD and a lighter, inhomogeneous appearance in the case of AVMD. By contrast, a customized method for processing BAF images revealed that BVMD and AVMD were characterized respectively by the presence or absence of a hypo-autofluorescent region of retina encircling the central hyper-autofluorescent foveal lesion. The human-based evaluation of both BAF and OCT images showed significantly higher correspondence to ground-truth reference when performed on processed images. The deep learning classifiers based on BAF and OCT images showed around 90% classification accuracy with both processed and unprocessed images, which was significantly higher than human performance on both. The ability to differentiate between the two entities without resorting to invasive and expensive tests may offer a valuable clinical tool in the management of the two diseases.
Collapse
Affiliation(s)
- Emanuele Crincoli
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, 40, avenue de Verdun, 94100, Créteil, France
- Ophthalmology Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00166, Rome, Italy
- Catholic University of "Sacro Cuore", Largo Francesco Vito 1, 00166, Rome, Italy
| | - Zhanlin Zhao
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, 40, avenue de Verdun, 94100, Créteil, France
| | - Giuseppe Querques
- Department of Ophthalmology University Vita-Salute IRCCS San Raffaele Scientific Institute, Via Olgettina, 60, 20132, Milan, Italy
| | - Riccardo Sacconi
- Department of Ophthalmology University Vita-Salute IRCCS San Raffaele Scientific Institute, Via Olgettina, 60, 20132, Milan, Italy
| | - Matteo Maria Carlà
- Ophthalmology Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00166, Rome, Italy; Catholic University of "Sacro Cuore", Largo Francesco Vito 1, 00166, Rome, Italy
| | - Federico Giannuzzi
- Ophthalmology Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00166, Rome, Italy; Catholic University of "Sacro Cuore", Largo Francesco Vito 1, 00166, Rome, Italy
| | - Silvia Ferrara
- Ophthalmology Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00166, Rome, Italy; Catholic University of "Sacro Cuore", Largo Francesco Vito 1, 00166, Rome, Italy
| | - Nicolò Ribarich
- Department of Ophthalmology University Vita-Salute IRCCS San Raffaele Scientific Institute, Via Olgettina, 60, 20132, Milan, Italy
| | - Gaia L'Abbate
- Department of Ophthalmology University Vita-Salute IRCCS San Raffaele Scientific Institute, Via Olgettina, 60, 20132, Milan, Italy
| | - Stanislao Rizzo
- Ophthalmology Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00166, Rome, Italy; Catholic University of "Sacro Cuore", Largo Francesco Vito 1, 00166, Rome, Italy
| | - Eric H Souied
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, 40, avenue de Verdun, 94100, Créteil, France; Ethics Committee of the Federation France Macula, 2018-27, 40 Av. de Verdun, 94010, Créteil, France
| | - Alexandra Miere
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, 40, avenue de Verdun, 94100, Créteil, France.
| |
Collapse
|
20
|
A Systematic Review of Artificial Intelligence Applications Used for Inherited Retinal Disease Management. Medicina (Kaunas) 2022; 58:504. [PMID: 35454342 PMCID: PMC9028098 DOI: 10.3390/medicina58040504] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Revised: 03/29/2022] [Accepted: 03/30/2022] [Indexed: 12/15/2022] Open
Abstract
Nowadays, Artificial Intelligence (AI) and its subfields, Machine Learning (ML) and Deep Learning (DL), are used for a variety of medical applications. These tools can help clinicians track a patient's illness cycle, assist with diagnosis, and offer appropriate therapy alternatives. Each approach employed may address one or more AI problems, such as segmentation, prediction, recognition, classification, and regression. However, the amount of AI-focused research on Inherited Retinal Diseases (IRDs) is currently limited. Thus, this study aims to examine artificial intelligence approaches used in managing inherited retinal disorders, from diagnosis to treatment. A total of 20,906 articles were identified using a Natural Language Processing (NLP) method from the IEEE Xplore, Springer, Elsevier, MDPI, and PubMed databases, and papers published from 2010 to 30 October 2021 were included in this systematic review. The resulting review describes the AI approaches applied to images from different IRD patient categories, the most commonly used AI architectures and models with their imaging modalities, and the main benefits and challenges of using such methods.
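The article-identification step this review describes can be approximated with a keyword filter over candidate titles. This is only a stand-in sketch for the review's actual NLP pipeline; the inclusion terms and example titles below are assumptions, not the authors' criteria.

```python
import re

# Hypothetical inclusion terms; the review's real NLP criteria are richer.
AI_PATTERN = re.compile(
    r"(artificial intelligence|machine learning|deep learning)", re.I)
IRD_PATTERN = re.compile(
    r"(inherited retinal|retinitis pigmentosa|stargardt)", re.I)

def screen(titles):
    """Keep titles mentioning both an AI term and an IRD term."""
    return [t for t in titles if AI_PATTERN.search(t) and IRD_PATTERN.search(t)]

titles = [
    "Deep learning for Stargardt disease classification",
    "Cataract surgery outcomes in adults",
    "Machine learning triage of inherited retinal disease referrals",
]
print(screen(titles))  # keeps the first and third titles only
```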
Collapse
|
21
|
Deep Learning to Distinguish ABCA4-Related Stargardt Disease from PRPH2-Related Pseudo-Stargardt Pattern Dystrophy. J Clin Med 2021; 10:5742. [PMID: 34945039 PMCID: PMC8708395 DOI: 10.3390/jcm10245742] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Revised: 10/18/2021] [Accepted: 12/06/2021] [Indexed: 11/17/2022] Open
Abstract
(1) Background: Recessive Stargardt disease (STGD1) and multifocal pattern dystrophy simulating Stargardt disease ("pseudo-Stargardt pattern dystrophy", PSPD) share phenotypic similarities, making clinical diagnosis difficult. Our aim was to assess whether a deep learning classifier pretrained on fundus autofluorescence (FAF) images can assist in distinguishing ABCA4-related STGD1 from PRPH2/RDS-related PSPD, and to compare its performance with that of retinal specialists. (2) Methods: We trained a convolutional neural network (CNN) using 729 FAF images from normal patients or patients with inherited retinal diseases (IRDs). Transfer learning was then used to update the weights of a ResNet50V2, which classified 370 FAF images into STGD1 and PSPD. Retina specialists evaluated the same dataset. The performance of the CNN and that of the retina specialists were compared in terms of accuracy, sensitivity, and precision. (3) Results: The CNN accuracy on the test dataset of 111 images was 0.882. The AUROC was 0.890, the precision was 0.883, and the sensitivity was 0.883. Accuracy averaged 0.816 for retina experts and 0.724 for retina fellows. (4) Conclusions: This proof-of-concept study demonstrates that, even with small databases, a pretrained CNN is able to distinguish between STGD1 and PSPD with good accuracy.
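The transfer-learning recipe in this abstract — a frozen pretrained backbone with only the final classification layer retrained — can be sketched without a deep learning framework. Here a fixed random projection stands in for the frozen ResNet50V2 feature extractor, and a logistic-regression head is trained on toy two-class data; every name and number below is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random projection standing in for the frozen, pretrained backbone.
W_backbone = rng.normal(size=(64 * 64, 32)) / 64.0

def features(images):
    """'Frozen backbone': flatten images and project to 32 features."""
    return np.tanh(images.reshape(len(images), -1) @ W_backbone)

def train_head(X, y, lr=0.1, steps=500):
    """Logistic-regression head on frozen features (the 'transfer' step)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid
        g = p - y                           # gradient of log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Toy two-class "FAF" data: classes differ only in mean intensity.
X0 = rng.normal(0.2, 0.1, size=(40, 64, 64))
X1 = rng.normal(0.8, 0.1, size=(40, 64, 64))
images = np.concatenate([X0, X1])
labels = np.concatenate([np.zeros(40), np.ones(40)])

F = features(images)
w, b = train_head(F, labels)
acc = ((F @ w + b > 0) == labels).mean()
print(f"train accuracy: {acc:.2f}")  # high accuracy expected on this toy data
```

In practice the backbone would be an actual pretrained network whose early layers are kept frozen while the new head (and optionally the last layers) are fine-tuned.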
Collapse
|
22
|
Daich Varela M, Esener B, Hashem SA, Cabral de Guimaraes TA, Georgiou M, Michaelides M. Structural evaluation in inherited retinal diseases. Br J Ophthalmol 2021; 105:1623-1631. [PMID: 33980508 PMCID: PMC8639906 DOI: 10.1136/bjophthalmol-2021-319228] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 04/07/2021] [Accepted: 04/21/2021] [Indexed: 12/20/2022]
Abstract
Ophthalmic genetics is a field that has been rapidly evolving over the last decade, mainly due to the flourishing of translational medicine for inherited retinal diseases (IRD). In this review, we address the different methods by which retinal structure can be objectively and accurately assessed in IRD. We review standard-of-care imaging for these patients: colour fundus photography, fundus autofluorescence imaging and optical coherence tomography (OCT), as well as higher-resolution and/or newer technologies including OCT angiography, adaptive optics imaging, fundus imaging using a range of wavelengths, magnetic resonance imaging, laser speckle flowgraphy and retinal oximetry, illustrating their utility using paradigm genotypes with ongoing therapeutic efforts/trials.
Collapse
Affiliation(s)
- Malena Daich Varela
- Moorfields Eye Hospital City Road Campus, London, UK
- UCL Institute of Ophthalmology, University College London, London, UK
| | - Burak Esener
- Department of Ophthalmology, Inonu University School of Medicine, Malatya, Turkey
| | - Shaima A Hashem
- Moorfields Eye Hospital City Road Campus, London, UK
- UCL Institute of Ophthalmology, University College London, London, UK
| | | | - Michalis Georgiou
- Moorfields Eye Hospital City Road Campus, London, UK
- UCL Institute of Ophthalmology, University College London, London, UK
| | - Michel Michaelides
- Moorfields Eye Hospital City Road Campus, London, UK
- UCL Institute of Ophthalmology, University College London, London, UK
| |
Collapse
|
23
|
Liu Y, Yuan X, Jiang X, Wang P, Kou J, Wang H, Liu M. Dilated Adversarial U-Net Network for automatic gross tumor volume segmentation of nasopharyngeal carcinoma. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107722] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
24
|
Tan TE, Chan HW, Singh M, Wong TY, Pulido JS, Michaelides M, Sohn EH, Ting D. Artificial intelligence for diagnosis of inherited retinal disease: an exciting opportunity and one step forward. Br J Ophthalmol 2021; 105:1187-1189. [PMID: 34031045 DOI: 10.1136/bjophthalmol-2021-319365] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Affiliation(s)
- Tien-En Tan
- Singapore National Eye Centre, Singapore
- Singapore Eye Research Institute, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Programme (EYE ACP), Duke-NUS Medical School, Singapore
| | - Hwei Wuen Chan
- Department of Ophthalmology, National University of Singapore, Singapore
- UCL Institute of Ophthalmology, University College London, London, UK
| | - Mandeep Singh
- Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland, USA
| | - Tien Yin Wong
- Singapore National Eye Centre, Singapore
- Singapore Eye Research Institute, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Programme (EYE ACP), Duke-NUS Medical School, Singapore
| | - Jose S Pulido
- UCL Institute of Ophthalmology, University College London, London, UK
| | - Michel Michaelides
- UCL Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - Elliott H Sohn
- Department of Ophthalmology, University of Iowa, Iowa City, Iowa, USA
| | - Daniel Ting
- Singapore National Eye Centre, Singapore
- Singapore Eye Research Institute, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Programme (EYE ACP), Duke-NUS Medical School, Singapore
| |
Collapse
|
25
|
Pole C, Ameri H. Fundus Autofluorescence and Clinical Applications. J Ophthalmic Vis Res 2021; 16:432-461. [PMID: 34394872 PMCID: PMC8358768 DOI: 10.18502/jovr.v16i3.9439] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Accepted: 05/01/2021] [Indexed: 12/20/2022] Open
Abstract
Fundus autofluorescence (FAF) has allowed in vivo mapping of retinal metabolic derangements and structural changes not possible with conventional color imaging. Incident light is absorbed by molecules in the fundus, which are excited and in turn emit photons of specific wavelengths that are captured and processed by a sensor to create a metabolic map of the fundus. Studies on the growing number of FAF platforms have shown that each may be suited to certain clinical scenarios. Scanning laser ophthalmoscopes, fundus cameras, and modifications of these each have benefits and drawbacks that must be considered before and after imaging to properly interpret the images. Emerging clinical evidence has demonstrated the usefulness of FAF in the diagnosis and management of an increasing number of chorioretinal conditions, such as age-related macular degeneration, central serous chorioretinopathy, retinal drug toxicities, and inherited retinal degenerations such as retinitis pigmentosa and Stargardt disease. This article reviews commercial imaging platforms, imaging techniques, and clinical applications of FAF.
Collapse
Affiliation(s)
- Cameron Pole
- Retina Division, USC Roski Eye Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Hossein Ameri
- Retina Division, USC Roski Eye Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| |
Collapse
|
26
|
Fujinami-Yokokawa Y, Ninomiya H, Liu X, Yang L, Pontikos N, Yoshitake K, Iwata T, Sato Y, Hashimoto T, Tsunoda K, Miyata H, Fujinami K. Prediction of causative genes in inherited retinal disorder from fundus photography and autofluorescence imaging using deep learning techniques. Br J Ophthalmol 2021; 105:1272-1279. [PMID: 33879469 PMCID: PMC8380883 DOI: 10.1136/bjophthalmol-2020-318544] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 03/12/2021] [Accepted: 03/28/2021] [Indexed: 02/07/2023]
Abstract
BACKGROUND/AIMS To investigate the utility of a data-driven deep learning approach in patients with inherited retinal disorder (IRD) and to predict the causative genes based on fundus photography and fundus autofluorescence (FAF) imaging. METHODS Clinical and genetic data from 1302 subjects from 729 genetically confirmed families with IRD registered with the Japan Eye Genetics Consortium were reviewed. Three categories of genetic diagnosis were selected, based on the high prevalence of their causative genes: Stargardt disease (ABCA4), retinitis pigmentosa (EYS) and occult macular dystrophy (RP1L1). Fundus photographs and FAF images were cropped in a standardised manner with a macro algorithm. Images for training/testing were selected using a randomised, fourfold cross-validation method. The application programming interface was established to reach a learning accuracy of concordance (target: >80%) between the genetic diagnosis and the machine diagnosis (ABCA4, EYS, RP1L1 and normal). RESULTS A total of 417 images from 156 Japanese subjects were examined, including 115 genetically confirmed patients with disease caused by the three prevalent causative genes and 41 normal subjects. The mean overall test accuracy for fundus photographs and FAF images was 88.2% and 81.3%, respectively. The mean overall sensitivity/specificity values for fundus photographs and FAF images were 88.3%/97.4% and 81.8%/95.5%, respectively. CONCLUSION This study highlights a novel application of deep neural networks to the prediction of causative IRD genes from fundus photographs and FAF, with a prediction accuracy of over 80%. These achievements could substantially improve the quality of medical care by facilitating early diagnosis, especially by non-specialists, improving access to care, reducing the cost of referrals, and preventing unnecessary clinical and genetic testing.
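The randomised fourfold cross-validation used for train/test selection above can be sketched with a standard k-fold partition. This is a generic illustration, not the study's actual code; the shuffling policy and seed are assumptions.

```python
import random

def k_fold_splits(items, k=4, seed=0):
    """Randomised k-fold partition: each fold serves once as the test set
    while the remaining k-1 folds form the training set."""
    idx = list(range(len(items)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

images = [f"img_{n:03d}" for n in range(417)]  # 417 images, as reported
for fold, (train, test) in enumerate(k_fold_splits(images)):
    print(fold, len(train), len(test))  # ~3/4 train, ~1/4 test per fold
```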
Collapse
Affiliation(s)
- Yu Fujinami-Yokokawa
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan; Department of Health Policy and Management, School of Medicine, Keio University, Tokyo, Japan; UCL Institute of Ophthalmology, UCL, London, UK; Graduate School of Health Management, Keio University, Tokyo, Japan
| | - Hideki Ninomiya
- Department of Health Policy and Management, School of Medicine, Keio University, Tokyo, Japan
| | - Xiao Liu
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan
| | - Lizhu Yang
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan
| | - Nikolas Pontikos
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan; UCL Institute of Ophthalmology, UCL, London, UK; Division of Inherited Eye Disease, Medical Retina, Moorfields Eye Hospital, London, UK
| | - Kazutoshi Yoshitake
- Division of Molecular and Cellular Biology, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan
| | - Takeshi Iwata
- Division of Molecular and Cellular Biology, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan
| | - Yasunori Sato
- Graduate School of Health Management, Keio University, Tokyo, Japan; Department of Preventive Medicine and Public Health, Keio University School of Medicine, Tokyo, Japan
| | - Takeshi Hashimoto
- Graduate School of Health Management, Keio University, Tokyo, Japan; Sports Medicine Research Center, Keio University, Tokyo, Japan
| | - Kazushige Tsunoda
- Division of Vision Research, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan
| | - Hiroaki Miyata
- Department of Health Policy and Management, School of Medicine, Keio University, Tokyo, Japan; Graduate School of Health Management, Keio University, Tokyo, Japan
| | - Kaoru Fujinami
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, Tokyo, Japan; UCL Institute of Ophthalmology, UCL, London, UK; Division of Inherited Eye Disease, Medical Retina, Moorfields Eye Hospital, London, UK
| | | |
Collapse
|
27
|
Perepelkina T, Fulton AB. Artificial Intelligence (AI) Applications for Age-Related Macular Degeneration (AMD) and Other Retinal Dystrophies. Semin Ophthalmol 2021; 36:304-309. [PMID: 33764255 DOI: 10.1080/08820538.2021.1896756] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Artificial intelligence (AI), with its subdivisions (machine and deep learning), is a new branch of computer science that has shown impressive results across a variety of domains. The applications of AI to medicine and biology are being widely investigated. Medical specialties that rely heavily on images, including radiology, dermatology, oncology and ophthalmology, were the first to explore AI approaches in analysis and diagnosis. Applications of AI in ophthalmology have concentrated on diseases with high prevalence, such as diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration (AMD), and glaucoma. Here we provide an overview of AI applications for diagnosis, classification, and clinical management of AMD and other macular dystrophies.
Collapse
Affiliation(s)
- Tatiana Perepelkina
- Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, United States
| | - Anne B Fulton
- Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, United States
| |
Collapse
|