1
Issa M, Sukkarieh G, Gallardo M, Sarbout I, Bonnin S, Tadayoni R, Milea D. Applications of artificial intelligence to inherited retinal diseases: A systematic review. Surv Ophthalmol 2024:S0039-6257(24)00139-5. [PMID: 39566565] [DOI: 10.1016/j.survophthal.2024.11.007]
Abstract
Artificial intelligence (AI)-based methods have been extensively used for the detection and management of various common retinal conditions, but their targeted development for inherited retinal diseases (IRDs) is still nascent. Given the limited availability of retinal subspecialists, genetic testing, and genetic counseling, there is a pressing need for accurate and accessible diagnostic methods. The currently available AI studies, aimed at the detection, classification, and prediction of IRDs, remain mainly retrospective and include relatively small numbers of patients because of the rarity of these diseases. We summarize the latest findings and clinical implications of machine-learning algorithms in IRDs, highlighting the achievements and challenges of AI in assisting ophthalmologists in their clinical practice.
Affiliation(s)
- Ilias Sarbout
- Rothschild Foundation Hospital, Paris, France; Sorbonne University, France
- Ramin Tadayoni
- Rothschild Foundation Hospital, Paris, France; Ophthalmology Department, Université Paris Cité, AP-HP, Hôpital Lariboisière, Paris, France
- Dan Milea
- Rothschild Foundation Hospital, Paris, France; Singapore Eye Research Institute, Singapore; Copenhagen University, Denmark; Angers University Hospital, Angers, France; Duke-NUS Medical School, Singapore
2
Pennesi ME, Wang YZ, Birch DG. Deep learning aided measurement of outer retinal layer metrics as biomarkers for inherited retinal degenerations: opportunities and challenges. Curr Opin Ophthalmol 2024; 35:447-454. [PMID: 39259656] [DOI: 10.1097/icu.0000000000001088]
Abstract
PURPOSE OF REVIEW To summarize currently available retinal imaging and visual function testing methods for assessing inherited retinal degenerations (IRDs), with emphasis on the application of deep learning (DL) approaches to assist in the determination of structural biomarkers for IRDs. RECENT FINDINGS Clinical trials for IRDs have intensified the search for effective biomarkers to serve as endpoints, and DL is increasingly applied to retinal images to detect disease-related structural changes. SUMMARY Assessing photoreceptor loss is a direct way to evaluate IRDs. Outer retinal layer structures, including the outer nuclear layer, ellipsoid zone, photoreceptor outer segment, and RPE, are potential structural biomarkers for IRDs. More work is needed on the relationship between retinal structure and visual function.
Affiliation(s)
- Mark E Pennesi
- Retina Foundation of the Southwest, Dallas, Texas
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Yi-Zhong Wang
- Retina Foundation of the Southwest, Dallas, Texas
- Department of Ophthalmology, University of Texas Southwestern Medical Center at Dallas, Dallas, Texas, USA
- David G Birch
- Retina Foundation of the Southwest, Dallas, Texas
- Department of Ophthalmology, University of Texas Southwestern Medical Center at Dallas, Dallas, Texas, USA
3
Zhang H, Zhang K, Wang J, Yu S, Li Z, Yin S, Zhu J, Wei W. Quickly diagnosing Bietti crystalline dystrophy with deep learning. iScience 2024; 27:110579. [PMID: 39220263] [PMCID: PMC11365386] [DOI: 10.1016/j.isci.2024.110579]
Abstract
Bietti crystalline dystrophy (BCD) is an autosomal recessive inherited retinal disease (IRD), and its early, precise diagnosis remains challenging. This study aimed to diagnose BCD and classify its clinical stage from ultra-wide-field (UWF) color fundus photographs (CFPs) using deep learning (DL). All CFPs were labeled as BCD, retinitis pigmentosa (RP), or normal, and the BCD patients were further divided into three stages. The DL models ResNeXt, Wide ResNet, and ResNeSt were developed, and model performance was evaluated using accuracy and confusion matrices. Diagnostic interpretability was then verified with heatmaps. The models achieved good classification results. Our study established the largest BCD database in a Chinese population. We developed a rapid diagnostic method for BCD and evaluated the potential efficacy of an automatic diagnosis and grading DL algorithm based on UWF fundus photography in a Chinese cohort of BCD patients.
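A minimal sketch of the kind of evaluation described above: a three-class fundus classifier (BCD / RP / normal) scored with accuracy and a confusion matrix. The Wide ResNet backbone, class order, and image size are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch (not the authors' code): a 3-class fundus classifier scored
# with accuracy and a confusion matrix. Backbone, class order, and image size
# are assumptions. Requires a recent torchvision (weights= keyword).
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import accuracy_score, confusion_matrix

CLASSES = ["BCD", "RP", "normal"]  # assumed label order

model = models.wide_resnet50_2(weights=None)  # stand-in for ResNeXt / ResNeSt
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))
model.eval()

images = torch.randn(8, 3, 224, 224)            # dummy UWF fundus batch
labels = torch.randint(0, len(CLASSES), (8,))   # dummy ground-truth labels

with torch.no_grad():
    preds = model(images).argmax(dim=1)

print("accuracy:", accuracy_score(labels.numpy(), preds.numpy()))
print("confusion matrix (rows = true, cols = predicted):")
print(confusion_matrix(labels.numpy(), preds.numpy(), labels=list(range(len(CLASSES)))))
```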
Affiliation(s)
- Haihan Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Kai Zhang
- Chongqing Chang’an Industrial Group Co. Ltd, Chongqing, China
- Jinyuan Wang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- School of Clinical Medicine, Tsinghua University, Beijing, China
- Shicheng Yu
- Department of Ophthalmology, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
- Zhixi Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou 510060, China
- Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Shiyi Yin
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jingyuan Zhu
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Wenbin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
4
Eckardt F, Mittas R, Horlava N, Schiefelbein J, Asani B, Michalakis S, Gerhardt M, Priglinger C, Keeser D, Koutsouleris N, Priglinger S, Theis F, Peng T, Schworm B. Deep Learning-Based Retinal Layer Segmentation in Optical Coherence Tomography Scans of Patients with Inherited Retinal Diseases. Klin Monbl Augenheilkd 2024. [PMID: 38086412] [DOI: 10.1055/a-2227-3742]
Abstract
BACKGROUND In optical coherence tomography (OCT) scans of patients with inherited retinal diseases (IRDs), measurement of the thickness of the outer nuclear layer (ONL) is well established as a surrogate marker for photoreceptor preservation. Current automatic segmentation tools fail at OCT segmentation in IRDs, and manual segmentation is time-consuming. METHODS AND MATERIAL Patients with IRD and an available OCT scan were screened for the present study. Additionally, OCT scans of patients without retinal disease were included to provide training data for artificial intelligence (AI). We trained a U-Net-based model on healthy patients and applied a domain adaptation technique to the IRD patients' scans. RESULTS We established an AI-based image segmentation algorithm that reliably segments the ONL in OCT scans of IRD patients. In a test dataset, the Dice score of the algorithm was 98.7%. Furthermore, we generated thickness maps of the full retina and of the ONL for each patient. CONCLUSION Accurate segmentation of anatomical layers on OCT scans plays a crucial role in predictive models linking retinal structure to visual function. Our segmentation algorithm for OCT images could provide the basis for further studies on IRDs.
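For context, a minimal sketch of the Dice similarity coefficient used to report the 98.7% agreement above, computed here for toy binary ONL masks; it is not the authors' implementation.

```python
# Minimal sketch: Dice similarity coefficient between a predicted and a manual
# binary ONL mask, the metric used to report segmentation agreement.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy masks standing in for ONL segmentations of one OCT B-scan.
truth = np.zeros((496, 512), dtype=bool)
truth[200:260, :] = True
pred = np.zeros_like(truth)
pred[205:262, :] = True
print(f"Dice = {dice_score(pred, truth):.4f}")
```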
Affiliation(s)
- Franziska Eckardt
- Department of Ophthalmology, LMU University Hospital, LMU Munich, Munich, Germany
- Robin Mittas
- Institute for Computational Biology, Helmholtz Munich, Munich, Germany
- Nastassya Horlava
- Institute for Computational Biology, Helmholtz Munich, Munich, Germany
- Ben Asani
- Department of Ophthalmology, LMU University Hospital, LMU Munich, Munich, Germany
- Stylianos Michalakis
- Department of Ophthalmology, LMU University Hospital, LMU Munich, Munich, Germany
- Maximilian Gerhardt
- Department of Ophthalmology, LMU University Hospital, LMU Munich, Munich, Germany
- Claudia Priglinger
- Department of Ophthalmology, LMU University Hospital, LMU Munich, Munich, Germany
- Daniel Keeser
- Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany
- Nikolaos Koutsouleris
- Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany
- Siegfried Priglinger
- Department of Ophthalmology, LMU University Hospital, LMU Munich, Munich, Germany
- Fabian Theis
- Institute for Computational Biology, Helmholtz Munich, Munich, Germany
- Tingying Peng
- Institute for Computational Biology, Helmholtz Munich, Munich, Germany
- Benedikt Schworm
- Department of Ophthalmology, LMU University Hospital, LMU Munich, Munich, Germany
5
Wang YZ, Juroch K, Birch DG. Deep Learning-Assisted Measurements of Photoreceptor Ellipsoid Zone Area and Outer Segment Volume as Biomarkers for Retinitis Pigmentosa. Bioengineering (Basel) 2023; 10:1394. [PMID: 38135984] [PMCID: PMC10740805] [DOI: 10.3390/bioengineering10121394]
Abstract
The manual segmentation of retinal layers from OCT scan images is time-consuming and costly. The deep learning approach has potential for the automatic delineation of retinal layers to significantly reduce the burden of human graders. In this study, we compared deep learning model (DLM) segmentation with manual correction (DLM-MC) to conventional manual grading (MG) for the measurements of the photoreceptor ellipsoid zone (EZ) area and outer segment (OS) volume in retinitis pigmentosa (RP) to assess whether DLM-MC can be a new gold standard for retinal layer segmentation and for the measurement of retinal layer metrics. Ninety-six high-speed 9 mm 31-line volume scans obtained from 48 patients with RPGR-associated XLRP were selected based on the following criteria: the presence of an EZ band within the scan limit and a detectable EZ in at least three B-scans in a volume scan. All the B-scan images in each volume scan were manually segmented for the EZ and proximal retinal pigment epithelium (pRPE) by two experienced human graders to serve as the ground truth for comparison. The test volume scans were also segmented by a DLM and then manually corrected for EZ and pRPE by the same two graders to obtain DLM-MC segmentation. The EZ area and OS volume were determined by interpolating the discrete two-dimensional B-scan EZ-pRPE layer over the scan area. Dice similarity, Bland-Altman analysis, correlation, and linear regression analyses were conducted to assess the agreement between DLM-MC and MG for the EZ area and OS volume measurements. For the EZ area, the overall mean dice score (SD) between DLM-MC and MG was 0.8524 (0.0821), which was comparable to 0.8417 (0.1111) between two MGs. For the EZ area > 1 mm2, the average dice score increased to 0.8799 (0.0614). When comparing DLM-MC to MG, the Bland-Altman plots revealed a mean difference (SE) of 0.0132 (0.0953) mm2 and a coefficient of repeatability (CoR) of 1.8303 mm2 for the EZ area and a mean difference (SE) of 0.0080 (0.0020) mm3 and a CoR of 0.0381 mm3 for the OS volume. The correlation coefficients (95% CI) were 0.9928 (0.9892-0.9952) and 0.9938 (0.9906-0.9958) for the EZ area and OS volume, respectively. The linear regression slopes (95% CI) were 0.9598 (0.9399-0.9797) and 1.0104 (0.9909-1.0298), respectively. The results from this study suggest that the manual correction of deep learning model segmentation can generate EZ area and OS volume measurements in excellent agreement with those of conventional manual grading in RP. Because DLM-MC is more efficient for retinal layer segmentation from OCT scan images, it has the potential to reduce the burden of human graders in obtaining quantitative measurements of biomarkers for assessing disease progression and treatment outcomes in RP.
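A minimal sketch of the agreement statistics reported above (mean difference, coefficient of repeatability, Pearson correlation, and regression slope) applied to toy paired EZ-area measurements. Taking the coefficient of repeatability as 1.96 x the SD of the paired differences is an assumption; the paper's exact formulas and data are not reproduced here.

```python
# Minimal sketch: agreement statistics between two EZ-area measurement methods
# (e.g., DLM-MC vs. manual grading) on invented paired data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
manual = rng.uniform(0.5, 12.0, size=48)           # EZ areas (mm^2), toy values
dlm_mc = manual + rng.normal(0.01, 0.3, size=48)   # second method, toy values

diff = dlm_mc - manual
mean_diff = diff.mean()
cor = 1.96 * diff.std(ddof=1)                      # coefficient of repeatability (assumed convention)
r, _ = stats.pearsonr(manual, dlm_mc)
slope, intercept, *_ = stats.linregress(manual, dlm_mc)

print(f"mean difference = {mean_diff:.4f} mm^2, CoR = {cor:.4f} mm^2")
print(f"Pearson r = {r:.4f}, regression slope = {slope:.4f}")
```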
Affiliation(s)
- Yi-Zhong Wang
- Retina Foundation of the Southwest, 9600 North Central Expressway, Suite 200, Dallas, TX 75231, USA
- Department of Ophthalmology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, USA
- Katherine Juroch
- Retina Foundation of the Southwest, 9600 North Central Expressway, Suite 200, Dallas, TX 75231, USA
- David Geoffrey Birch
- Retina Foundation of the Southwest, 9600 North Central Expressway, Suite 200, Dallas, TX 75231, USA
- Department of Ophthalmology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, USA
6
Daich Varela M, Sen S, De Guimaraes TAC, Kabiri N, Pontikos N, Balaskas K, Michaelides M. Artificial intelligence in retinal disease: clinical application, challenges, and future directions. Graefes Arch Clin Exp Ophthalmol 2023; 261:3283-3297. [PMID: 37160501] [PMCID: PMC10169139] [DOI: 10.1007/s00417-023-06052-x]
Abstract
Retinal diseases are a leading cause of blindness in developed countries, accounting for the largest share of visually impaired children, working-age adults (inherited retinal disease), and elderly individuals (age-related macular degeneration). These conditions need specialised clinicians to interpret multimodal retinal imaging, with diagnosis and intervention potentially delayed. With an increasing and ageing population, this is becoming a global health priority. One solution is the development of artificial intelligence (AI) software to facilitate rapid data processing. Herein, we review research offering decision support for the diagnosis, classification, monitoring, and treatment of retinal disease using AI. We have prioritised diabetic retinopathy, age-related macular degeneration, inherited retinal disease, and retinopathy of prematurity. There is cautious optimism that these algorithms will be integrated into routine clinical practice to facilitate access to vision-saving treatments, improve efficiency of healthcare systems, and assist clinicians in processing the ever-increasing volume of multimodal data, thereby also liberating time for doctor-patient interaction and co-development of personalised management plans.
Affiliation(s)
- Malena Daich Varela
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
- Nikolas Pontikos
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
- Michel Michaelides
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
7
Liu TYA, Ling C, Hahn L, Jones CK, Boon CJ, Singh MS. Prediction of visual impairment in retinitis pigmentosa using deep learning and multimodal fundus images. Br J Ophthalmol 2023; 107:1484-1489. [PMID: 35896367] [PMCID: PMC10579177] [DOI: 10.1136/bjo-2021-320897]
Abstract
BACKGROUND The efficiency of clinical trials for retinitis pigmentosa (RP) treatment is limited by the screening burden and lack of reliable surrogate markers for functional end points. Automated methods to determine visual acuity (VA) may help address these challenges. We aimed to determine if VA could be estimated using confocal scanning laser ophthalmoscopy (cSLO) imaging and deep learning (DL). METHODS Snellen corrected VA and cSLO imaging were obtained retrospectively. The Johns Hopkins University (JHU) dataset was used for 10-fold cross-validations and internal testing. The Amsterdam University Medical Centers (AUMC) dataset was used for external independent testing. Both datasets had the same exclusion criteria: visually significant media opacities and images not centred on the central macula. The JHU dataset included patients with RP with and without molecular confirmation. The AUMC dataset only included molecularly confirmed patients with RP. Using transfer learning, three versions of the ResNet-152 neural network were trained: infrared (IR), optical coherence tomography (OCT) and combined image (CI). RESULTS In internal testing (JHU dataset, 2569 images, 462 eyes, 231 patients), the area under the curve (AUC) for the binary classification task of distinguishing between Snellen VA 20/40 or better and worse than Snellen VA 20/40 was 0.83, 0.87 and 0.85 for IR, OCT and CI, respectively. In external testing (AUMC dataset, 349 images, 166 eyes, 83 patients), the AUC was 0.78, 0.87 and 0.85 for IR, OCT and CI, respectively. CONCLUSIONS Our algorithm showed robust performance in predicting visual impairment in patients with RP, thus providing proof-of-concept for predicting structure-function correlation based solely on cSLO imaging in patients with RP.
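A minimal sketch of the transfer-learning setup described above: a pre-trained ResNet-152 with its final layer replaced by a binary head for the Snellen 20/40-or-better versus worse-than-20/40 task. The ImageNet weight source, frozen backbone, optimizer, and label coding are illustrative assumptions, and a recent torchvision is assumed.

```python
# Minimal sketch: transfer learning with ResNet-152 re-headed for a binary
# visual-acuity classification task. Not the authors' training pipeline.
import torch
import torch.nn as nn
from torchvision import models

# Downloads ImageNet weights on first use (assumed weight source).
model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
for p in model.parameters():            # freeze the pre-trained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new trainable binary head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One toy training step on a dummy batch standing in for cSLO-derived images.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 1, 0])     # 0 = VA 20/40 or better, 1 = worse (assumed coding)
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print("loss:", float(loss))
```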
Affiliation(s)
- Tin Yan Alvin Liu
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, Maryland, USA
- Carlthan Ling
- Department of Ophthalmology, University of Maryland Medical System, Baltimore, Maryland, USA
- Leo Hahn
- Department of Ophthalmology, Amsterdam UMC Locatie AMC, Amsterdam, The Netherlands
- Craig K Jones
- Malone Center for Engineering in Healthcare, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Camiel JF Boon
- Department of Ophthalmology, Amsterdam UMC Locatie AMC, Amsterdam, The Netherlands
- Department of Ophthalmology, Leiden University Medical Center, Leiden, The Netherlands
- Mandeep S Singh
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, Maryland, USA
- Department of Genetic Medicine, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
8
Loo J, Jaffe GJ, Duncan JL, Birch DG, Farsiu S. VALIDATION OF A DEEP LEARNING-BASED ALGORITHM FOR SEGMENTATION OF THE ELLIPSOID ZONE ON OPTICAL COHERENCE TOMOGRAPHY IMAGES OF AN USH2A-RELATED RETINAL DEGENERATION CLINICAL TRIAL. Retina 2022; 42:1347-1355. [PMID: 35174801] [PMCID: PMC9232868] [DOI: 10.1097/iae.0000000000003448]
Abstract
PURPOSE To assess the generalizability of a deep learning-based algorithm to segment the ellipsoid zone (EZ). METHODS The dataset consisted of 127 spectral-domain optical coherence tomography volumes from eyes of participants with USH2A-related retinal degeneration enrolled in the RUSH2A clinical trial (NCT03146078). The EZ was segmented manually by trained readers and automatically by deep OCT atrophy detection, a deep learning-based algorithm originally developed for macular telangiectasia Type 2. Performance was evaluated using the Dice similarity coefficient between the segmentations, and the absolute difference and Pearson's correlation of measurements of interest obtained from the segmentations. RESULTS With deep OCT atrophy detection, the average (mean ± SD, median) Dice similarity coefficient was 0.79 ± 0.27, 0.90. The average absolute difference in total EZ area was 0.62 ± 1.41, 0.22 mm2 with a correlation of 0.97. The average absolute difference in the maximum EZ length was 222 ± 288, 126 µm with a correlation of 0.97. CONCLUSION Deep OCT atrophy detection segmented EZ in USH2A-related retinal degeneration with good performance. The algorithm is potentially generalizable to other diseases and other biomarkers of interest as well, which is an important aspect of clinical applicability.
Affiliation(s)
- Jessica Loo
- Department of Biomedical Engineering, Duke University, Durham, North Carolina
- Glenn J Jaffe
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Jacque L Duncan
- Department of Ophthalmology, University of California, San Francisco, San Francisco, California
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, North Carolina
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
9
The Natural History of CNGB1-Related Retinopathy: A Longitudinal Phenotypic Analysis. Int J Mol Sci 2022; 23:ijms23126785. [PMID: 35743231] [PMCID: PMC9245601] [DOI: 10.3390/ijms23126785]
Abstract
Cyclic nucleotide-gated channel β1 (CNGB1) encodes a subunit of the rod cyclic nucleotide-gated channel. Pathogenic variants in CNGB1 are responsible for 4% of autosomal recessive retinitis pigmentosa (RP). Several treatment strategies show promise for treating inherited retinal degenerations; however, relevant metrics of progression and sensitive clinical trial endpoints are needed to assess therapeutic efficacy. This study reports the natural history of CNGB1-related RP with a longitudinal phenotypic analysis of 33 molecularly confirmed patients with a mean follow-up period of 4.5 ± 3.9 years (range 0-17). The mean best corrected visual acuity (BCVA) of the right eye was 0.31 ± 0.43 logMAR at baseline and 0.47 ± 0.63 logMAR at the final visit over the study period. The ellipsoid zone (EZ) length was measurable in at least one eye of 23 patients and had a mean rate of constriction of 178 ± 161 µm per year (range 1.0-661 µm), with 57% of patients having a decrease in EZ length of greater than 250 µm in a simulated two-year trial period. A hyperautofluorescent outer ring (hyperAF) area was measurable in 17 patients, with 10 patients not displaying a ring phenotype. The results support previous findings that CNGB1-related RP is a slowly progressive disease in which patients maintain visual acuity. Prospective deep phenotyping studies assessing multimodal retinal imaging and functional measures are now required to determine clinical endpoints for use in a trial.
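As a simple illustration of the progression metric above, the sketch below estimates an annualized EZ-length constriction rate for one eye as the least-squares slope over follow-up visits; the values are invented and the study's exact analysis is not reproduced.

```python
# Minimal sketch: annualized EZ-length constriction rate for one eye, taken as
# the least-squares slope of EZ length against follow-up time (toy values).
import numpy as np

years = np.array([0.0, 1.1, 2.0, 3.2, 4.5])        # time from baseline (years)
ez_um = np.array([2900, 2750, 2560, 2390, 2120])   # EZ length (micrometers)

slope_um_per_year = np.polyfit(years, ez_um, 1)[0]
print(f"EZ constriction rate: {-slope_um_per_year:.0f} um/year")
```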
10
A Systematic Review of Artificial Intelligence Applications Used for Inherited Retinal Disease Management. Medicina (B Aires) 2022; 58:medicina58040504. [PMID: 35454342] [PMCID: PMC9028098] [DOI: 10.3390/medicina58040504]
Abstract
Nowadays, Artificial Intelligence (AI) and its subfields, Machine Learning (ML) and Deep Learning (DL), are used for a variety of medical applications. It can help clinicians track the patient’s illness cycle, assist with diagnosis, and offer appropriate therapy alternatives. Each approach employed may address one or more AI problems, such as segmentation, prediction, recognition, classification, and regression. However, the amount of AI-featured research on Inherited Retinal Diseases (IRDs) is currently limited. Thus, this study aims to examine artificial intelligence approaches used in managing Inherited Retinal Disorders, from diagnosis to treatment. A total of 20,906 articles were identified using the Natural Language Processing (NLP) method from the IEEE Xplore, Springer, Elsevier, MDPI, and PubMed databases, and papers submitted from 2010 to 30 October 2021 are included in this systematic review. The resultant study demonstrates the AI approaches utilized on images from different IRD patient categories and the most utilized AI architectures and models with their imaging modalities, identifying the main benefits and challenges of using such methods.
11
Su Z, Liang B, Shi F, Gelfond J, Šegalo S, Wang J, Jia P, Hao X. Deep learning-based facial image analysis in medical research: a systematic review protocol. BMJ Open 2021; 11:e047549. [PMID: 34764164] [PMCID: PMC8587597] [DOI: 10.1136/bmjopen-2020-047549]
Abstract
INTRODUCTION Deep learning techniques are gaining momentum in medical research. Evidence shows that deep learning has advantages over humans in image identification and classification, such as facial image analysis in detecting people's medical conditions. While positive findings are available, little is known about the state-of-the-art of deep learning-based facial image analysis in the medical context. For the consideration of patients' welfare and the development of the practice, a timely understanding of the challenges and opportunities faced by research on deep-learning-based facial image analysis is needed. To address this gap, we aim to conduct a systematic review to identify the characteristics and effects of deep learning-based facial image analysis in medical research. Insights gained from this systematic review will provide a much-needed understanding of the characteristics, challenges, as well as opportunities in deep learning-based facial image analysis applied in the contexts of disease detection, diagnosis and prognosis. METHODS Databases including PubMed, PsycINFO, CINAHL, IEEEXplore and Scopus will be searched for relevant studies published in English in September, 2021. Titles, abstracts and full-text articles will be screened to identify eligible articles. A manual search of the reference lists of the included articles will also be conducted. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses framework was adopted to guide the systematic review process. Two reviewers will independently examine the citations and select studies for inclusion. Discrepancies will be resolved by group discussions till a consensus is reached. Data will be extracted based on the research objective and selection criteria adopted in this study. ETHICS AND DISSEMINATION As the study is a protocol for a systematic review, ethical approval is not required. The study findings will be disseminated via peer-reviewed publications and conference presentations. PROSPERO REGISTRATION NUMBER CRD42020196473.
Affiliation(s)
- Zhaohui Su
- Center on Smart and Connected Health Technologies, Mays Cancer Center, School of Nursing, UT Health San Antonio, San Antonio, Texas, USA
- Bin Liang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd, Shanghai, China
- J Gelfond
- Epidemiology and Biostatistics, University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA
- Sabina Šegalo
- Department of Microbiology, University of Sarajevo, Sarajevo, Bosnia and Herzegovina
- Jing Wang
- College of Nursing, Florida State University, Tallahassee, Florida, USA
- Peng Jia
- Department of Land Surveying and Geo-Informatics, University of Twente, Enschede, Netherlands
- International Initiative on Spatial Lifecourse Epidemiology (ISLE), Enschede, Netherlands
- Xiaoning Hao
- Division of Health Security Research, National Health Commission of the People's Republic of China, Beijing, China
12
Hormel TT, Hwang TS, Bailey ST, Wilson DJ, Huang D, Jia Y. Artificial intelligence in OCT angiography. Prog Retin Eye Res 2021; 85:100965. [PMID: 33766775] [PMCID: PMC8455727] [DOI: 10.1016/j.preteyeres.2021.100965]
Abstract
Optical coherence tomographic angiography (OCTA) is a non-invasive imaging modality that provides three-dimensional, information-rich vascular images. With numerous studies demonstrating unique capabilities in biomarker quantification, diagnosis, and monitoring, OCTA technology has seen rapid adoption in research and clinical settings. The value of OCTA imaging is significantly enhanced by image analysis tools that provide rapid and accurate quantification of vascular features and pathology. Today, the most powerful image analysis methods are based on artificial intelligence (AI). While AI encompasses a large variety of techniques, machine-learning-based, and especially deep-learning-based, image analysis provides accurate measurements in a variety of contexts, including different diseases and regions of the eye. Here, we discuss the principles of both OCTA and AI that make their combination capable of answering new questions. We also review contemporary applications of AI in OCTA, which include accurate detection of pathologies such as choroidal neovascularization, precise quantification of retinal perfusion, and reliable disease diagnosis.
Affiliation(s)
- Tristan T Hormel
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, 97239, USA
- Thomas S Hwang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, 97239, USA
- Steven T Bailey
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, 97239, USA
- David J Wilson
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, 97239, USA
- David Huang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, 97239, USA
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, 97239, USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR, 97239, USA
13
Chen TC, Lim WS, Wang VY, Ko ML, Chiu SI, Huang YS, Lai F, Yang CM, Hu FR, Jang JSR, Yang CH. Artificial Intelligence-Assisted Early Detection of Retinitis Pigmentosa - the Most Common Inherited Retinal Degeneration. J Digit Imaging 2021; 34:948-958. [PMID: 34244880] [DOI: 10.1007/s10278-021-00479-6]
Abstract
The purpose of this study was to detect the presence of retinitis pigmentosa (RP) on color fundus photographs using a deep learning model. A total of 1670 color fundus photographs from the Taiwan inherited retinal degeneration project and National Taiwan University Hospital were acquired and preprocessed. The fundus photographs were labeled RP or normal and divided into training and validation datasets (n = 1284) and a test dataset (n = 386). Three transfer learning models, based on the pre-trained Inception V3, Inception ResNet V2, and Xception deep learning architectures, respectively, were developed to classify the presence of RP on fundus images. Model sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were compared. The results from the best transfer learning model were compared with the reading results of two general ophthalmologists, one retinal specialist, and one specialist in retina and inherited retinal degenerations. A total of 935 RP and 324 normal images were used to train the models. The test dataset consisted of 193 RP and 193 normal images. Among the three transfer learning models evaluated, the Xception model had the best performance, achieving an AUROC of 96.74%. Gradient-weighted class activation mapping indicated that the contrast between the periphery and the macula on fundus photographs was an important feature in detecting RP. False-positive results were mostly obtained in cases of high myopia with a highly tessellated retina, and false-negative results were mostly obtained in cases of unclear media, such as cataract, that decreased the contrast between the peripheral retina and the macula. Our model demonstrated the highest accuracy of 96.00%, which compared favorably with the average accuracy of 81.50% achieved by the other four ophthalmologists. Moreover, this accuracy was obtained at the same level of sensitivity (95.71%) as an inherited retinal disease specialist. RP is an important disease, but its early and precise diagnosis is challenging. We developed and evaluated a transfer-learning-based model to detect RP from color fundus photographs. The results of this study validate the utility of deep learning in automating the identification of RP from fundus photographs.
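A minimal sketch of gradient-weighted class activation mapping (Grad-CAM), the heatmap technique referred to above, implemented with forward/backward hooks on a stand-in ResNet-50; the paper's Inception V3, Inception ResNet V2, and Xception models and their training details are not reproduced, and the randomly initialized weights here only demonstrate the mechanics.

```python
# Minimal sketch: Grad-CAM heatmap for a CNN classifier via hooks.
# Backbone, target layer, and input are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None).eval()
target_layer = model.layer4                      # last convolutional block

acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

image = torch.randn(1, 3, 224, 224)              # stand-in fundus photograph
logits = model(image)
class_idx = int(logits.argmax(dim=1))            # explain the top predicted class
logits[0, class_idx].backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)       # global-average-pooled gradients
cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print("Grad-CAM heatmap shape:", tuple(cam.shape))
```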
Affiliation(s)
- Ta-Ching Chen
- Department of Ophthalmology, College of Medicine, National Taiwan University, Taipei, Taiwan
- Graduate Institute of Clinical Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan
- Wee Shin Lim
- Department of Computer Science and Information Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei, Taiwan
- Victoria Y Wang
- Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Mei-Lan Ko
- Department of Ophthalmology, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu, Taiwan
- Shu-I Chiu
- Department of Computer Science and Information Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei, Taiwan
- Yu-Shu Huang
- Department of Ophthalmology, College of Medicine, National Taiwan University, Taipei, Taiwan
- Feipei Lai
- Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
- Chung-May Yang
- Department of Ophthalmology, College of Medicine, National Taiwan University, Taipei, Taiwan
- Fung-Rong Hu
- Department of Ophthalmology, College of Medicine, National Taiwan University, Taipei, Taiwan
- Jyh-Shing Roger Jang
- Department of Computer Science and Information Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei, Taiwan
- Chang-Hao Yang
- Department of Ophthalmology, College of Medicine, National Taiwan University, Taipei, Taiwan
14
Arslan J, Samarasinghe G, Sowmya A, Benke KK, Hodgson LAB, Guymer RH, Baird PN. Deep Learning Applied to Automated Segmentation of Geographic Atrophy in Fundus Autofluorescence Images. Transl Vis Sci Technol 2021; 10:2. [PMID: 34228106] [PMCID: PMC8267211] [DOI: 10.1167/tvst.10.8.2]
Abstract
Purpose This study describes the development of a deep learning algorithm based on the U-Net architecture for automated segmentation of geographic atrophy (GA) lesions in fundus autofluorescence (FAF) images. Methods Image preprocessing and normalization by modified adaptive histogram equalization were used for image standardization to improve effectiveness of deep learning. A U-Net-based deep learning algorithm was developed and trained and tested by fivefold cross-validation using FAF images from clinical datasets. The following metrics were used for evaluating the performance for lesion segmentation in GA: dice similarity coefficient (DSC), DSC loss, sensitivity, specificity, mean absolute error (MAE), accuracy, recall, and precision. Results In total, 702 FAF images from 51 patients were analyzed. After fivefold cross-validation for lesion segmentation, the average training and validation scores were found for the most important metric, DSC (0.9874 and 0.9779), for accuracy (0.9912 and 0.9815), for sensitivity (0.9955 and 0.9928), and for specificity (0.8686 and 0.7261). Scores for testing were all similar to the validation scores. The algorithm segmented GA lesions six times more quickly than human performance. Conclusions The deep learning algorithm can be implemented using clinical data with a very high level of performance for lesion segmentation. Automation of diagnostics for GA assessment has the potential to provide savings with respect to patient visit duration, operational cost and measurement reliability in routine GA assessments. Translational Relevance A deep learning algorithm based on the U-Net architecture and image preprocessing appears to be suitable for automated segmentation of GA lesions on clinical data, producing fast and accurate results.
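A minimal sketch of adaptive histogram equalization as an image-standardization step before segmentation, using OpenCV's CLAHE as a stand-in; the paper's "modified adaptive histogram equalization" and its exact parameters are not reproduced here.

```python
# Minimal sketch: CLAHE-based standardization of an 8-bit grayscale FAF image
# before feeding it to a U-Net. Clip limit and tile grid are assumptions.
import cv2
import numpy as np

def standardize_faf(img: np.ndarray) -> np.ndarray:
    """Equalize local contrast with CLAHE and rescale intensities to [0, 1]."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(img)
    return equalized.astype(np.float32) / 255.0

# Toy 8-bit image standing in for a fundus autofluorescence frame.
faf = (np.random.rand(512, 512) * 255).astype(np.uint8)
out = standardize_faf(faf)
print(out.shape, out.dtype, float(out.min()), float(out.max()))
```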
Affiliation(s)
- Janan Arslan
- Centre for Eye Research Australia, University of Melbourne, Royal Victorian Eye & Ear Hospital, East Melbourne, Victoria, Australia
- Department of Surgery, Ophthalmology, University of Melbourne, Parkville, Victoria, Australia
- Gihan Samarasinghe
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Kurt K. Benke
- School of Engineering, University of Melbourne, Parkville, Victoria, Australia
- Centre for AgriBioscience, AgriBio, Bundoora, Victoria, Australia
- Lauren A. B. Hodgson
- Centre for Eye Research Australia, University of Melbourne, Royal Victorian Eye & Ear Hospital, East Melbourne, Victoria, Australia
- Robyn H. Guymer
- Centre for Eye Research Australia, University of Melbourne, Royal Victorian Eye & Ear Hospital, East Melbourne, Victoria, Australia
- Department of Surgery, Ophthalmology, University of Melbourne, Parkville, Victoria, Australia
- Paul N. Baird
- Department of Surgery, Ophthalmology, University of Melbourne, Parkville, Victoria, Australia
15
Abstract
PURPOSE To assess hyperreflective foci (HF) number and distribution in choroideremia (CHM) using spectral domain optical coherence tomography. METHODS Observational, cross-sectional case series. Consecutive patients and matched controls (20 eyes each) underwent best-corrected visual acuity measurement, fundoscopy, blue-light autofluorescence (BL-FAF), and spectral domain optical coherence tomography. Hyperreflective foci were assessed on a horizontal spectral domain optical coherence tomography scan, in the 500-µm area centered on the umbo, and in the 500-µm-wide areas internal (preserved border) and external (pathologic border) to the chorioretinal atrophy of CHM patients, and in the parafovea of controls. Hyperreflective foci were subclassified as retinal or choroidal. The spared central islet was measured on BL-FAF. The primary outcome was HF quantification in CHM. Secondary outcomes included their relationships with atrophy extent. RESULTS Choroideremia eyes disclosed a significantly higher HF number across the pathologic border and in the fovea when compared with controls; in particular, these HF were primarily located in the choroid (59-87%). Moreover, choroidal HF in the pathologic border inversely correlated with the area of the preserved central islet. CONCLUSION Hyperreflective foci might prove to be a biomarker of CHM activity or severity. In this regard, we hypothesize that HF may be related to macrophage activation or progressive retinal pigment epithelium degeneration.
16
Hagag AM, Mitsios A, Narayan A, Abbouda A, Webster AR, Dubis AM, Moosajee M. Prospective deep phenotyping of choroideremia patients using multimodal structure-function approaches. Eye (Lond) 2021; 35:838-852. [PMID: 32467628] [PMCID: PMC8027673] [DOI: 10.1038/s41433-020-0974-1]
Abstract
OBJECTIVE To investigate the retinal changes in choroideremia (CHM) patients to determine correlations between age, structure and function. SUBJECTS/METHODS Twenty-six eyes from 13 male CHM patients were included in this prospective longitudinal study. Participants were divided into <50-year (n = 8) and ≥50-year (n = 5) old groups. Patients were seen at baseline, 6-month, and 1-year visits. Optical coherence tomography (OCT), OCT angiography, and fundus autofluorescence were performed to measure central foveal (CFT) and subfoveal choroidal thickness (SCT), as well as areas of preserved choriocapillaris (CC), ellipsoid zone (EZ), and autofluorescence (PAF). Patients also underwent functional investigations including visual acuity (VA), contrast sensitivity (CS), colour testing, microperimetry, dark adaptometry, and handheld electroretinogram (ERG). Vision-related quality-of-life was assessed by using the NEI-VFQ-25 questionnaire. RESULTS Over the 1-year follow-up period, progressive loss was detected in SCT, EZ, CC, PAF, and CFT. Those ≥50-years exhibited more structural and functional defects with SCT, EZ, CC, and PAF showing strong correlation with patient age (rho ≤ -0.47, p ≤ 0.02). CS and VA did not change over the year, but CS was significantly correlated with age (rho = -0.63, p = 0.001). Delayed to unmeasurable dark adaptation, decreased colour discrimination and no detectable ERG activity were observed in all patients. Minimal functional deterioration was observed over one year with a general trend of slower progression in the ≥50-years group. CONCLUSIONS Quantitative structural parameters including SCT, CC, EZ, and PAF are most useful for disease monitoring in CHM. Extended follow-up studies are required to determine longitudinal functional changes.
Affiliation(s)
- Ahmed M Hagag
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- UCL Institute of Ophthalmology, London, UK
- Andreas Mitsios
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- UCL Institute of Ophthalmology, London, UK
- Alessandro Abbouda
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- UCL Institute of Ophthalmology, London, UK
- Andrew R Webster
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- UCL Institute of Ophthalmology, London, UK
- Adam M Dubis
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- UCL Institute of Ophthalmology, London, UK
- Mariya Moosajee
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- UCL Institute of Ophthalmology, London, UK
- Department of Ophthalmology, Great Ormond Street Hospital for Children NHS Foundation Trust, London, UK
17
Borkovkina S, Camino A, Janpongsri W, Sarunic MV, Jian Y. Real-time retinal layer segmentation of OCT volumes with GPU accelerated inferencing using a compressed, low-latency neural network. Biomed Opt Express 2020; 11:3968-3984. [PMID: 33014579] [PMCID: PMC7510892] [DOI: 10.1364/boe.395279]
Abstract
Segmentation of retinal layers in optical coherence tomography (OCT) is an essential step in OCT image analysis for screening, diagnosis, and assessment of retinal disease progression. Real-time segmentation together with high-speed OCT volume acquisition allows rendering of en face OCT of arbitrary retinal layers, which can be used to increase the yield rate of high-quality scans, provide real-time feedback during image-guided surgeries, and compensate for aberrations in adaptive optics (AO) OCT without using wavefront sensors. We demonstrate here unprecedented real-time OCT segmentation of eight retinal layer boundaries achieved by three levels of optimization: 1) a modified, low-complexity neural network structure, 2) an innovative scheme of neural network compression with TensorRT, and 3) specialized GPU hardware to accelerate computation. Inferencing with the compressed network U-NetRT took 3.5 ms, improving the speed of conventional U-Net inference by 21 times without reducing accuracy. The latency of the entire pipeline from data acquisition to inferencing was only 41 ms, enabled by parallelized batch processing. The system and method allow real-time updating of en face OCT and OCTA visualizations of arbitrary retinal layers and plexuses in continuous-mode scanning. To the best of our knowledge, our work is the first demonstration of an ophthalmic imager with embedded artificial intelligence (AI) providing real-time feedback.
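A minimal sketch of timing batched B-scan inference for a small segmentation network, the kind of measurement behind the latency figures above; it uses plain PyTorch rather than the paper's TensorRT-compressed U-NetRT, and the toy network and batch size are assumptions.

```python
# Minimal sketch: measuring batched inference latency for a tiny stand-in
# segmentation network (the real model segments eight retinal boundaries).
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 9, 3, padding=1)).to(device).eval()

batch = torch.randn(32, 1, 256, 256, device=device)   # a batch of B-scans

with torch.no_grad():
    net(batch)                                         # warm-up pass
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    net(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    print(f"batched inference: {(time.perf_counter() - t0) * 1e3:.1f} ms")
```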
Affiliation(s)
- Acner Camino
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Worawee Janpongsri
- Department of Engineering Science, Simon Fraser University, Burnaby, Canada
- Marinko V. Sarunic
- Department of Engineering Science, Simon Fraser University, Burnaby, Canada
- Yifan Jian
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
18
Murro V, Mucciolo DP, Sodi A, Giorgio D, Passerini I, Pelo E, Virgili G, Rizzo S. En face OCT in choroideremia. Ophthalmic Genet 2020; 40:514-520. [PMID: 31928275] [DOI: 10.1080/13816810.2019.1711429]
Abstract
Purpose: To describe outer retinal tubulation (ORT) morphology using en face OCT in a large group of patients affected by choroideremia (CHM). Material and Methods: We retrospectively reviewed CHM patients examined at the Regional Reference Center for Hereditary Retinal Degenerations at the Eye Clinic in Florence. We considered genetically confirmed CHM patients with ophthalmological, fundus autofluorescence (FAF), and optical coherence tomography (OCT) examinations. Results: We studied en face OCT features of ORTs in 18 CHM patients, for a total of 36 eyes (average age 33 years; SD 19.2; range 13-77 years). ORTs were found in 30 eyes of 15 patients (15/18; 83.3% of the patients). We identified 3 en face OCT patterns: round lesions with scalloped boundaries involving the peripapillary area, with more or less evident pseudodendritic ORTs (PD-ORTs) (pattern p; 26.7%); central islands with PD-ORTs (pattern i; 53.3%); and residual outer retinal areas with no ORTs (pattern r; 20.0%). Conclusions: In CHM, en face OCT imaging allows observation of various morphological features of ORTs at different stages of disease that are not detectable with other imaging techniques. ORTs were not identified in the mildest phenotypes. En face OCT is a useful non-invasive tool for characterization and monitoring of the disease.
Affiliation(s)
- Vittoria Murro
- Department of Neuroscience, Psychology, Drug Research and Child Health, University of Florence, Florence, Italy
- Dario Pasquale Mucciolo
- Department of Neuroscience, Psychology, Drug Research and Child Health, University of Florence, Florence, Italy
- Andrea Sodi
- Department of Neuroscience, Psychology, Drug Research and Child Health, University of Florence, Florence, Italy
- Dario Giorgio
- Department of Neuroscience, Psychology, Drug Research and Child Health, University of Florence, Florence, Italy
- Ilaria Passerini
- Department of Genetic Diagnosis, Careggi Teaching Hospital, Florence, Italy
- Elisabetta Pelo
- Department of Genetic Diagnosis, Careggi Teaching Hospital, Florence, Italy
- Gianni Virgili
- Department of Neuroscience, Psychology, Drug Research and Child Health, University of Florence, Florence, Italy
- Stanislao Rizzo
- Department of Neuroscience, Psychology, Drug Research and Child Health, University of Florence, Florence, Italy
19
Beyond Performance Metrics: Automatic Deep Learning Retinal OCT Analysis Reproduces Clinical Trial Outcome. Ophthalmology 2019; 127:793-801. [PMID: 32019699] [DOI: 10.1016/j.ophtha.2019.12.015]
Abstract
PURPOSE To validate the efficacy of a fully automatic, deep learning-based segmentation algorithm beyond conventional performance metrics by measuring the primary outcome of a clinical trial for macular telangiectasia type 2 (MacTel2). DESIGN Evaluation of diagnostic test or technology. PARTICIPANTS A total of 92 eyes from 62 participants with MacTel2 from a phase 2 clinical trial (NCT01949324) randomized to 1 of 2 treatment groups. METHODS The ellipsoid zone (EZ) defect areas were measured on spectral domain OCT images of each eye at 2 time points (baseline and month 24) by a fully automatic, deep learning-based segmentation algorithm. The change in EZ defect area from baseline to month 24 was calculated and analyzed according to the clinical trial protocol. MAIN OUTCOME MEASURE Difference in the change in EZ defect area from baseline to month 24 between the 2 treatment groups. RESULTS The difference in the change in EZ defect area from baseline to month 24 between the 2 treatment groups measured by the fully automatic segmentation algorithm was 0.072±0.035 mm2 (P = 0.021). This was comparable to the outcome of the clinical trial using semiautomatic measurements by expert readers, 0.065±0.033 mm2 (P = 0.025). CONCLUSIONS The fully automatic segmentation algorithm was as accurate as semiautomatic expert segmentation for assessing EZ defect areas and reliably reproduced the statistically significant primary outcome measure of the clinical trial. This approach, validating the performance of an automatic segmentation algorithm on the primary clinical trial end point, provides a robust gauge of its clinical applicability.
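For illustration, the sketch below computes a between-group difference in the change of EZ defect area on invented data, with a Welch t-test standing in for the trial's actual statistical model, which is not reproduced here.

```python
# Minimal sketch: per-eye change in EZ defect area from baseline to month 24,
# compared between two treatment arms on invented data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
change_a = rng.normal(0.30, 0.15, size=46)   # EZ defect area growth (mm^2), arm A
change_b = rng.normal(0.23, 0.15, size=46)   # EZ defect area growth (mm^2), arm B

diff = change_a.mean() - change_b.mean()
t_stat, p_val = stats.ttest_ind(change_a, change_b, equal_var=False)
print(f"between-group difference in EZ defect area change: {diff:.3f} mm^2 (p = {p_val:.3f})")
```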
20
Zang P, Wang J, Hormel TT, Liu L, Huang D, Jia Y. Automated segmentation of peripapillary retinal boundaries in OCT combining a convolutional neural network and a multi-weights graph search. Biomed Opt Express 2019; 10:4340-4352. [PMID: 31453015] [PMCID: PMC6701529] [DOI: 10.1364/boe.10.004340]
Abstract
Quantitative analysis of the peripapillary retinal layers and capillary plexuses from optical coherence tomography (OCT) and OCT angiography images depend on two segmentation tasks - delineating the boundary of the optic disc and delineating the boundaries between retinal layers. Here, we present a method combining a neural network and graph search to perform these two tasks. A comparison of this novel method's segmentation of the disc boundary showed good agreement with the ground truth, achieving an overall Dice similarity coefficient of 0.91 ± 0.04 in healthy and glaucomatous eyes. The absolute error of retinal layer boundaries segmentation in the same cases was 4.10 ± 1.25 µm.
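A minimal sketch of the graph-search half of such a pipeline: a dynamic-programming minimum-cost path traced column by column across a boundary cost map (for example, one minus a CNN boundary probability). The cost map, smoothness constraint, and connectivity are illustrative assumptions and do not reproduce the published multi-weights graph search.

```python
# Minimal sketch: dynamic-programming minimum-cost path for one retinal layer
# boundary across a cost map, with a limited row jump between adjacent columns.
import numpy as np

def trace_boundary(cost: np.ndarray, max_jump: int = 2) -> np.ndarray:
    """Return the row index of the minimum-cost left-to-right path per column."""
    rows, cols = cost.shape
    acc = cost.copy()                       # accumulated cost
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = int(np.argmin(acc[lo:hi, c - 1])) + lo
            acc[r, c] += acc[prev, c - 1]
            back[r, c] = prev
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):        # backtrack the stored predecessors
        path[c - 1] = back[path[c], c]
    return path

# Toy cost map with a low-cost band standing in for a layer boundary.
cost = np.ones((60, 80))
cost[np.clip(30 + (np.arange(80) // 10), 0, 59), np.arange(80)] = 0.1
print(trace_boundary(cost)[:10])
```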
Affiliation(s)
- Pengxiao Zang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Jie Wang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Tristan T. Hormel
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Liang Liu
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- David Huang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
21
Mitsios A, Dubis AM, Moosajee M. Choroideremia: from genetic and clinical phenotyping to gene therapy and future treatments. Ther Adv Ophthalmol 2018; 10:2515841418817490. [PMID: 30627697] [PMCID: PMC6311551] [DOI: 10.1177/2515841418817490]
Abstract
Choroideremia is an X-linked inherited chorioretinal dystrophy leading to blindness by late adulthood. Choroideremia is caused by mutations in the CHM gene, which encodes Rab escort protein 1 (REP1), a ubiquitously expressed protein involved in intracellular trafficking and prenylation activity. The exact site of pathogenesis remains unclear, but the disease results in degeneration of the photoreceptors, retinal pigment epithelium, and choroid. Animal and stem cell models have been used to study the molecular defects in choroideremia and to test the effectiveness of treatment interventions. Natural history studies of choroideremia have provided additional insight into the clinical phenotype of the condition and prepared the way for clinical trials aiming to investigate the safety and efficacy of suitable therapies. In this review, we provide a summary of current knowledge on the genetics, pathophysiology, clinical features, and therapeutic strategies that might become available for choroideremia in the future, including gene therapy, stem cell treatment, and small-molecule drugs with nonsense suppression action.
Affiliation(s)
- Andreas Mitsios
- Institute of Ophthalmology, University College London, London, UK
- Adam M Dubis
- Institute of Ophthalmology, University College London, London, UK
- Mariya Moosajee
- Institute of Ophthalmology, University College London, London, UK
22
Guo Y, Camino A, Wang J, Huang D, Hwang TS, Jia Y. MEDnet, a neural network for automated detection of avascular area in OCT angiography. Biomed Opt Express 2018; 9:5147-5158. [PMID: 30460119] [PMCID: PMC6238913] [DOI: 10.1364/boe.9.005147]
Abstract
Screening and assessing diabetic retinopathy (DR) are essential for reducing morbidity associated with diabetes. Macular ischemia is known to correlate with the severity of retinopathy. Recent studies have shown that optical coherence tomography angiography (OCTA), with intrinsic contrast from blood flow motion, is well suited for quantified analysis of the avascular area, which is potentially a useful biomarker in DR. In this study, we propose the first deep learning solution to segment the avascular area in OCTA of DR. The network design consists of a multi-scaled encoder-decoder neural network (MEDnet) to detect the non-perfusion area in 6 × 6 mm2 and in ultra-wide field retinal angiograms. Avascular areas were effectively detected in DR subjects of various disease stages as well as in the foveal avascular zone of healthy subjects.
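A minimal sketch of a multi-scale encoder-decoder segmentation network in the spirit of MEDnet, producing per-pixel perfused/non-perfused logits for an en face angiogram; the layer sizes, single-channel input, and two-class head are illustrative assumptions and do not reproduce the published architecture.

```python
# Minimal sketch: a tiny multi-scale encoder-decoder for pixel-wise
# avascular-area segmentation of an en face OCTA image (not MEDnet itself).
import torch
import torch.nn as nn
import torch.nn.functional as F

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyEncoderDecoder(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = block(1, 16), block(16, 32), block(32, 64)
        self.dec2, self.dec1 = block(64 + 32, 32), block(32 + 16, 16)
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                  # full resolution
        e2 = self.enc2(F.max_pool2d(e1, 2))                # 1/2 resolution
        e3 = self.enc3(F.max_pool2d(e2, 2))                # 1/4 resolution
        d2 = self.dec2(torch.cat([F.interpolate(e3, scale_factor=2), e2], dim=1))
        d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2), e1], dim=1))
        return self.head(d1)            # per-pixel logits: perfused vs non-perfused

net = TinyEncoderDecoder()
octa = torch.randn(1, 1, 128, 128)      # stand-in 6 x 6 mm en face angiogram
print(net(octa).shape)                   # -> torch.Size([1, 2, 128, 128])
```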
Affiliation(s)
- Yukun Guo
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Acner Camino
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Jie Wang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
- David Huang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Thomas S Hwang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA