1
Pennesi ME, Wang YZ, Birch DG. Deep learning aided measurement of outer retinal layer metrics as biomarkers for inherited retinal degenerations: opportunities and challenges. Curr Opin Ophthalmol 2024; 35:447-454. PMID: 39259656. DOI: 10.1097/icu.0000000000001088.
Abstract
PURPOSE OF REVIEW: To summarize currently available retinal imaging and visual function testing methods for assessing inherited retinal degenerations (IRDs), with emphasis on the application of deep learning (DL) approaches to assist the determination of structural biomarkers for IRDs. RECENT FINDINGS: Clinical trials for IRDs have created a need to discover effective biomarkers as endpoints, and DL is increasingly applied to process retinal images and detect disease-related structural changes. SUMMARY: Assessing photoreceptor loss is a direct way to evaluate IRDs. Outer retinal layer structures, including the outer nuclear layer, ellipsoid zone, photoreceptor outer segment, and RPE, are potential structural biomarkers for IRDs. More work may be needed on the relationship between structure and function.
Affiliation(s)
- Mark E Pennesi
- Retina Foundation of the Southwest, Dallas, Texas
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Yi-Zhong Wang
- Retina Foundation of the Southwest, Dallas, Texas
- Department of Ophthalmology, University of Texas Southwestern Medical Center at Dallas, Dallas, Texas, USA
- David G Birch
- Retina Foundation of the Southwest, Dallas, Texas
- Department of Ophthalmology, University of Texas Southwestern Medical Center at Dallas, Dallas, Texas, USA
2
Mishra Z, Wang Z, Xu E, Xu S, Majid I, Sadda SR, Hu ZJ. Recurrent and Concurrent Prediction of Longitudinal Progression of Stargardt Atrophy and Geographic Atrophy. medRxiv [Preprint] 2024:2024.02.11.24302670. PMID: 38405807. PMCID: PMC10888984. DOI: 10.1101/2024.02.11.24302670.
Abstract
Stargardt disease and age-related macular degeneration are the leading causes of blindness in the juvenile and geriatric populations, respectively. The formation of atrophic regions of the macula is a hallmark of the end stages of both diseases. The progression of these diseases is tracked using various imaging modalities, two of the most common being fundus autofluorescence (FAF) imaging and spectral-domain optical coherence tomography (SD-OCT). This study investigates the use of longitudinal FAF and SD-OCT imaging data (months 0, 6, 12, and 18) for predictive modelling of future atrophy in Stargardt disease and geographic atrophy. To this end, we developed a set of novel deep convolutional neural networks, termed ReConNet, which combine recurrent network units for longitudinal prediction with concurrent learning of ensemble network units, taking advantage of improved retinal layer features beyond mean intensity features. Using FAF images, the network achieved mean (± standard deviation, SD) and median Dice coefficients of 0.895 (± 0.086) and 0.922 for Stargardt atrophy, and 0.864 (± 0.113) and 0.893 for geographic atrophy. Using SD-OCT images for Stargardt atrophy, it achieved mean and median Dice coefficients of 0.882 (± 0.101) and 0.906, respectively. When predicting only the interval growth of the atrophic lesions from FAF images, mean (± SD) and median Dice coefficients of 0.557 (± 0.094) and 0.559 were achieved for Stargardt atrophy, and 0.612 (± 0.089) and 0.601 for geographic atrophy. Prediction performance with OCT images was comparable to that with FAF, opening a more efficient and practical avenue for assessing atrophy progression in clinical trials and retina clinics beyond the widely used FAF. These results are encouraging for high-performance interval-growth prediction once more frequent or longer-term longitudinal data become available, which is a pressing task for the next step of this ongoing research.
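The Dice coefficients reported in this abstract measure overlap between predicted and ground-truth atrophy masks. A minimal pure-Python sketch of the standard definition (the toy masks are illustrative, not study data):

```python
def dice_coefficient(pred, truth):
    """Dice similarity of two binary masks: 2*|A∩B| / (|A| + |B|).

    `pred` and `truth` are flat 0/1 lists over the same pixels."""
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    denom = sum(pred) + sum(truth)
    # define two empty masks as perfect agreement
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# toy 4x4 atrophy masks flattened row by row
pred  = [1, 1, 0, 0,
         1, 1, 0, 0,
         0, 0, 0, 0,
         0, 0, 0, 0]
truth = [1, 1, 0, 0,
         1, 0, 0, 0,
         0, 0, 0, 0,
         0, 0, 0, 0]
score = dice_coefficient(pred, truth)  # 2*3 / (4+3) ≈ 0.857
```

A Dice score of 1.0 indicates identical masks; the interval-growth scores near 0.56-0.61 above reflect the harder task of overlapping only the newly grown lesion area.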
Affiliation(s)
- Zubin Mishra
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA 91103, USA
- Case Western Reserve University School of Medicine, Cleveland, OH 44106, USA
- Ziyuan Wang
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA 91103, USA
- The University of California, Los Angeles, CA 90095, USA
- Emily Xu
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA 91103, USA
- Sophia Xu
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA 91103, USA
- Iyad Majid
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA 91103, USA
- SriniVas R. Sadda
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA 91103, USA
- The University of California, Los Angeles, CA 90095, USA
- Zhihong Jewel Hu
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA 91103, USA
3
Daich Varela M, Sen S, De Guimaraes TAC, Kabiri N, Pontikos N, Balaskas K, Michaelides M. Artificial intelligence in retinal disease: clinical application, challenges, and future directions. Graefes Arch Clin Exp Ophthalmol 2023; 261:3283-3297. PMID: 37160501. PMCID: PMC10169139. DOI: 10.1007/s00417-023-06052-x.
Abstract
Retinal diseases are a leading cause of blindness in developed countries, accounting for the largest share of visually impaired children, working-age adults (inherited retinal disease), and elderly individuals (age-related macular degeneration). These conditions need specialised clinicians to interpret multimodal retinal imaging, with diagnosis and intervention potentially delayed. With an increasing and ageing population, this is becoming a global health priority. One solution is the development of artificial intelligence (AI) software to facilitate rapid data processing. Herein, we review research offering decision support for the diagnosis, classification, monitoring, and treatment of retinal disease using AI. We have prioritised diabetic retinopathy, age-related macular degeneration, inherited retinal disease, and retinopathy of prematurity. There is cautious optimism that these algorithms will be integrated into routine clinical practice to facilitate access to vision-saving treatments, improve efficiency of healthcare systems, and assist clinicians in processing the ever-increasing volume of multimodal data, thereby also liberating time for doctor-patient interaction and co-development of personalised management plans.
Affiliation(s)
- Malena Daich Varela
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
- Nikolas Pontikos
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
- Michel Michaelides
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
4
Schmetterer L, Scholl H, Garhöfer G, Janeschitz-Kriegl L, Corvi F, Sadda SR, Medeiros FA. Endpoints for clinical trials in ophthalmology. Prog Retin Eye Res 2023; 97:101160. PMID: 36599784. DOI: 10.1016/j.preteyeres.2022.101160.
Abstract
With the identification of novel targets, the number of interventional clinical trials in ophthalmology has increased. Visual acuity has long been considered the gold-standard endpoint for clinical trials, but in recent years it has become evident that other endpoints are required for many indications, including geographic atrophy and inherited retinal disease. In glaucoma, the currently available drugs were approved based on their capacity to lower intraocular pressure (IOP). Some recent findings do, however, indicate that at the same level of IOP reduction, not all drugs have the same effect on visual field progression. For neuroprotection trials in glaucoma, novel surrogate endpoints are required, which may include functional or structural parameters or a combination of both. A number of potential surrogate endpoints for ophthalmology clinical trials have been identified, but their validation is complicated and requires solid scientific evidence. In this article we summarize candidates for clinical endpoints in ophthalmology, with a focus on retinal disease and glaucoma. Functional and structural biomarkers, as well as quality-of-life measures, are discussed, and their potential to serve as endpoints in pivotal trials is critically evaluated.
Affiliation(s)
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE), Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore; School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore; Department of Clinical Pharmacology, Medical University Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Hendrik Scholl
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland; Department of Ophthalmology, University of Basel, Basel, Switzerland
- Gerhard Garhöfer
- Department of Clinical Pharmacology, Medical University Vienna, Vienna, Austria
- Lucas Janeschitz-Kriegl
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland; Department of Ophthalmology, University of Basel, Basel, Switzerland
- Federico Corvi
- Eye Clinic, Department of Biomedical and Clinical Sciences "Luigi Sacco", University of Milan, Italy
- SriniVas R Sadda
- Doheny Eye Institute, Los Angeles, CA, USA; Department of Ophthalmology, David Geffen School of Medicine at University of California, Los Angeles, CA, USA
- Felipe A Medeiros
- Vision, Imaging and Performance Laboratory, Department of Ophthalmology, Duke Eye Center, Duke University, Durham, NC, USA
5
Mishra Z, Wang Z, Sadda SR, Hu Z. Using Ensemble OCT-Derived Features beyond Intensity Features for Enhanced Stargardt Atrophy Prediction with Deep Learning. Appl Sci (Basel) 2023; 13:8555. PMID: 39086558. PMCID: PMC11288976. DOI: 10.3390/app13148555.
Abstract
Stargardt disease is the most common form of juvenile-onset macular dystrophy. Spectral-domain optical coherence tomography (SD-OCT) imaging provides an opportunity to directly measure changes to retinal layers due to Stargardt atrophy. Generally, atrophy segmentation and prediction are conducted using mean intensity feature maps generated from the relevant retinal layers. In this paper, we report an approach that uses advanced OCT-derived features, beyond the commonly used mean intensity features, to enhance the prediction of Stargardt atrophy with an ensemble deep learning neural network. With all the relevant retinal layers, this architecture achieves a median Dice coefficient of 0.830 for six-month predictions and 0.828 for twelve-month predictions, a significant improvement over a network using only mean intensity, which achieved Dice coefficients of 0.744 and 0.762 for six- and twelve-month predictions, respectively. Significant differences in performance were also observed when using feature maps generated from different layers of the retina. This study shows promising results for using multiple OCT-derived features beyond intensity to assess the prognosis of Stargardt disease and quantify the rate of progression.
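The "mean intensity feature maps" this abstract builds on reduce each A-scan of an OCT B-scan to the average intensity between two segmented layer boundaries, yielding an en face feature per column. A minimal pure-Python sketch under illustrative assumptions (the array, boundary names, and toy values are not from the paper):

```python
def mean_intensity_profile(bscan, top, bottom):
    """En face mean-intensity feature for one OCT B-scan.

    For each A-scan (column), average the intensities between the segmented
    top boundary (inclusive) and bottom boundary (exclusive). `bscan` is a
    row-major 2D list of intensities; `top` and `bottom` hold one row index
    per column."""
    profile = []
    for col in range(len(top)):
        vals = [bscan[row][col] for row in range(top[col], bottom[col])]
        profile.append(sum(vals) / len(vals))
    return profile

# toy 4-row x 2-column B-scan with per-column layer boundaries
bscan  = [[10, 20],
          [30, 40],
          [50, 60],
          [70, 80]]
top    = [1, 0]   # first row inside the layer, per column
bottom = [3, 2]   # first row below the layer, per column
profile = mean_intensity_profile(bscan, top, bottom)  # [40.0, 30.0]
```

Repeating this over every B-scan of a volume produces the 2D en face feature map; the paper's contribution is to add further OCT-derived features beyond this mean.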
Affiliation(s)
- Zubin Mishra
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA 91103, USA
- School of Medicine, Case Western Reserve University, Cleveland, OH 44106, USA
- Ziyuan Wang
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA 91103, USA
- Electrical and Computer Engineering, University of California, Los Angeles, CA 90095, USA
- SriniVas R. Sadda
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA 91103, USA
- Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
- Zhihong Hu
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA 91103, USA
6
Wang X, Li R, Chen J, Han D, Wang M, Xiong H, Ding W, Zheng Y, Xiong K, Zeng Y. Choroidal vascularity index (CVI)-Net-based automatic assessment of diabetic retinopathy severity using CVI in optical coherence tomography images. J Biophotonics 2023; 16:e202200370. PMID: 36633529. DOI: 10.1002/jbio.202200370.
Abstract
A deep learning model called choroidal vascularity index (CVI)-Net is proposed to automatically segment the choroid layer and its vessels in overall optical coherence tomography (OCT) scans. Clinical parameters are then automatically quantified to determine structural and vascular changes in the choroid with the progression of diabetic retinopathy (DR) severity. The study includes 65 eyes consisting of 34 with proliferative DR (PDR), 17 with nonproliferative DR (NPDR), and 14 healthy controls from two OCT systems. On a dataset of 396 OCT B-scan images with manually annotated ground truths, overall Dice coefficients of 96.6 ± 1.5 and 89.1 ± 3.1 are obtained by CVI-Net for the choroid layer and vessel segmentation, respectively. The mean CVI values among the normal, NPDR, and PDR groups are consistent with reported outcomes. Statistical results indicate that CVI shows a significant negative correlation with DR severity level, and this correlation is independent of changes in other physiological parameters.
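Once the choroid layer and its vessels are segmented as above, the choroidal vascularity index reduces to a ratio of areas. A minimal sketch of the standard CVI definition (luminal area over total choroidal area); the flat masks and values are illustrative, not code or data from the paper:

```python
def choroidal_vascularity_index(choroid_mask, vessel_mask):
    """CVI = luminal (vessel) area / total choroidal area.

    Masks are flat 0/1 lists over the same B-scan pixels; only vessel
    pixels that lie inside the choroid count as luminal."""
    total = sum(choroid_mask)
    luminal = sum(1 for c, v in zip(choroid_mask, vessel_mask) if c and v)
    return luminal / total if total else 0.0

# toy masks: 10 choroid pixels, 6 of them vessel lumen
choroid = [1] * 10 + [0] * 2
vessel  = [1] * 6 + [0] * 6
cvi = choroidal_vascularity_index(choroid, vessel)  # 0.6
```

In practice the areas come from pixel counts in the CVI-Net segmentations (scaled by pixel size), and the study's finding is that this ratio decreases as DR severity increases.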
Affiliation(s)
- Xuehua Wang
- Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Rui Li
- Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Junyan Chen
- Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Dingan Han
- Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Mingyi Wang
- Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Honglian Xiong
- Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Wenzheng Ding
- Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Yixu Zheng
- Department of Ophthalmology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Ke Xiong
- Department of Ophthalmology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Yaguang Zeng
- Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
7
Nagasato D, Sogawa T, Tanabe M, Tabuchi H, Numa S, Oishi A, Ohashi Ikeda H, Tsujikawa A, Maeda T, Takahashi M, Ito N, Miura G, Shinohara T, Egawa M, Mitamura Y. Estimation of Visual Function Using Deep Learning From Ultra-Widefield Fundus Images of Eyes With Retinitis Pigmentosa. JAMA Ophthalmol 2023; 141:305-313. PMID: 36821134. PMCID: PMC9951103. DOI: 10.1001/jamaophthalmol.2022.6393.
Abstract
Importance: There is no widespread effective treatment to halt the progression of retinitis pigmentosa. Consequently, adequate assessment and estimation of residual visual function are important clinically. Objective: To examine whether deep learning can accurately estimate the visual function of patients with retinitis pigmentosa by using ultra-widefield fundus images obtained on concurrent visits. Design, Setting, and Participants: Data for this multicenter, retrospective, cross-sectional study were collected between January 1, 2012, and December 31, 2018. The study included 695 consecutive patients with retinitis pigmentosa who were examined at 5 institutions. Each of the 3 types of input images (ultra-widefield pseudocolor images, ultra-widefield fundus autofluorescence images, and both together) was paired with 1 of the 31 types of ensemble models constructed from 5 deep learning models (Visual Geometry Group-16, Residual Network-50, InceptionV3, DenseNet121, and EfficientNetB0). We used 848, 212, and 214 images for the training, validation, and testing data, respectively. All data from 1 institution were used as the independent testing data. Data analysis was performed from June 7, 2021, to December 5, 2022. Main Outcomes and Measures: The mean deviation on the Humphrey field analyzer, central retinal sensitivity, and best-corrected visual acuity were estimated. The image type-ensemble model combination that yielded the smallest mean absolute error was defined as the model with the best estimation accuracy. After removing the bias of including both eyes with a generalized linear mixed model, correlations between the actual values of the testing data and the values estimated by the best-accuracy model were examined by calculating standardized regression coefficients and P values. Results: The study included 1274 eyes of 695 patients. A total of 385 patients were female (55.4%), and the mean (SD) age was 53.9 (17.2) years. Among the 3 types of images, the model using ultra-widefield fundus autofluorescence images alone provided the best estimation accuracy for mean deviation, central sensitivity, and visual acuity. Standardized regression coefficients were 0.684 (95% CI, 0.567-0.802) for the mean deviation estimation, 0.697 (95% CI, 0.590-0.804) for the central sensitivity estimation, and 0.309 (95% CI, 0.187-0.430) for the visual acuity estimation (all P < .001). Conclusions and Relevance: Results of this study suggest that estimating visual function from ultra-widefield fundus autofluorescence images using deep learning might help assess disease progression in retinitis pigmentosa objectively, and that deep learning models might monitor progression efficiently during follow-up.
Affiliation(s)
- Daisuke Nagasato
- Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan; Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Takahiro Sogawa
- Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan
- Mao Tanabe
- Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan
- Hitoshi Tabuchi
- Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan; Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Shogo Numa
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Akio Oishi
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan; Department of Ophthalmology and Visual Sciences, Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki, Japan
- Hanako Ohashi Ikeda
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Akitaka Tsujikawa
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Tadao Maeda
- Research Center, Kobe City Eye Hospital, Kobe, Japan; Laboratory for Retinal Regeneration, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan
- Masayo Takahashi
- Research Center, Kobe City Eye Hospital, Kobe, Japan; Laboratory for Retinal Regeneration, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan; Vision Care Inc, Kobe, Japan
- Nana Ito
- Department of Ophthalmology and Visual Science, Chiba University Graduate School of Medicine, Chiba, Japan
- Gen Miura
- Department of Ophthalmology and Visual Science, Chiba University Graduate School of Medicine, Chiba, Japan
- Terumi Shinohara
- Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
- Mariko Egawa
- Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
- Yoshinori Mitamura
- Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
8
Heath Jeffery RC, Thompson JA, Lamey TM, McLaren TL, De Roach JN, McAllister IL, Constable IJ, Chen FK. Longitudinal Analysis of Functional and Structural Outcome Measures in PRPH2-Associated Retinal Dystrophy. Ophthalmol Retina 2023; 7:81-91. PMID: 35792359. DOI: 10.1016/j.oret.2022.06.017.
Abstract
PURPOSE: To establish disease progression rates in total lesion size (TLS), decreased autofluorescence (DAF) area, total macular volume (TMV), and mean macular sensitivity (MMS) in PRPH2-associated retinal dystrophy. DESIGN: Single-center, retrospective chart review. PARTICIPANTS: Patients with heterozygous pathogenic or likely pathogenic PRPH2 variants. METHODS: Patients who underwent serial ultrawide-field (UWF) fundus autofluorescence (FAF), OCT, and Macular Integrity Assessment microperimetry with at least 1 year of follow-up were included. Linear correlation was performed in the eyes of all patients to determine the rate of change over time. MAIN OUTCOME MEASURES: Changes in TLS, DAF area, TMV, and MMS. RESULTS: Twelve patients (mean age, 55 years) from 10 unrelated families attended 100 clinic visits spanning a mean (SD) of 4.7 (2.0) years. Mean (SD) TLS and DAF radius expansion were 0.14 (0.12) and 0.10 (0.08) mm/year, respectively. Mean (SD) TMV change was -0.071 (0.040) mm³/year, with no interocular difference (P = 0.20) and strong interocular correlation (r² = 0.88, P < 0.01). Mean (SD) MMS change was -0.10 (1.25) dB/year; MMS declined in 4 patients and improved in 6. MMS was subnormal despite a TMV within the normal range. CONCLUSIONS: Serial measurements of UWF-FAF-derived TLS and DAF showed slow expansion. TMV might be a more sensitive measure than MMS in detecting disease progression.
Affiliation(s)
- Rachael C Heath Jeffery
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Jennifer A Thompson
- Department of Medical Technology and Physics, Australian Inherited Retinal Disease Registry and DNA Bank, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Tina M Lamey
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Australia; Department of Medical Technology and Physics, Australian Inherited Retinal Disease Registry and DNA Bank, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Terri L McLaren
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Australia; Department of Medical Technology and Physics, Australian Inherited Retinal Disease Registry and DNA Bank, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- John N De Roach
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Australia; Department of Medical Technology and Physics, Australian Inherited Retinal Disease Registry and DNA Bank, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Ian L McAllister
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Australia
- Ian J Constable
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Australia
- Fred K Chen
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia; Department of Medical Technology and Physics, Australian Inherited Retinal Disease Registry and DNA Bank, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia; Department of Ophthalmology, University of Melbourne, East Melbourne, Victoria, Australia
9
Parra-Mora E, da Silva Cruz LA. LOCTseg: A lightweight fully convolutional network for end-to-end optical coherence tomography segmentation. Comput Biol Med 2022; 150:106174. PMID: 36252364. DOI: 10.1016/j.compbiomed.2022.106174.
Abstract
This article presents a novel end-to-end automatic solution for semantic segmentation of optical coherence tomography (OCT) images. OCT is a non-invasive imaging technology widely used in clinical practice due to its ability to acquire high-resolution cross-sectional images of the ocular fundus. Due to the large variability of the retinal structures, OCT segmentation is usually carried out manually and requires expert knowledge. This study introduces a novel fully convolutional network (FCN) architecture, designated LOCTSeg, for end-to-end automatic segmentation of diagnostic markers in OCT B-scans. LOCTSeg is a lightweight deep FCN optimized to balance performance and efficiency. Unlike state-of-the-art FCNs used in image segmentation, LOCTSeg achieves competitive inference speed without sacrificing segmentation accuracy. LOCTSeg is evaluated on two publicly available benchmarking datasets: (1) the annotated retinal OCT image database (AROI), comprising 1136 images, and (2) the healthy controls and multiple sclerosis lesions (HCMS) dataset, consisting of 1715 images. Moreover, we evaluated LOCTSeg on a private dataset of 250 OCT B-scans acquired from patients with epiretinal membrane (ERM) and healthy patients. The results demonstrate empirically the effectiveness of the proposed algorithm, which improves the state-of-the-art Dice score from 69% to 73% on AROI and from 91% to 92% on HCMS. Furthermore, LOCTSeg outperforms comparable lightweight FCNs' Dice scores by margins of 4% to 15% on ERM segmentation.
Affiliation(s)
- Esther Parra-Mora
- Department of Electrical and Computer Engineering, University of Coimbra, Coimbra, 3030-290, Portugal; Instituto de Telecomunicações, Coimbra, 3030-290, Portugal
- Luís A da Silva Cruz
- Department of Electrical and Computer Engineering, University of Coimbra, Coimbra, 3030-290, Portugal; Instituto de Telecomunicações, Coimbra, 3030-290, Portugal
11
A comparison of deep learning U-Net architectures for posterior segment OCT retinal layer segmentation. Sci Rep 2022; 12:14888. PMID: 36050364. PMCID: PMC9437058. DOI: 10.1038/s41598-022-18646-2.
Abstract
Deep learning methods have enabled a fast, accurate and automated approach for retinal layer segmentation in posterior segment OCT images. Due to the success of semantic segmentation methods adopting the U-Net, a wide range of variants and improvements have been developed and applied to OCT segmentation. Unfortunately, the relative performance of these methods is difficult to ascertain for OCT retinal layer segmentation due to a lack of comprehensive comparative studies, and a lack of proper matching between networks in previous comparisons, as well as the use of different OCT datasets between studies. In this paper, a detailed and unbiased comparison is performed between eight U-Net architecture variants across four different OCT datasets from a range of different populations, ocular pathologies, acquisition parameters, instruments and segmentation tasks. The U-Net architecture variants evaluated include some which have not been previously explored for OCT segmentation. Using the Dice coefficient to evaluate segmentation performance, minimal differences were noted between most of the tested architectures across the four datasets. Using an extra convolutional layer per pooling block gave a small improvement in segmentation performance for all architectures across all four datasets. This finding highlights the importance of careful architecture comparison (e.g. ensuring networks are matched using an equivalent number of layers) to obtain a true and unbiased performance assessment of fully semantic models. Overall, this study demonstrates that the vanilla U-Net is sufficient for OCT retinal layer segmentation and that state-of-the-art methods and other architectural changes are potentially unnecessary for this particular task, especially given the associated increased complexity and slower speed for the marginal performance gains observed. 
Given the U-Net model and its variants represent one of the most commonly applied image segmentation methods, the consistent findings across several datasets here are likely to translate to many other OCT datasets and studies. This will provide significant value by saving time and cost in experimentation and model development as well as reduced inference time in practice by selecting simpler models.
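The Dice coefficient used as the segmentation metric above is straightforward to compute; a minimal sketch (the function name and toy masks are illustrative, not from the study):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# toy 1-D "layer" masks: 2 overlapping pixels, 3 + 2 total pixels
a = [1, 1, 1, 0, 0]
b = [1, 1, 0, 0, 0]
print(dice_coefficient(a, b))  # 0.8
```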
12
Charng J, Alam K, Swartz G, Kugelman J, Alonso-Caneiro D, Mackey DA, Chen FK. Deep learning: applications in retinal and optic nerve diseases. Clin Exp Optom 2022:1-10. [PMID: 35999058 DOI: 10.1080/08164622.2022.2111201]
Abstract
Deep learning (DL) represents a paradigm-shifting, burgeoning field of research with emerging clinical applications in optometry. Unlike traditional programming, which relies on human-set specific rules, DL works by exposing the algorithm to a large amount of annotated data and allowing the software to develop its own set of rules (i.e. learn) by adjusting the parameters inside the model (network) during a training process in order to complete the task on its own. One major limitation of traditional programming is that, with complex tasks, it may require an extensive set of rules to accurately complete the assignment. Additionally, traditional programming can be susceptible to human bias from programmer experience. With the dramatic increase in the amount and complexity of clinical data, DL has been utilised to automate data analysis and thus to assist clinicians in patient management. This review will present the latest advances in DL for managing posterior eye diseases, as well as DL-based solutions for patients with vision loss.
Affiliation(s)
- Jason Charng
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
| | - Khyber Alam
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
| | - Gavin Swartz
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
| | - Jason Kugelman
- School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
| | - David Alonso-Caneiro
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
| | - David A Mackey
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
| | - Fred K Chen
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia; Department of Ophthalmology, Royal Perth Hospital, Perth, Western Australia, Australia
| |
13
OCT Retinal and Choroidal Layer Instance Segmentation Using Mask R-CNN. Sensors 2022; 22:2016. [PMID: 35271165 PMCID: PMC8914986 DOI: 10.3390/s22052016]
Abstract
Optical coherence tomography (OCT) of the posterior segment of the eye provides high-resolution cross-sectional images that allow visualization of individual layers of the posterior eye tissue (the retina and choroid), facilitating the diagnosis and monitoring of ocular diseases and abnormalities. The manual analysis of retinal OCT images is a time-consuming task; therefore, the development of automatic image analysis methods is important for both research and clinical applications. In recent years, deep learning methods have emerged as an alternative method to perform this segmentation task. A large number of the proposed segmentation methods in the literature focus on the use of encoder–decoder architectures, such as U-Net, while other architectural modalities have not received as much attention. In this study, the application of an instance segmentation method based on region proposal architecture, called the Mask R-CNN, is explored in depth in the context of retinal OCT image segmentation. The importance of adequate hyper-parameter selection is examined, and the performance is compared with commonly used techniques. The Mask R-CNN provides a suitable method for the segmentation of OCT images with low segmentation boundary errors and high Dice coefficients, with segmentation performance comparable with the commonly used U-Net method. The Mask R-CNN has the advantage of a simpler extraction of the boundary positions, especially avoiding the need for a time-consuming graph search method to extract boundaries, which reduces the inference time by 2.5 times compared to U-Net, while segmenting seven retinal layers.
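The "simpler extraction of the boundary positions" the abstract credits to the instance-mask approach can be illustrated by reading off the first foreground row in each A-scan column of a predicted layer mask, rather than running a graph search; a hedged sketch (function name and toy data are hypothetical):

```python
import numpy as np

def boundary_from_mask(layer_mask):
    """Top boundary position (row index) per column of a binary layer mask;
    NaN where the layer is absent in that column."""
    _, cols = layer_mask.shape
    boundary = np.full(cols, np.nan)
    for c in range(cols):
        hits = np.flatnonzero(layer_mask[:, c])
        if hits.size:
            boundary[c] = hits[0]  # uppermost pixel of the layer
    return boundary

# toy 6x4 B-scan mask: layer occupies rows 2-3 in columns 1-2
mask = np.zeros((6, 4), dtype=bool)
mask[2:4, 1:3] = True
print(boundary_from_mask(mask))  # columns 1 and 2 -> 2.0, others NaN
```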
14
Muller J, Alonso-Caneiro D, Read SA, Vincent SJ, Collins MJ. Application of Deep Learning Methods for Binarization of the Choroid in Optical Coherence Tomography Images. Transl Vis Sci Technol 2022; 11:23. [PMID: 35157030 PMCID: PMC8857621 DOI: 10.1167/tvst.11.2.23]
Abstract
Purpose The purpose of this study was to develop a deep learning model for automatic binarization of the choroidal tissue, separating choroidal blood vessels from nonvascular stromal tissue, in optical coherence tomography (OCT) images from healthy young subjects. Methods OCT images from an observational longitudinal study of 100 children were used for training, validation, and testing of 5 fully semantic networks, which provided a binarized output of the choroid. These outputs were compared with ground truth images, generated from a local binarization technique after manually optimizing the analysis window size for each individual image. The performance was evaluated using accuracy and repeatability metrics. The methods were also compared with a fixed window size local binarization technique, which has been commonly used previously. Results The tested deep learning methods provided good performance in terms of accuracy and repeatability, with the U-Net and SegNet networks showing >96% accuracy. All methods displayed a high level of repeatability relative to the ground truth. For analysis of the choroidal vascularity index (a commonly used metric derived from the binarized image), SegNet showed the closest agreement with the ground truth and high repeatability. The fixed window size technique showed reduced accuracy compared to the other methods. Conclusions Fully semantic networks such as U-Net and SegNet displayed excellent performance for the binarization task. These methods provide a useful approach for clinical and research applications of deep learning tools for the binarization of the choroid in OCT images. Translational Relevance Deep learning models provide a novel, robust solution to automatically binarize the choroidal tissue in OCT images.
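The choroidal vascularity index mentioned in the Results is conventionally the ratio of luminal (vessel) area to total choroidal area in the binarized image; a minimal sketch under that assumption (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def choroidal_vascularity_index(binary_choroid, choroid_mask):
    """CVI = luminal (vessel) pixel count / total choroidal pixel count.

    binary_choroid: boolean array, True where binarization marks luminal pixels.
    choroid_mask:   boolean array, True inside the segmented choroid.
    """
    luminal = np.logical_and(binary_choroid, choroid_mask).sum()
    total = choroid_mask.sum()
    return luminal / total if total else float("nan")

# toy 4x4 choroidal region: 16 pixels, 6 of them marked luminal
mask = np.ones((4, 4), dtype=bool)
vessels = np.zeros((4, 4), dtype=bool)
vessels[:2, :3] = True  # 6 luminal pixels
print(choroidal_vascularity_index(vessels, mask))  # 0.375
```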
Affiliation(s)
- Joshua Muller
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Kelvin Grove, Queensland, Australia
| | - David Alonso-Caneiro
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Kelvin Grove, Queensland, Australia
| | - Scott A. Read
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Kelvin Grove, Queensland, Australia
| | - Stephen J. Vincent
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Kelvin Grove, Queensland, Australia
| | - Michael J. Collins
- Queensland University of Technology (QUT), Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Kelvin Grove, Queensland, Australia
| |
15
Heath Jeffery RC, Thompson JA, Lo J, Lamey TM, McLaren TL, McAllister IL, Constable IJ, De Roach JN, Chen FK. Genotype-Specific Lesion Growth Rates in Stargardt Disease. Genes (Basel) 2021; 12:1981. [PMID: 34946930 PMCID: PMC8701386 DOI: 10.3390/genes12121981]
Abstract
Reported growth rates (GR) of atrophic lesions in Stargardt disease (STGD1) vary widely. In the present study, we report the longitudinal natural history of patients with confirmed biallelic ABCA4 mutations from five genotype groups: c.6079C>T, c.[2588G>C;5603A>T], c.3113C>T, c.5882G>A and c.5603A>T. Fundus autofluorescence (AF) 30° × 30° images were manually segmented for boundaries of definitely decreased autofluorescence (DDAF). The primary outcome was the effective radius GR across the five genotype groups. The age of DDAF formation in each eye was calculated using the x-intercept of the DDAF effective radius against age. Discordance between age at DDAF formation and symptom onset was compared. A total of 75 eyes from 39 STGD1 patients (17 male [44%]; mean ± SD age 45 ± 19 years; range 21-86) were recruited. Patients with c.3113C>T or c.6079C>T had a significantly faster effective radius GR, at 0.17 mm/year (95% CI 0.12-0.22 and 0.14-0.21, respectively; both p < 0.001), than patients harbouring c.5882G>A, at 0.06 mm/year (95% CI 0.03-0.09). Future clinical trial design should consider the effect of genotype on the effective radius GR and the timing of DDAF formation relative to symptom onset.
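The effective radius analysis described above converts DDAF area to a radius (sqrt(area/pi)), fits it linearly against age, and reads the age of DDAF formation off the x-intercept; a sketch of that calculation (the toy longitudinal series is invented for illustration):

```python
import numpy as np

def effective_radius_model(ages, ddaf_areas_mm2):
    """Fit effective radius sqrt(area/pi) against age.

    Returns (growth_rate_mm_per_year, age_at_ddaf_formation), where the
    formation age is the x-intercept of the linear fit.
    """
    radii = np.sqrt(np.asarray(ddaf_areas_mm2) / np.pi)
    slope, intercept = np.polyfit(ages, radii, 1)
    return slope, -intercept / slope

# toy longitudinal series: radius grows 0.15 mm/year starting at age 20
ages = [25, 30, 35, 40]
radii = [0.75, 1.5, 2.25, 3.0]
areas = [np.pi * r ** 2 for r in radii]
gr, onset = effective_radius_model(ages, areas)
print(round(gr, 2), round(onset, 1))  # 0.15 20.0
```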
Affiliation(s)
- Rachael C. Heath Jeffery
- Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), The University of Western Australia, Nedlands, WA 6009, Australia
- Department of Ophthalmology, Royal Perth Hospital, Perth, WA 6000, Australia
| | - Jennifer A. Thompson
- Australian Inherited Retinal Disease Registry and DNA Bank, Department of Medical Technology and Physics, Sir Charles Gairdner Hospital, Nedlands, WA 6009, Australia
| | - Johnny Lo
- School of Science, Edith Cowan University, Joondalup, WA 6027, Australia
| | - Tina M. Lamey
- Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), The University of Western Australia, Nedlands, WA 6009, Australia
- Australian Inherited Retinal Disease Registry and DNA Bank, Department of Medical Technology and Physics, Sir Charles Gairdner Hospital, Nedlands, WA 6009, Australia
| | - Terri L. McLaren
- Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), The University of Western Australia, Nedlands, WA 6009, Australia
- Australian Inherited Retinal Disease Registry and DNA Bank, Department of Medical Technology and Physics, Sir Charles Gairdner Hospital, Nedlands, WA 6009, Australia
| | - Ian L. McAllister
- Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), The University of Western Australia, Nedlands, WA 6009, Australia
| | - Ian J. Constable
- Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), The University of Western Australia, Nedlands, WA 6009, Australia
| | - John N. De Roach
- Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), The University of Western Australia, Nedlands, WA 6009, Australia
- Australian Inherited Retinal Disease Registry and DNA Bank, Department of Medical Technology and Physics, Sir Charles Gairdner Hospital, Nedlands, WA 6009, Australia
| | - Fred K. Chen
- Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), The University of Western Australia, Nedlands, WA 6009, Australia
- Department of Ophthalmology, Royal Perth Hospital, Perth, WA 6000, Australia
- Australian Inherited Retinal Disease Registry and DNA Bank, Department of Medical Technology and Physics, Sir Charles Gairdner Hospital, Nedlands, WA 6009, Australia
| |
16
Wang YZ, Wu W, Birch DG. A Hybrid Model Composed of Two Convolutional Neural Networks (CNNs) for Automatic Retinal Layer Segmentation of OCT Images in Retinitis Pigmentosa (RP). Transl Vis Sci Technol 2021; 10:9. [PMID: 34751740 PMCID: PMC8590180 DOI: 10.1167/tvst.10.13.9]
Abstract
Purpose We propose and evaluate a hybrid model composed of two convolutional neural networks (CNNs) with different architectures for automatic segmentation of retina layers in spectral domain optical coherence tomography (SD-OCT) B-scans of retinitis pigmentosa (RP). Methods The hybrid model consisted of a U-Net for initial semantic segmentation and a sliding-window (SW) CNN for refinement by correcting the segmentation errors of U-Net. The U-Net construction followed Ronneberger et al. (2015) with an input image size of 256 × 32. The SW model was similar to our previously reported approach. Training image patches were generated from 480 horizontal midline B-scans obtained from 220 patients with RP and 20 normal participants. Testing images were 160 midline B-scans from a separate group of 80 patients with RP. The Spectralis segmentation of B-scans was manually corrected for the boundaries of the inner limiting membrane, inner nuclear layer, ellipsoid zone (EZ), retinal pigment epithelium, and Bruch's membrane by one grader for the training set and two for the testing set. The trained U-Net and SW, as well as the hybrid model, were used to classify all pixels in the testing B-scans. Bland–Altman and correlation analyses were conducted to compare layer boundary lines, EZ width, and photoreceptor outer segment (OS) length and area determined by the models to those by human graders. Results The mean times to classify a B-scan image were 0.3, 65.7, and 2.4 seconds for U-Net, SW, and the hybrid model, respectively. The mean ± SD accuracies to segment retinal layers were 90.8% ± 4.8% and 90.7% ± 4.0% for U-Net and SW, respectively. The hybrid model improved mean ± SD accuracy to 91.5% ± 4.8% (P < 0.039 vs. U-Net), resulting in an improvement in layer boundary segmentation as revealed by Bland–Altman analyses. 
EZ width, OS length, and OS area measured by the models were highly correlated with those measured by the human graders (r > 0.95 for EZ width; r > 0.83 for OS length; r > 0.97 for OS area; P < 0.05). The hybrid model further improved the performance of measuring retinal layer thickness by correcting misclassification of retinal layers from U-Net. Conclusions While the performances of U-Net and the SW model were comparable in delineating various retinal layers, U-Net was much faster than the SW model to segment B-scan images. The hybrid model that combines the two improves automatic retinal layer segmentation from OCT images in RP. Translational Relevance A hybrid deep machine learning model composed of CNNs with different architectures can be more effective than either model separately for automatic analysis of SD-OCT scan images, which is becoming increasingly necessary with current high-resolution, high-density volume scans.
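EZ width, one of the metrics compared between the models and human graders above, can be derived from a segmented EZ band as its horizontal extent in a B-scan; a minimal sketch assuming a binary EZ mask and a known lateral scale (both hypothetical here):

```python
import numpy as np

def ez_width_microns(ez_mask, microns_per_pixel):
    """EZ width = lateral extent of columns where the EZ band is detected."""
    cols = np.where(ez_mask.any(axis=0))[0]
    if cols.size == 0:
        return 0.0  # no EZ detected in this B-scan
    return (cols[-1] - cols[0] + 1) * microns_per_pixel

# toy B-scan mask: EZ present in columns 10..59 at an assumed 10.0 um/pixel
mask = np.zeros((5, 100), dtype=bool)
mask[2, 10:60] = True
print(ez_width_microns(mask, 10.0))  # 500.0
```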
Affiliation(s)
- Yi-Zhong Wang
- Retina Foundation of the Southwest, Dallas, TX, USA; Department of Ophthalmology, University of Texas Southwestern Medical Center at Dallas, Dallas, TX, USA
| | - Wenxuan Wu
- Retina Foundation of the Southwest, Dallas, TX, USA
| | - David G Birch
- Retina Foundation of the Southwest, Dallas, TX, USA; Department of Ophthalmology, University of Texas Southwestern Medical Center at Dallas, Dallas, TX, USA
| |
17
Huang D, Heath Jeffery RC, Aung-Htut MT, McLenachan S, Fletcher S, Wilton SD, Chen FK. Stargardt disease and progress in therapeutic strategies. Ophthalmic Genet 2021; 43:1-26. [PMID: 34455905 DOI: 10.1080/13816810.2021.1966053]
Abstract
Background: Stargardt disease (STGD1) is an autosomal recessive retinal dystrophy due to mutations in ABCA4, characterized by subretinal deposition of lipofuscin-like substances and bilateral centrifugal vision loss. Despite the tremendous progress made in the understanding of STGD1, there are no approved treatments to date. This review examines the challenges in the development of an effective STGD1 therapy. Materials and Methods: A literature review was performed through to June 2021 summarizing the spectrum of retinal phenotypes in STGD1, the molecular biology of the ABCA4 protein, the in vivo and in vitro models used to investigate the mechanisms of ABCA4 mutations, and current clinical trials. Results: STGD1 phenotypic variability remains a challenge for clinical trial design and patient selection. Pre-clinical development of therapeutic options has been limited by the lack of animal models reflecting the diverse phenotypic spectrum of STGD1. Patient-derived cell lines have facilitated the characterization of splice mutations, but the clinical presentation is not always predicted by the effect of specific mutations on retinoid metabolism in cellular models. Current therapies primarily aim to delay vision loss, whilst strategies to restore vision are less well developed. Conclusions: STGD1 therapy development can be accelerated by a deeper understanding of genotype-phenotype correlations.
Affiliation(s)
- Di Huang
- Centre for Molecular Medicine and Innovative Therapeutics, Murdoch University, Western Australia, Australia; Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), The University of Western Australia, Nedlands, Western Australia, Australia; Perron Institute for Neurological and Translational Science & The University of Western Australia, Nedlands, Western Australia, Australia
| | - Rachael C Heath Jeffery
- Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), the University of Western Australia, Nedlands, Western Australia, Australia
| | - May Thandar Aung-Htut
- Centre for Molecular Medicine and Innovative Therapeutics, Murdoch University, Western Australia, Australia; Perron Institute for Neurological and Translational Science & The University of Western Australia, Nedlands, Western Australia, Australia
| | - Samuel McLenachan
- Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), the University of Western Australia, Nedlands, Western Australia, Australia
| | - Sue Fletcher
- Centre for Molecular Medicine and Innovative Therapeutics, Murdoch University, Western Australia, Australia; Perron Institute for Neurological and Translational Science & The University of Western Australia, Nedlands, Western Australia, Australia
| | - Steve D Wilton
- Centre for Molecular Medicine and Innovative Therapeutics, Murdoch University, Western Australia, Australia; Perron Institute for Neurological and Translational Science & The University of Western Australia, Nedlands, Western Australia, Australia
| | - Fred K Chen
- Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), The University of Western Australia, Nedlands, Western Australia, Australia; Australian Inherited Retinal Disease Registry and DNA Bank, Department of Medical Technology and Physics, Sir Charles Gairdner Hospital, Nedlands, Western Australia, Australia; Department of Ophthalmology, Royal Perth Hospital, Perth, Western Australia, Australia; Department of Ophthalmology, Perth Children's Hospital, Nedlands, Western Australia, Australia
| |
18
Heath Jeffery RC, Chen FK. Stargardt disease: Multimodal imaging: A review. Clin Exp Ophthalmol 2021; 49:498-515. [PMID: 34013643 PMCID: PMC8366508 DOI: 10.1111/ceo.13947]
Abstract
Stargardt disease (STGD1) is an autosomal recessive retinal dystrophy, characterised by bilateral progressive central vision loss and subretinal deposition of lipofuscin-like substances. Recent advances in molecular diagnosis and therapeutic options are complemented by the increasing recognition of new multimodal imaging biomarkers that may predict genotype and disease progression. Unique non-invasive imaging features of STGD1 are useful for gene variant interpretation and may even provide insight into the underlying molecular pathophysiology. In addition, pathognomonic imaging features of STGD1 have been used to train neural networks to improve time efficiency in lesion segmentation and disease progression measurements. This review will discuss the role of key imaging modalities, correlate imaging signs across varied STGD1 presentations and illustrate the use of multimodal imaging as an outcome measure in determining the efficacy of emerging STGD1-specific therapies.
Affiliation(s)
- Rachael C. Heath Jeffery
- Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), The University of Western Australia, Nedlands, Western Australia, Australia
- Department of Ophthalmology, Royal Perth Hospital, Perth, Western Australia, Australia
| | - Fred K. Chen
- Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), The University of Western Australia, Nedlands, Western Australia, Australia
- Department of Ophthalmology, Royal Perth Hospital, Perth, Western Australia, Australia
- Australian Inherited Retinal Disease Registry and DNA Bank, Department of Medical Technology and Physics, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Department of Ophthalmology, Perth Children's Hospital, Nedlands, Western Australia, Australia
| |
19
Mishra Z, Wang Z, Sadda SR, Hu Z. Automatic Segmentation in Multiple OCT Layers For Stargardt Disease Characterization Via Deep Learning. Transl Vis Sci Technol 2021; 10:24. [PMID: 34004000 PMCID: PMC8083069 DOI: 10.1167/tvst.10.4.24]
Abstract
Purpose This study sought to perform automated segmentation of 11 retinal layers and Stargardt-associated features on spectral-domain optical coherence tomography (SD-OCT) images and to analyze differences between normal eyes and eyes diagnosed with Stargardt disease. Methods Automated segmentation was accomplished through application of the deep learning-shortest path (DL-SP) framework, a shortest path segmentation approach that is enhanced by a deep learning fully convolutional neural network. To compare normal eyes and eyes diagnosed with Stargardt disease, various retinal layer thickness and intensity feature maps associated with the outer retinal layers were generated. Results The automated DL-SP approach achieved a mean difference within a subpixel accuracy range for all layers when compared to manually traced layers by expert graders. The algorithm achieved mean and absolute mean differences in border positions for Stargardt features of -0.11 ± 4.17 pixels and 1.92 ± 3.71 pixels, respectively. In several of the feature maps generated, the characteristic Stargardt features of flecks and atrophic-appearing lesions were readily visualized. Conclusions To the best of our knowledge, this is the first automated algorithm for 11 retinal layer segmentation on OCT in eyes with Stargardt disease, and, furthermore, the feature differences found between eyes diagnosed with Stargardt disease and normal eyes may inform new insights and the better understanding of retinal characteristic morphologic changes caused by Stargardt disease. Translational Relevance The automated algorithm's performance and the feature differences found using the algorithm's segmentation support the future applications of SD-OCT for the quantitative monitoring of Stargardt disease.
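The retinal layer thickness maps used above to compare Stargardt and normal eyes follow from subtracting adjacent boundary positions and scaling by the axial resolution; a hedged sketch (the boundary values and micron-per-pixel scale are illustrative, not from the study):

```python
import numpy as np

def layer_thickness_map(upper_boundary, lower_boundary, microns_per_pixel):
    """Per-A-scan layer thickness between two boundary position arrays.

    Boundaries are given as row indices (pixels) per A-scan; the result is
    thickness in microns, assuming a uniform axial scale.
    """
    upper = np.asarray(upper_boundary, dtype=float)
    lower = np.asarray(lower_boundary, dtype=float)
    return (lower - upper) * microns_per_pixel

# toy example: two A-scans, assumed axial scale of 3.87 um/pixel
thickness = layer_thickness_map([10, 11], [30, 29], 3.87)
print(thickness)
```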
Affiliation(s)
- Zubin Mishra
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Los Angeles, CA, USA
- Case Western Reserve University School of Medicine, Cleveland, OH, USA
| | - Ziyuan Wang
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Los Angeles, CA, USA
- The University of California, Los Angeles, CA, USA
| | - SriniVas R. Sadda
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Los Angeles, CA, USA
- The University of California, Los Angeles, CA, USA
| | - Zhihong Hu
- Doheny Image Analysis Laboratory, Doheny Eye Institute, Los Angeles, CA, USA
| |