1
Moon B, Poletti M, Roorda A, Tiruveedhula P, Liu SH, Linebach G, Rucci M, Rolland JP. Alignment, calibration, and validation of an adaptive optics scanning laser ophthalmoscope for high-resolution human foveal imaging. Applied Optics 2024; 63:730-742. [PMID: 38294386] [PMCID: PMC11062499] [DOI: 10.1364/ao.504283] [Received: 08/28/2023] [Accepted: 12/26/2023] [Indexed: 02/01/2024]
Abstract
Advances in adaptive optics scanning laser ophthalmoscope (AOSLO) technology have enabled cones in the human fovea to be resolved in healthy eyes with normal vision and low to moderate refractive errors, providing new insight into human foveal anatomy, visual perception, and retinal degenerative diseases. These high-resolution ophthalmoscopes require careful alignment of each optical subsystem to ensure diffraction-limited imaging performance, which is necessary for resolving the smallest foveal cones. This paper presents a systematic and rigorous methodology for building, aligning, calibrating, and testing an AOSLO designed for imaging the cone mosaic of the central fovea in humans with cellular resolution. This methodology uses a two-stage alignment procedure and thorough system testing to achieve diffraction-limited performance. Results from retinal imaging of healthy human subjects under 30 years of age with refractive errors of less than 3.5 diopters using either 680 nm or 840 nm light show that the system can resolve cones at the very center of the fovea, the region where the cones are smallest and most densely packed.
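Why diffraction-limited performance matters here can be illustrated with a quick Rayleigh-criterion estimate (a sketch, not a calculation from the paper; the 6 mm pupil diameter and 16.7 mm schematic-eye posterior nodal distance are illustrative assumptions):

```python
def rayleigh_spot_um(wavelength_nm, pupil_mm, eye_focal_mm=16.7):
    """Rayleigh-criterion resolution on the retina, in micrometers.

    Angular resolution 1.22*lambda/D, projected through an assumed
    schematic-eye posterior nodal distance (default 16.7 mm).
    """
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (pupil_mm * 1e-3)
    return theta_rad * (eye_focal_mm * 1e-3) * 1e6  # meters -> micrometers

# 680 nm light through a dilated 6 mm pupil gives a spot of roughly 2.3 um,
# comparable to the ~2 um diameter of the smallest foveal cones -- so any
# residual aberration beyond the diffraction limit blurs them together.
print(round(rayleigh_spot_um(680, 6.0), 2))
```

The same estimate explains why a longer wavelength (840 nm) makes the central cones harder to resolve: the spot scales linearly with wavelength.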
Affiliation(s)
- Benjamin Moon
- The Institute of Optics, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Martina Poletti
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Department of Neuroscience, University of Rochester, Rochester, NY 14627, USA
- Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California Berkeley, Berkeley, CA 94720, USA
- Pavan Tiruveedhula
- Herbert Wertheim School of Optometry and Vision Science, University of California Berkeley, Berkeley, CA 94720, USA
- Soh Hang Liu
- The Institute of Optics, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Glory Linebach
- The Institute of Optics, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Michele Rucci
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Jannick P. Rolland
- The Institute of Optics, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Department of Biomedical Engineering, University of Rochester, Rochester, NY 14627, USA
2
Ashourizadeh H, Fakhri M, Hassanpour K, Masoudi A, Jalali S, Roshandel D, Chen FK. Pearls and Pitfalls of Adaptive Optics Ophthalmoscopy in Inherited Retinal Diseases. Diagnostics (Basel) 2023; 13:2413. [PMID: 37510157] [PMCID: PMC10377978] [DOI: 10.3390/diagnostics13142413] [Received: 06/01/2023] [Revised: 07/12/2023] [Accepted: 07/18/2023] [Indexed: 07/30/2023]
Abstract
Adaptive optics (AO) retinal imaging enables individual photoreceptors to be visualized in the clinical setting. AO imaging can be a powerful clinical tool for detecting photoreceptor degeneration at a cellular level that might be overlooked through conventional structural assessments, such as spectral-domain optical coherence tomography (SD-OCT). Therefore, AO imaging has gained significant interest in the study of photoreceptor degeneration, one of the most common causes of inherited blindness. Growing evidence suggests that AO imaging may be useful for diagnosing early-stage retinal dystrophy before it becomes apparent on fundus examination or conventional retinal imaging. In addition, serial AO imaging may detect structural disease progression in early-stage disease over a shorter period compared to SD-OCT. Although AO imaging is gaining popularity as a structural endpoint in clinical trials, the results should be interpreted with caution due to several pitfalls, including the lack of standardized imaging and image analysis protocols, frequent ocular comorbidities that affect image quality, and significant interindividual variation of normal values. Herein, we summarize the current state of the art in AO imaging and review its potential applications, limitations, and pitfalls in patients with inherited retinal diseases.
Affiliation(s)
- Maryam Fakhri
- Ophthalmic Research Center, Research Institute for Ophthalmology and Vision Sciences, Shahid Beheshti University of Medical Sciences, Tehran 16666, Iran
- Kiana Hassanpour
- Ophthalmic Research Center, Research Institute for Ophthalmology and Vision Sciences, Shahid Beheshti University of Medical Sciences, Tehran 16666, Iran
- Ali Masoudi
- Stein Eye Institute, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
- Sattar Jalali
- Department of Physics, Central Tehran Branch, Islamic Azad University, Tehran 19558, Iran
- Danial Roshandel
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Nedlands, WA 6009, Australia
- Ocular Tissue Engineering Laboratory, Lions Eye Institute, Nedlands, WA 6009, Australia
- Fred K Chen
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Nedlands, WA 6009, Australia
- Ocular Tissue Engineering Laboratory, Lions Eye Institute, Nedlands, WA 6009, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, VIC 3002, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, VIC 3010, Australia
3
Soltanian-Zadeh S, Liu Z, Liu Y, Lassoued A, Cukras CA, Miller DT, Hammer DX, Farsiu S. Deep learning-enabled volumetric cone photoreceptor segmentation in adaptive optics optical coherence tomography images of normal and diseased eyes. Biomedical Optics Express 2023; 14:815-833. [PMID: 36874491] [PMCID: PMC9979662] [DOI: 10.1364/boe.478693] [Received: 11/02/2022] [Revised: 01/11/2023] [Accepted: 01/12/2023] [Indexed: 06/11/2023]
Abstract
Objective quantification of photoreceptor cell morphology, such as cell diameter and outer segment length, is crucial for early, accurate, and sensitive diagnosis and prognosis of retinal neurodegenerative diseases. Adaptive optics optical coherence tomography (AO-OCT) provides three-dimensional (3-D) visualization of photoreceptor cells in the living human eye. The current gold standard for extracting cell morphology from AO-OCT images involves the tedious process of 2-D manual marking. To automate this process and extend to 3-D analysis of the volumetric data, we propose a comprehensive deep learning framework to segment individual cone cells in AO-OCT scans. Our automated method achieved human-level performance in assessing cone photoreceptors of healthy and diseased participants captured with three different AO-OCT systems representing two different types of point scanning OCT: spectral domain and swept source.
Affiliation(s)
- Zhuolin Liu
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Yan Liu
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Ayoub Lassoued
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Catherine A. Cukras
- National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Donald T. Miller
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Daniel X. Hammer
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
4
Zhou M, Doble N, Choi SS, Jin T, Xu C, Parthasarathy S, Ramnath R. Using deep learning for the automated identification of cone and rod photoreceptors from adaptive optics imaging of the human retina. Biomedical Optics Express 2022; 13:5082-5097. [PMID: 36425636] [PMCID: PMC9664895] [DOI: 10.1364/boe.470071] [Received: 07/11/2022] [Revised: 08/13/2022] [Accepted: 08/16/2022] [Indexed: 05/02/2023]
Abstract
Adaptive optics imaging has enabled enhanced in vivo retinal visualization of individual cone and rod photoreceptors. Effective analysis of such high-resolution, feature-rich images requires automated, robust algorithms. This paper describes RC-UPerNet, a novel deep learning algorithm for identifying both types of photoreceptor; the method was evaluated on images from the central and peripheral retina extending out to 30° from the fovea in the nasal and temporal directions. Precision, recall, and Dice scores were 0.928, 0.917, and 0.922, respectively, for cones, and 0.876, 0.867, and 0.870 for rods. These scores agree well with human graders and exceed those of previously reported AI-based approaches.
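As a quick consistency check on these numbers: for a detection task, the Dice score equals the F1 score, i.e. the harmonic mean of precision and recall. The reported cone scores match this relation to three decimals, and the rod scores differ by only 0.001 (plausibly per-image averaging or rounding). A minimal sketch:

```python
def dice_from_pr(precision, recall):
    """Dice coefficient / F1 score: harmonic mean of precision and recall."""
    return 2.0 * precision * recall / (precision + recall)

print(round(dice_from_pr(0.928, 0.917), 3))  # 0.922, matching the reported cone Dice
print(round(dice_from_pr(0.876, 0.867), 3))  # 0.871, vs. 0.870 reported for rods
```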
Affiliation(s)
- Mengxi Zhou
- The Ohio State University, Department of Computer Science and Engineering, 2015 Neil Ave., Columbus, OH 43210, USA
- Nathan Doble
- The Ohio State University, College of Optometry, 338 W 10th Ave., Columbus, OH 43210, USA
- The Ohio State University, Department of Ophthalmology and Visual Science, Havener Eye Institute, 915 Olentangy River Road, Columbus, OH 43212, USA
- Stacey S. Choi
- The Ohio State University, College of Optometry, 338 W 10th Ave., Columbus, OH 43210, USA
- The Ohio State University, Department of Ophthalmology and Visual Science, Havener Eye Institute, 915 Olentangy River Road, Columbus, OH 43212, USA
- Tianyu Jin
- The Ohio State University, Department of Computer Science and Engineering, 2015 Neil Ave., Columbus, OH 43210, USA
- Chenwei Xu
- The Ohio State University, Department of Statistics, 127 Pomerene Hall, 1760 Neil Ave, Columbus, OH 43212, USA
- Srinivasan Parthasarathy
- The Ohio State University, Department of Computer Science and Engineering, 2015 Neil Ave., Columbus, OH 43210, USA
- Rajiv Ramnath
- The Ohio State University, Department of Computer Science and Engineering, 2015 Neil Ave., Columbus, OH 43210, USA
5
Liu J, Shen C, Aguilera N, Cukras C, Hufnagel RB, Zein WM, Liu T, Tam J. Active Cell Appearance Model Induced Generative Adversarial Networks for Annotation-Efficient Cell Segmentation and Identification on Adaptive Optics Retinal Images. IEEE Transactions on Medical Imaging 2021; 40:2820-2831. [PMID: 33507868] [PMCID: PMC8548993] [DOI: 10.1109/tmi.2021.3055483] [Indexed: 05/29/2023]
Abstract
Data annotation is a fundamental precursor for establishing large training sets to effectively apply deep learning methods to medical image analysis. For cell segmentation, obtaining high-quality annotations is an expensive process that usually requires manual grading by experts. This work introduces an approach to efficiently generate annotated images, called "A-GANs", created by combining an active cell appearance model (ACAM) with conditional generative adversarial networks (C-GANs). ACAM is a statistical model that captures a realistic range of cell characteristics and is used to ensure that the image statistics of generated cells are guided by real data. C-GANs utilize cell contours generated by ACAM to produce cells that match input contours. By pairing ACAM-generated contours with A-GANs-generated images, high-quality annotated images can be generated efficiently. Experimental results on adaptive optics (AO) retinal images showed that A-GANs robustly synthesize realistic, artificial images whose cell distributions are exquisitely specified by ACAM. The cell segmentation performance using as few as 64 manually annotated real AO images combined with 248 artificially generated images from A-GANs was similar to that of using 248 manually annotated real images alone (Dice coefficients of 88% for both). Finally, application to rare diseases in which images exhibit never-before-seen characteristics demonstrated improvements in cell segmentation without the need for incorporating manual annotations from these new retinal images. Overall, A-GANs introduce a methodology for generating high-quality annotated data that statistically captures the characteristics of any desired dataset and can be used to train deep-learning-based medical image analysis applications more efficiently.
6
Evaluation of focus and deep learning methods for automated image grading and factors influencing image quality in adaptive optics ophthalmoscopy. Sci Rep 2021; 11:16641. [PMID: 34404857] [PMCID: PMC8371000] [DOI: 10.1038/s41598-021-96068-2] [Received: 10/13/2020] [Accepted: 07/19/2021] [Indexed: 11/08/2022]
Abstract
Adaptive optics flood illumination ophthalmoscopy (AO-FIO) is an established imaging tool in the investigation of retinal diseases. However, the clinical interpretation of AO-FIO images can be challenging due to varied image quality. Therefore, image quality assessment is essential before interpretation. An image assessment tool will also assist further work on improving image quality, either during acquisition or post-processing. In this paper, we describe, validate, and compare two automated image quality assessment methods: the energy of the Laplacian focus operator (LAPE; not commonly used but easily implemented) and a convolutional neural network (CNN; an effective but more complex approach). We also evaluate the effects of subject age, axial length, refractive error, fixation stability, disease status, and retinal location on AO-FIO image quality. Based on analysis of 10,250 images of 50 × 50 μm size, at 41 retinal locations, from 50 subjects, we demonstrate that the CNN slightly outperforms LAPE in image quality assessment. The CNN achieves an accuracy of 89%, whereas the LAPE metric achieves 73% and 80% (for linear regression and random forest multiclass classifier methods, respectively) compared to ground truth. Furthermore, retinal location, age, and disease status are factors that can influence the likelihood of poor image quality.
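The LAPE metric mentioned above is indeed easy to implement: apply a discrete Laplacian to the image and sum the squared response, so sharp high-frequency detail scores high while defocused blur scores low. A minimal NumPy sketch (a generic formulation; the paper's exact kernel and normalization may differ):

```python
import numpy as np

def energy_of_laplacian(image):
    """LAPE focus metric: sum of squared responses to a 4-neighbour
    discrete Laplacian, evaluated on the image interior."""
    img = np.asarray(image, dtype=float)
    # Laplacian via shifted differences: up + down + left + right - 4*center
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
           - 4.0 * img[1:-1, 1:-1])
    return float(np.sum(lap ** 2))

# A sharp step edge scores higher than a smooth ramp of the same contrast.
sharp = np.zeros((16, 16)); sharp[:, 8:] = 1.0
smooth = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
print(energy_of_laplacian(sharp) > energy_of_laplacian(smooth))  # True
```

Because the raw score is unbounded and content-dependent, using it for grading requires a classifier on top (the linear regression and random forest variants reported above).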
7
Xie H, Zeng X, Lei H, Du J, Wang J, Zhang G, Cao J, Wang T, Lei B. Cross-attention multi-branch network for fundus diseases classification using SLO images. Med Image Anal 2021; 71:102031. [PMID: 33798993] [DOI: 10.1016/j.media.2021.102031] [Received: 10/13/2020] [Revised: 01/24/2021] [Accepted: 03/03/2021] [Indexed: 12/23/2022]
Abstract
Fundus disease classification is vital for human health. However, most existing methods detect diseases from single-angle fundus images, which lack pathological information. To address this limitation, this paper proposes a novel deep learning method for fundus disease classification tasks using ultra-wide-field scanning laser ophthalmoscopy (SLO) images, which have an ultra-wide field of view of 180-200°. The proposed deep model consists of a multi-branch network, an atrous spatial pyramid pooling (ASPP) module, a cross-attention module, and a depth-wise attention module. Specifically, the multi-branch network employs the ResNet-34 model as the backbone to extract feature information, where the two-branch ResNet-34 model is followed by the ASPP module to extract multi-scale spatial contextual features by setting different dilation rates. The depth-wise attention module provides a global attention map from the multi-branch network, which enables the network to focus on the salient targets of interest. The cross-attention module adopts a cross-fusion mode to fuse the channel and spatial attention maps from the two-branch ResNet-34 model, which enhances the representation of disease-specific features. Extensive experiments on our collected SLO images and two publicly available datasets demonstrate that the proposed method outperforms state-of-the-art methods and achieves promising classification performance on fundus diseases.
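For readers unfamiliar with ASPP: parallel 3×3 convolutions with increasing dilation rates sample context at multiple scales while preserving spatial resolution. A minimal PyTorch sketch (an illustrative module with assumed channel counts and dilation rates, not the authors' implementation):

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated 3x3 convolutions
    whose outputs are concatenated and fused by a 1x1 convolution."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        # padding == dilation keeps the spatial size unchanged for 3x3 kernels
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))

x = torch.randn(1, 64, 32, 32)   # one 64-channel feature map
y = ASPP(64, 128)(x)
print(tuple(y.shape))  # (1, 128, 32, 32): multi-scale context, same spatial size
```

Larger dilation rates widen the receptive field without extra parameters, which is why ASPP suits ultra-wide-field images where lesions appear at very different scales.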
Affiliation(s)
- Hai Xie
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Xianlu Zeng
- Shenzhen Eye Hospital, Shenzhen Key Ophthalmic Laboratory, Health Science Center, Shenzhen University, The Second Affiliated Hospital of Jinan University, Shenzhen, China
- Haijun Lei
- Guangdong Province Key Laboratory of Popular High-performance Computers, School of Computer and Software Engineering, Shenzhen University, Shenzhen, China
- Jie Du
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Jiantao Wang
- Shenzhen Eye Hospital, Shenzhen Key Ophthalmic Laboratory, Health Science Center, Shenzhen University, The Second Affiliated Hospital of Jinan University, Shenzhen, China
- Guoming Zhang
- Shenzhen Eye Hospital, Shenzhen Key Ophthalmic Laboratory, Health Science Center, Shenzhen University, The Second Affiliated Hospital of Jinan University, Shenzhen, China
- Jiuwen Cao
- Key Lab for IOT and Information Fusion Technology of Zhejiang, Artificial Intelligence Institute, Hangzhou Dianzi University, Hangzhou, China
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
8
Song B. Optimization of the Progressive Image Mosaicing Algorithm in Fine Art Image Fusion for Virtual Reality. IEEE Access 2021; 9:69559-69572. [DOI: 10.1109/access.2020.3022484] [Indexed: 09/01/2023]
9
Liu J, Han YJ, Liu T, Aguilera N, Tam J. Spatially Aware Dense-LinkNet Based Regression Improves Fluorescent Cell Detection in Adaptive Optics Ophthalmic Images. IEEE J Biomed Health Inform 2020; 24:3520-3528. [PMID: 32750947] [DOI: 10.1109/jbhi.2020.3004271] [Indexed: 12/31/2022]
Abstract
Retinal pigment epithelial (RPE) cells play an important role in nourishing retinal neurosensory photoreceptor cells, and numerous blinding diseases are associated with RPE defects. Their fluorescence signature can now be visualized in the living human eye using adaptive optics (AO) imaging combined with indocyanine green (ICG), which motivates us to develop an automated RPE detection method to improve the quantitative evaluation of RPE status in patients. This paper proposes a spatially-aware, Dense-LinkNet-based regression approach to improve the detection of in vivo fluorescent cell patterns, achieving precision, recall, and F1-Score of 93.6 ± 4.3%, 81.4 ± 9.5%, and 86.7 ± 5.7%, respectively. These results demonstrate the utility of incorporating spatial inputs into a deep learning-based regression framework for cell detection.