1
Adhan I, Warr E, Grieshop J, Kreis J, Nikezic D, Walesa A, Hemsworth K, Cooper RF, Carroll J. Intervisit Reproducibility of Foveal Cone Density Metrics. Transl Vis Sci Technol 2024; 13:18. [PMID: 38913007] [PMCID: PMC11205225] [DOI: 10.1167/tvst.13.6.18]
Abstract
Purpose: To assess the longitudinal reproducibility of foveal cone density metrics (peak cone density [PCD], cone density centroid [CDC], and 80th percentile isodensity contour area) in participants with normal vision.
Methods: Participants (n = 19; five male, 14 female) were imaged at two time points (average interval, 3.2 years) using an adaptive optics scanning light ophthalmoscope (AOSLO). Foveally centered regions of interest (ROIs) were extracted from AOSLO montages. Cone coordinate matrices were semiautomatically derived for each ROI, and cone mosaic metrics were calculated.
Results: On average, there were no significant changes in cone mosaic metrics between visits. The average ± SD PCD was 187,000 ± 20,000 cones/mm² and 189,000 ± 21,700 cones/mm² for visits 1 and 2, respectively (P = 0.52). The average ± SD density at the CDC was 183,000 ± 19,000 cones/mm² and 184,000 ± 20,800 cones/mm² (P = 0.78). The average ± SD 80th percentile isodensity contour area was 15,400 ± 1800 µm² and 15,600 ± 1910 µm² (P = 0.57).
Conclusions: Foveal cone mosaic density metrics were highly reproducible in the cohort examined here, although further study is required in more diverse populations.
Translational Relevance: Establishing normative longitudinal changes in foveal cone topography is key for interpreting longitudinal measures of foveal cone topography in patients with progressive retinal dystrophies.
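The two map-based metrics in this abstract can be made concrete with a short sketch. This is an illustrative reading, not the authors' code: the CDC is computed here as the density-weighted centroid of a 2-D cone density map, and the contour area as the area of the pixels at or above the 80th percentile of density values; the study's exact definitions and units may differ.

```python
import numpy as np

def cdc_and_contour_area(density, px_area_um2=1.0):
    """Illustrative versions of two foveal metrics from a 2-D cone
    density map (cones/mm^2 per pixel):
      - CDC: density-weighted centroid (row, col) of the map
      - contour area: area of pixels at/above the 80th percentile
    `px_area_um2` is the area of one pixel in um^2 (hypothetical unit)."""
    ys, xs = np.indices(density.shape)
    w = density / density.sum()                     # normalized weights
    cdc = (float((ys * w).sum()), float((xs * w).sum()))
    thresh = np.percentile(density, 80)             # 80th percentile of values
    area = float((density >= thresh).sum()) * px_area_um2
    return cdc, area
```

For a radially symmetric density peak, the CDC falls at the peak and the contour region covers roughly the top 20% of pixels.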
Affiliation(s)
- Iniya Adhan
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Emma Warr
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Jenna Grieshop
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Joint Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, USA
- Joseph Kreis
- Department of Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin, Milwaukee, WI, USA
- Danica Nikezic
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Ashleigh Walesa
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Katherine Hemsworth
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Robert F. Cooper
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Joint Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, USA
- Joseph Carroll
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Joint Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, USA
- Department of Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin, Milwaukee, WI, USA
2
Warr E, Grieshop J, Cooper RF, Carroll J. The effect of sampling window size on topographical maps of foveal cone density. Front Ophthalmol 2024; 4:1348950. [PMID: 38984138] [PMCID: PMC11182112] [DOI: 10.3389/fopht.2024.1348950]
Abstract
Purpose: To characterize the effect of sampling window size on maps of foveal cone density derived from adaptive optics scanning light ophthalmoscope (AOSLO) images of the cone mosaic.
Methods: Forty-four AOSLO-derived montages of the foveal cone mosaic (300 × 300 µm) from 44 individuals with normal vision were used for this study. Cone photoreceptor coordinates were semi-automatically identified by one experienced grader. From these coordinates, cone density matrices across each foveal montage were derived using 10 different sampling window sizes containing 5, 10, 15, 20, 40, 60, 80, 100, 150, or 200 cones. For all 440 density matrices, we extracted the location and value of peak cone density (PCD), the cone density centroid (CDC) location, and cone density at the CDC.
Results: Across all window sizes, PCD values were larger than those extracted at the CDC location, though the difference between these density values decreased as the sampling window size increased (p < 0.0001). Overall, both PCD (r = -0.8099, p = 0.0045) and density at the CDC (r = -0.7596, p = 0.0108) decreased with increasing sampling window size. This reduction was more pronounced for PCD, with a 27.8% lower PCD value on average when using the 200-cone versus the 5-cone window (compared to only a 3.5% reduction for density at the CDC between the same window sizes). While the PCD and CDC locations did not coincide within a given montage, there was no significant relationship between this PCD-CDC offset and sampling window size (p = 0.8919). The CDC location was less variable across sampling windows, with an average per-participant 95% confidence ellipse area across the 10 window sizes of 47.56 µm² (compared to 844.10 µm² for the PCD location, p < 0.0001).
Conclusion: CDC metrics appear more stable across varying sampling window sizes than PCD metrics. Understanding how density values change with the method used to sample the cone mosaic may facilitate comparison of cone density data across studies.
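The sampling-window dependence described above can be illustrated with a toy density estimator. This is a sketch under one common convention (a circular window grown until it contains the n nearest cones), not the authors' implementation; their windowing scheme may differ.

```python
import numpy as np

def local_density(coords, point, n):
    """Local cone density at `point` (same length units as `coords`),
    using a circular sampling window just large enough to contain the
    n nearest cones: density = n / (pi * r_n^2)."""
    d = np.linalg.norm(coords - np.asarray(point, float), axis=1)
    r_n = np.sort(d)[n - 1]            # radius to the n-th nearest cone
    return n / (np.pi * r_n ** 2)
```

Because a larger window averages over a wider (and, away from the peak, less dense) neighborhood, density estimated with a 200-cone window is expected to be lower at the foveal center than with a 5-cone window, mirroring the reported PCD behavior.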
Affiliation(s)
- Emma Warr
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, United States
- Jenna Grieshop
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, United States
- Joint Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, United States
- Robert F Cooper
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, United States
- Joint Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, United States
- Joseph Carroll
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, United States
- Joint Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, United States
- Department of Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin, Milwaukee, WI, United States
3
Liu R, Wang X, Hoshi S, Zhang Y. Substrip-based registration and automatic montaging of adaptive optics retinal images. Biomed Opt Express 2024; 15:1311-1330. [PMID: 38404341] [PMCID: PMC10890855] [DOI: 10.1364/boe.514447]
Abstract
Precise registration and montaging are critical for high-resolution adaptive optics retinal image analysis but are challenged by rapid eye movement. We present a substrip-based method to improve image registration and facilitate automatic montaging of adaptive optics scanning laser ophthalmoscopy (AOSLO) images. The program first batches consecutive images into groups based on a translation threshold and selects the image with minimal distortion within each group as the reference. Within each group, the software divides each image into multiple strips and estimates each strip's translation by calculating the normalized cross-correlation with the reference frame using two substrips at both ends of the whole strip, producing a registered image. The software then aligns the registered images of all groups, also using substrip-based registration, thereby generating a montage with cell-for-cell precision in the overlapping areas of adjacent frames. The algorithm was evaluated with AOSLO images acquired in human subjects with normal macular health and in patients with age-related macular degeneration (AMD). Images with a motion amplitude of up to 448 pixels in the fast-scanner direction over a frame of 512 × 512 pixels can be precisely registered. Automatic montaging spanning up to 22.6 degrees on the retina was achieved with cell-for-cell precision and a low misplacement rate of 0.07% (11/16,501 frames) in normal eyes and 0.51% (149/29,051 frames) in eyes with AMD. Substrip-based registration significantly improved AOSLO registration accuracy.
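The core operation — locating a strip in the reference via normalized cross-correlation (NCC) using substrips at the strip's two ends — can be sketched as follows. This is a brute-force illustration of the idea, not the paper's optimized implementation; the function names and the 16-pixel substrip width are arbitrary choices here.

```python
import numpy as np

def ncc_shift(ref, patch):
    """Locate `patch` inside `ref` by brute-force normalized
    cross-correlation; returns the (row, col) of the best match."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-12)
    best, best_pos = -np.inf, (0, 0)
    for y in range(ref.shape[0] - ph + 1):
        for x in range(ref.shape[1] - pw + 1):
            w = ref[y:y + ph, x:x + pw]
            c = np.mean(p * (w - w.mean()) / (w.std() + 1e-12))
            if c > best:
                best, best_pos = c, (y, x)
    return best_pos

def substrip_shifts(ref, strip, sub_w=16):
    """Estimate a strip's translation from substrips at both ends
    (substrip-based registration). Each end is located independently;
    agreement between the two estimates flags a reliable strip."""
    left = ncc_shift(ref, strip[:, :sub_w])
    ry, rx = ncc_shift(ref, strip[:, -sub_w:])
    right = (ry, rx - (strip.shape[1] - sub_w))  # refer back to strip origin
    return left, right
```

Disagreement between the two end estimates indicates within-strip distortion, which is one motivation for measuring both ends rather than correlating the whole strip at once.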
Affiliation(s)
- Ruixue Liu
- Doheny Eye Institute, Pasadena, CA 91103, USA
- Sujin Hoshi
- Doheny Eye Institute, Pasadena, CA 91103, USA
- Department of Ophthalmology, University of California - Los Angeles, Los Angeles, CA 90024, USA
- Department of Ophthalmology, University of Tsukuba, Ibaraki, Japan
- Yuhua Zhang
- Doheny Eye Institute, Pasadena, CA 91103, USA
- Department of Ophthalmology, University of California - Los Angeles, Los Angeles, CA 90024, USA
4
Williams DR, Burns SA, Miller DT, Roorda A. Evolution of adaptive optics retinal imaging [Invited]. Biomed Opt Express 2023; 14:1307-1338. [PMID: 36950228] [PMCID: PMC10026580] [DOI: 10.1364/boe.485371]
Abstract
This review describes the progress that has been achieved since adaptive optics (AO) was incorporated into the ophthalmoscope a quarter of a century ago, transforming our ability to image the retina at a cellular spatial scale inside the living eye. The review starts with a comprehensive tabulation of AO papers in the field and then describes the technological advances that have occurred, notably through combining AO with other imaging modalities, including confocal, fluorescence, phase contrast, and optical coherence tomography. These advances have made possible many scientific discoveries, from the first maps of the topography of the trichromatic cone mosaic to exquisitely sensitive measures of optical and structural changes in photoreceptors in response to light. The future evolution of this technology is poised to offer an increasing array of tools to measure and monitor in vivo retinal structure and function with improved resolution and control.
Affiliation(s)
- David R. Williams
- The Institute of Optics and the Center for Visual Science, University of Rochester, Rochester, NY, USA
- Stephen A. Burns
- School of Optometry, Indiana University at Bloomington, Bloomington, IN, USA
- Donald T. Miller
- School of Optometry, Indiana University at Bloomington, Bloomington, IN, USA
- Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California at Berkeley, Berkeley, CA, USA
5
Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, Drieghe D, Dunn MJ, Ettinger U, Fiedler S, Foulsham T, van der Geest JN, Hansen DW, Hutton SB, Kasneci E, Kingstone A, Knox PC, Kok EM, Lee H, Lee JY, Leppänen JM, Macknik S, Majaranta P, Martinez-Conde S, Nuthmann A, Nyström M, Orquin JL, Otero-Millan J, Park SY, Popelka S, Proudlock F, Renkewitz F, Roorda A, Schulte-Mecklenbeck M, Sharif B, Shic F, Shovman M, Thomas MG, Venrooij W, Zemblys R, Hessels RS. Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2023; 55:364-416. [PMID: 35384605] [PMCID: PMC9535040] [DOI: 10.3758/s13428-021-01762-8]
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
Affiliation(s)
- Kenneth Holmqvist
- Department of Psychology, Nicolaus Copernicus University, Torun, Poland
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Department of Psychology, Regensburg University, Regensburg, Germany
- Saga Lee Örbom
- Department of Psychology, Regensburg University, Regensburg, Germany
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Robert G Alexander
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
- Pieter Blignaut
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Lewis L Chuang
- Department of Ergonomics, Leibniz Institute for Working Environments and Human Factors, Dortmund, Germany
- Institute of Informatics, LMU Munich, Munich, Germany
- Denis Drieghe
- School of Psychology, University of Southampton, Southampton, UK
- Matt J Dunn
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Susann Fiedler
- Vienna University of Economics and Business, Vienna, Austria
- Tom Foulsham
- Department of Psychology, University of Essex, Essex, UK
- Dan Witzner Hansen
- Machine Learning Group, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Enkelejda Kasneci
- Human-Computer Interaction, University of Tübingen, Tübingen, Germany
- Paul C Knox
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Ellen M Kok
- Department of Education and Pedagogy, Division Education, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands
- Department of Online Learning and Instruction, Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, The Netherlands
- Helena Lee
- University of Southampton, Southampton, UK
- Joy Yeonjoo Lee
- School of Health Professions Education, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jukka M Leppänen
- Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- Stephen Macknik
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Päivi Majaranta
- TAUCHI Research Center, Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Susana Martinez-Conde
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Antje Nuthmann
- Institute of Psychology, University of Kiel, Kiel, Germany
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Jacob L Orquin
- Department of Management, Aarhus University, Aarhus, Denmark
- Center for Research in Marketing and Consumer Psychology, Reykjavik University, Reykjavik, Iceland
- Jorge Otero-Millan
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, Vienna, Austria
- Stanislav Popelka
- Department of Geoinformatics, Palacký University Olomouc, Olomouc, Czech Republic
- Frank Proudlock
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Frank Renkewitz
- Department of Psychology, University of Erfurt, Erfurt, Germany
- Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Bonita Sharif
- School of Computing, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
- Frederick Shic
- Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA
- Department of General Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Mark Shovman
- Eyeviation Systems, Herzliya, Israel
- Department of Industrial Design, Bezalel Academy of Arts and Design, Jerusalem, Israel
- Mervyn G Thomas
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Ward Venrooij
- Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Enschede, The Netherlands
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
6
Hofmann J, Domdei L, Jainta S, Harmening WM. Assessment of binocular fixational eye movements including cyclotorsion with split-field binocular scanning laser ophthalmoscopy. J Vis 2022; 22:5. [PMID: 36069941] [PMCID: PMC9465939] [DOI: 10.1167/jov.22.10.5]
Abstract
Fixational eye movements are a hallmark of human gaze behavior, yet little is known about how they interact between fellow eyes. Here, we designed, built, and validated a split-field binocular scanning laser ophthalmoscope to record high-resolution eye motion traces from both eyes of six observers during fixation under different binocular vergence conditions. In addition to microsaccades and drift, torsional eye motion could be extracted, with a spatial measurement error of less than 1 arcmin. Microsaccades were strongly coupled between fellow eyes under all conditions: no monocular microsaccades occurred, and no significant delay between microsaccade onsets across fellow eyes could be detected. Cyclotorsion was also firmly coupled between both eyes, typically occurring in conjugacy, with gradual changes during drift and abrupt changes during saccades.
Affiliation(s)
- Julia Hofmann
- Rheinische Friedrich-Wilhelms-Universität Bonn, University Eye Hospital, Bonn, Germany
- Fraunhofer Institute for Optronics, System Technologies and Image Exploitation IOSB, Karlsruhe, Germany (https://www.iosb.fraunhofer.de/en.html)
- Lennart Domdei
- Rheinische Friedrich-Wilhelms-Universität Bonn, University Eye Hospital, Bonn, Germany (https://ao.ukbonn.de/)
- Stephanie Jainta
- SRH University of Applied Sciences in North Rhine-Westphalia, Hamm, Germany (https://www.srh-hochschule-nrw.de/)
- Wolf M Harmening
- Rheinische Friedrich-Wilhelms-Universität Bonn, University Eye Hospital, Bonn, Germany (https://ao.ukbonn.de/)
7
Hu X, Yang Q. Real-time correction of image rotation with adaptive optics scanning light ophthalmoscopy. J Opt Soc Am A Opt Image Sci Vis 2022; 39:1663-1672. [PMID: 36215635] [DOI: 10.1364/josaa.465889]
Abstract
Fixational eye motion typically includes translation and torsion. In the registration of images from adaptive optics scanning light ophthalmoscopy (AOSLO), image rotation due to eye torsion and/or head rotation is often ignored because (a) the amount of rotation is trivial compared to translation within a short imaging or recording duration and (b) the computational cost increases substantially when the registration algorithm must detect rotation and translation simultaneously. However, rotation becomes critically important in cases such as long exposures, functional measurements, and precise motion tracking. We developed a fast method to detect and correct rotation in AOSLO images, together with detection of strip-level translation. The computational cost of rotation detection and correction alone is about 5 ms/frame (512 × 512 pixels) on an NVIDIA GTX 960M GPU. Image quality was compared with and without rotation correction in 10 healthy human subjects and 8 diseased eyes, with a total of 180 videos. The results show that residual image motion between the reference images and the registered images with rotation correction is a fraction of that without rotation correction, with ratios of 0.74-0.89 at the image center and 0.37-0.51 at the four corners of the images.
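One inexpensive way to obtain rotation from strip-level translation, in the spirit of the approach above though not necessarily the authors' exact method, is to compare the vertical shifts measured at the two ends of a strip: for small angles, the rotation is roughly the differential vertical shift divided by the horizontal separation.

```python
import numpy as np

def rotation_from_end_shifts(left_shift, right_shift, separation_px):
    """Small-angle estimate of in-plane rotation (degrees) from the
    (dy, dx) shifts measured at two patches separated horizontally by
    `separation_px` pixels. A differential vertical shift between the
    patch centers implies the strip is rotated in the image plane."""
    dy = right_shift[0] - left_shift[0]
    return float(np.degrees(np.arctan2(dy, separation_px)))
```

A 0.5° rotation over a 400-pixel separation produces a differential vertical shift of about 3.5 pixels, which is why ignoring rotation becomes untenable in long-exposure averaging.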
8
Chen M, Jiang YY, Gee JC, Brainard DH, Morgan JIW. Automated Assessment of Photoreceptor Visibility in Adaptive Optics Split-Detection Images Using Edge Detection. Transl Vis Sci Technol 2022; 11:25. [PMID: 35608855] [PMCID: PMC9145033] [DOI: 10.1167/tvst.11.5.25]
Abstract
Purpose: Adaptive optics scanning laser ophthalmoscopy (AOSLO) is a high-resolution imaging modality that allows measurements of cellular-level retinal changes in living patients. In retinal diseases, the visibility of photoreceptors in AOSLO images is affected by pathology, patient motion, and optics, which can lead to variability in analyses of the photoreceptor mosaic. Current best practice for AOSLO mosaic quantification requires manual assessment of photoreceptor visibility across overlapping images, a laborious and time-consuming task.
Methods: We propose an automated measure for quantification of photoreceptor visibility in AOSLO. Our method detects salient edge features, which can represent visible photoreceptor boundaries in each image. We evaluate our measure against two human graders and two standard automated image quality assessment algorithms.
Results: We evaluate the accuracy of pairwise ordering (PO) and the correlation of ordinal rankings (OR) of photoreceptor visibility in 29 retinal regions, taken from five subjects with choroideremia. The proposed measure had high association with manual assessments (Grader 1: PO = 0.71, OR = 0.61; Grader 2: PO = 0.67, OR = 0.62), which is comparable with intergrader reliability (PO = 0.76, OR = 0.75) and outperforms the top standard approach (PO = 0.57; OR = 0.46).
Conclusions: Our edge-based measure can automatically assess photoreceptor visibility and order overlapping images within AOSLO montages. This can significantly reduce the manual labor required to generate high-quality AOSLO montages and enables higher throughput for quantitative studies of photoreceptors.
Translational Relevance: Automated assessment of photoreceptor visibility allows us to more rapidly quantify photoreceptor morphology in the living eye. This has applications to ophthalmic medicine by allowing detailed characterization of retinal degenerations, thus yielding potential biomarkers of treatment safety and efficacy.
Affiliation(s)
- Min Chen
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Yu You Jiang
- Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA
- Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, Philadelphia, PA, USA
- James C Gee
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- David H Brainard
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Jessica I W Morgan
- Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA
- Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, Philadelphia, PA, USA
9
Gaffney M, Cooper RF, Cava JA, Follett HM, Salmon AE, Freling S, Yu CT, Merriman DK, Carroll J. Cone photoreceptor reflectance variation in the northern tree shrew and thirteen-lined ground squirrel. Exp Biol Med (Maywood) 2021; 246:2192-2201. [PMID: 34308656] [DOI: 10.1177/15353702211029582]
Abstract
In vivo images of human cone photoreceptors have been shown to vary in their reflectance both spatially and temporally. While it is generally accepted that the unique anatomy and physiology of the photoreceptors themselves drives this behavior, the exact mechanisms have not been fully elucidated as most studies on these phenomena have been limited to the human retina. Unlike humans, animal models offer the ability to experimentally manipulate the retina and perform direct in vivo and ex vivo comparisons. The thirteen-lined ground squirrel and northern tree shrew are two emerging animal models being used in vision research. Both models feature cone-dominant retinas, overcoming a key limitation of traditional rodent models. Additionally, each possesses unique but well-documented anatomical differences in cone structure compared to human cones, which can be leveraged to further constrain theoretical models of light propagation within photoreceptors. Here we sought to characterize the spatial and temporal reflectance behavior of cones in these species. Adaptive optics scanning light ophthalmoscopy (AOSLO) was used to non-invasively image the photoreceptors of both species at 5 to 10 min intervals over the span of 18 to 25 min. The reflectance of individual cone photoreceptors was measured over time, and images at individual time points were used to assess the variability of cone reflectance across the cone mosaic. Variability in spatial and temporal photoreceptor reflectance was observed in both species, with similar behavior to that seen in human AOSLO images. Despite the unique cone structure in these animals, these data suggest a common origin of photoreceptor reflectance behavior across species. Such data may help constrain models of the cellular origins of photoreceptor reflectance signals. These animal models provide an experimental platform to further explore the morphological origins of light capture and propagation.
Affiliation(s)
- Mina Gaffney
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Robert F Cooper
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Department of Biomedical Engineering, Marquette University, Milwaukee, WI 53233, USA
- Jenna A Cava
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Hannah M Follett
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Alexander E Salmon
- Department of Cell Biology, Neurobiology, & Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Translational Imaging Innovations, Inc., Hickory, NC 28601, USA
- Susan Freling
- Max Planck Florida Institute for Neuroscience, Jupiter, FL 33458, USA
- Ching T Yu
- Department of Cell Biology, Neurobiology, & Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Dana K Merriman
- Department of Biology, University of Wisconsin Oshkosh, Oshkosh, WI 54901, USA
- Joseph Carroll
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Department of Biomedical Engineering, Marquette University, Milwaukee, WI 53233, USA
- Department of Cell Biology, Neurobiology, & Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
10
Abstract
The high power of the eye and optical components used to image it result in "static" distortion, remaining constant across acquired retinal images. In addition, raster-based systems sample points or lines of the image over time, suffering from "dynamic" distortion due to the constant motion of the eye. We recently described an algorithm which corrects for the latter problem but is entirely blind to the former. Here, we describe a new procedure termed "DIOS" (Dewarp Image by Oblique Shift) to remove static distortion of arbitrary type. Much like the dynamic correction method, it relies on locating the same tissue in multiple frames acquired as the eye moves through different gaze positions. Here, the resultant maps of pixel displacement are used to form a sparse system of simultaneous linear equations whose solution gives the common warp seen by all frames. We show that the method successfully handles torsional movement of the eye. We also show that the output of the previously described dynamic correction procedure may be used as input for this new procedure, recovering an image of the tissue that is, in principle, a faithful replica free of any type of distortion. The method could be extended beyond ocular imaging, to any kind of imaging system in which the image can move or be made to move across the detector.
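The heart of the procedure — solving a sparse system of pixel-displacement constraints for a warp common to all frames — can be illustrated in one dimension. This sketch is a simplification, not the published DIOS code: each measurement states how the static warp differs between two locations imaged at different gaze positions, and a least-squares solve recovers the warp up to a constant offset.

```python
import numpy as np

def solve_static_warp(pairs, n):
    """Recover a 1-D static warp w[0..n-1] from difference measurements.
    `pairs` is a list of (i, j, m) meaning m = w[i] - w[j], one equation
    per observed pixel displacement between gaze positions. The global
    offset is fixed by anchoring w[0] = 0."""
    rows, rhs = [], []
    for i, j, m in pairs:
        r = np.zeros(n)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        rhs.append(m)
    anchor = np.zeros(n)
    anchor[0] = 1.0                  # w[0] = 0 removes the free offset
    rows.append(anchor)
    rhs.append(0.0)
    w, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return w
```

In the real 2-D problem the same structure holds, only with far more unknowns and equations, which is why the paper's system is sparse and is solved simultaneously for all frames.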
Affiliation(s)
- Phillip Bedggood
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, Australia
- Andrew Metha
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, Australia
11
Salmon AE, Cooper RF, Chen M, Higgins B, Cava JA, Chen N, Follett HM, Gaffney M, Heitkotter H, Heffernan E, Schmidt TG, Carroll J. Automated image processing pipeline for adaptive optics scanning light ophthalmoscopy. Biomed Opt Express 2021; 12:3142-3168. [PMID: 34221651] [PMCID: PMC8221964] [DOI: 10.1364/boe.418079]
Abstract
To mitigate the substantial post-processing burden associated with adaptive optics scanning light ophthalmoscopy (AOSLO), we have developed an open-source, automated AOSLO image processing pipeline with both "live" and "full" modes. The live mode provides feedback during acquisition, while the full mode is intended to automatically integrate the copious disparate modules currently used in generating analyzable montages. The mean (±SD) lag between initiation and montage placement for the live pipeline was 54.6 ± 32.7 s. The full pipeline reduced overall human operator time by 54.9 ± 28.4%, with no significant difference in resultant cone density metrics. The reduced overhead decreases both the technical burden and operating cost of AOSLO imaging, increasing overall clinical accessibility.
Affiliation(s)
- Alexander E. Salmon
  - Cell Biology, Neurobiology, and Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
  - Translational Imaging Innovations, Inc., Hickory, NC 28601, USA
- Robert F. Cooper
  - Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI 53233, USA
  - Ophthalmology and Visual Sciences, Medical College of Wisconsin, 8701 W. Watertown Plank Rd., Milwaukee, WI 53226, USA
- Min Chen
  - Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Brian Higgins
  - Ophthalmology and Visual Sciences, Medical College of Wisconsin, 8701 W. Watertown Plank Rd., Milwaukee, WI 53226, USA
- Jenna A. Cava
  - Ophthalmology and Visual Sciences, Medical College of Wisconsin, 8701 W. Watertown Plank Rd., Milwaukee, WI 53226, USA
- Nickolas Chen
  - Ophthalmology and Visual Sciences, Medical College of Wisconsin, 8701 W. Watertown Plank Rd., Milwaukee, WI 53226, USA
- Hannah M. Follett
  - Ophthalmology and Visual Sciences, Medical College of Wisconsin, 8701 W. Watertown Plank Rd., Milwaukee, WI 53226, USA
- Mina Gaffney
  - Ophthalmology and Visual Sciences, Medical College of Wisconsin, 8701 W. Watertown Plank Rd., Milwaukee, WI 53226, USA
- Heather Heitkotter
  - Cell Biology, Neurobiology, and Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Elizabeth Heffernan
  - Ophthalmology and Visual Sciences, Medical College of Wisconsin, 8701 W. Watertown Plank Rd., Milwaukee, WI 53226, USA
- Taly Gilat Schmidt
  - Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI 53233, USA
- Joseph Carroll
  - Cell Biology, Neurobiology, and Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
  - Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI 53233, USA
  - Ophthalmology and Visual Sciences, Medical College of Wisconsin, 8701 W. Watertown Plank Rd., Milwaukee, WI 53226, USA

12
Young LK, Smithson HE. Emulated retinal image capture (ERICA) to test, train and validate processing of retinal images. Sci Rep 2021; 11:11225. [PMID: 34045507 PMCID: PMC8160341 DOI: 10.1038/s41598-021-90389-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Accepted: 05/04/2021] [Indexed: 12/13/2022] Open
Abstract
High resolution retinal imaging systems, such as adaptive optics scanning laser ophthalmoscopes (AOSLO), are increasingly being used for clinical research and fundamental studies in neuroscience. These systems offer unprecedented spatial and temporal resolution of retinal structures in vivo. However, a major challenge is the development of robust and automated methods for processing and analysing these images. We present ERICA (Emulated Retinal Image CApture), a simulation tool that generates realistic synthetic images of the human cone mosaic, mimicking images that would be captured by an AOSLO, with specified image quality and with corresponding ground-truth data. The simulation includes a self-organising mosaic of photoreceptors, the eye movements an observer might make during image capture, and data capture through a real system incorporating diffraction, residual optical aberrations and noise. The retinal photoreceptor mosaics generated by ERICA have a similar packing geometry to human retina, as determined by expert labelling of AOSLO images of real eyes. In the current implementation ERICA outputs convincingly realistic en face images of the cone photoreceptor mosaic but extensions to other imaging modalities and structures are also discussed. These images and associated ground-truth data can be used to develop, test and validate image processing and analysis algorithms or to train and validate machine learning approaches. The use of synthetic images has the advantage that neither access to an imaging system, nor to human participants is necessary for development.
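A toy version of the core idea: generate hexagonally packed cone centres with positional jitter, render each as a Gaussian reflectance spot plus noise, and keep the centres as ground truth for validating detection algorithms. This sketch is only a crude stand-in for ERICA (which models mosaic self-organisation, eye motion, and the real optical path); all parameter values here are invented for illustration.

```python
import numpy as np

def synth_cone_image(size=128, spacing=8.0, jitter=0.6, sigma=2.0,
                     noise=0.02, seed=1):
    """Hexagonal lattice of jittered cone centres rendered as Gaussian
    spots; returns the image and the ground-truth coordinates."""
    rng = np.random.default_rng(seed)
    centres = []
    row_h = spacing * np.sqrt(3) / 2           # vertical pitch of hex rows
    y, row = 0.0, 0
    while y < size:
        x0 = spacing / 2 if row % 2 else 0.0   # alternate rows are offset
        for x in np.arange(x0, size, spacing):
            centres.append((x + rng.normal(0, jitter),
                            y + rng.normal(0, jitter)))
        y += row_h
        row += 1
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for cx, cy in centres:                     # render each cone as a spot
        img += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    img += rng.normal(0, noise, img.shape)     # detector noise
    return img, np.array(centres)

img, truth = synth_cone_image()
```

Because the centres are known exactly, any cone-detection output can be scored against `truth` without manual labelling.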
Affiliation(s)
- Laura K Young
  - Biosciences Institute, Newcastle University, Newcastle, NE2 4HH, UK
- Hannah E Smithson
  - Department of Experimental Psychology, University of Oxford, Oxford, OX2 6GG, UK

13
Zhang M, Gofas-Salas E, Leonard BT, Rui Y, Snyder VC, Reecher HM, Mecê P, Rossi EA. Strip-based digital image registration for distortion minimization and robust eye motion measurement from scanned ophthalmic imaging systems. BIOMEDICAL OPTICS EXPRESS 2021; 12:2353-2372. [PMID: 33996234 PMCID: PMC8086453 DOI: 10.1364/boe.418070] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 03/13/2021] [Accepted: 03/16/2021] [Indexed: 05/22/2023]
Abstract
Retinal image-based eye motion measurement from scanned ophthalmic imaging systems, such as scanning laser ophthalmoscopy, has allowed for precise real-time eye tracking at sub-micron resolution. However, the constraints of real-time tracking result in a high error tolerance that is detrimental for some eye motion measurement and imaging applications. We show here that eye motion can be extracted from image sequences when these constraints are lifted, and all data is available at the time of registration. Our approach identifies and discards distorted frames, detects coarse motion to generate a synthetic reference frame and then uses it for fine scale motion tracking with improved sensitivity over a larger area. We demonstrate its application here to tracking scanning laser ophthalmoscopy (TSLO) and adaptive optics scanning light ophthalmoscopy (AOSLO), and show that it can successfully capture most of the eye motion across each image sequence, leaving only between 0.1-3.4% of non-blink frames untracked, while simultaneously minimizing image distortions induced from eye motion. These improvements will facilitate precise measurement of fixational eye movements (FEMs) in TSLO and longitudinal tracking of individual cells in AOSLO.
Affiliation(s)
- Min Zhang (contributed equally)
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Elena Gofas-Salas (contributed equally)
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Bianca T Leonard
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Yuhua Rui
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
  - Eye Center of Xiangya Hospital, Central South University; Hunan Key Laboratory of Ophthalmology; Changsha, Hunan 410008, China
- Valerie C Snyder
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Hope M Reecher
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Pedro Mecê
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Ethan A Rossi
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
  - Department of Bioengineering, University of Pittsburgh Swanson School of Engineering, Pittsburgh, PA 15261, USA
  - McGowan Institute for Regenerative Medicine, University of Pittsburgh, Pittsburgh, PA 15260, USA

14
Lu Y, Son T, Kim TH, Le D, Yao X. Virtually structured detection enables super-resolution ophthalmoscopy of rod and cone photoreceptors in human retina. Quant Imaging Med Surg 2021; 11:1060-1069. [PMID: 33654677 PMCID: PMC7829177 DOI: 10.21037/qims-20-542] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2020] [Accepted: 08/26/2020] [Indexed: 11/06/2022]
Abstract
BACKGROUND High-resolution imaging is desirable for advanced study and clinical management of retinal diseases. However, the spatial resolution of retinal imaging has been limited by the available numerical aperture and the optical aberration of the ocular optics. This study aims to develop and validate virtually structured detection (VSD) to surpass the diffraction limit and improve resolution in in vivo retinal imaging of awake humans. METHODS A rapid line-scanning laser ophthalmoscope (SLO) was constructed for in vivo retinal imaging. A high-speed (25,000 Hz) camera recorded the two-dimensional (2D) light reflectance profile corresponding to each focused line illumination. VSD was applied to the 2D light reflectance profiles for super-resolution reconstruction. Because each 2D light reflectance profile was recorded within 40 μs, intra-frame blur due to eye movements can be ignored. Digital registration was implemented to further compensate for inter-frame eye movements before VSD processing. Based on digital processing, the modulation transfer function (MTF) of the imaging system was derived for objective identification of the cut-off frequency of the ocular optics, which is essential for robust VSD processing and reliable super-resolution imaging. Dynamic motility analysis of the super-resolution images was implemented to further enhance the imaging contrast of retinal rod and cone photoreceptors. RESULTS The VSD-based super-resolution SLO significantly improved image quality compared with equivalent wide-field imaging. In vivo observation of individual retinal photoreceptors was demonstrated unambiguously. Dynamic motility analysis of the super-resolution images enhanced the contrast of retinal rod and cone photoreceptors and revealed sub-cellular structures in cone photoreceptors.
CONCLUSIONS In conjunction with rapid line-scan imaging and digital registration to minimize the effect of eye movements, VSD enabled resolution improvement sufficient to observe individual retinal photoreceptors without adaptive optics (AO). An objective method was developed to identify the MTF and quantitatively estimate the cut-off frequency required for robust VSD processing.
Affiliation(s)
- Yiming Lu
  - Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA
- Taeyoon Son
  - Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA
- Tae-Hoon Kim
  - Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA
- David Le
  - Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA
- Xincheng Yao
  - Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA
  - Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA

15
Kurokawa K, Crowell JA, Do N, Lee JJ, Miller DT. Multi-reference global registration of individual A-lines in adaptive optics optical coherence tomography retinal images. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-200266R. [PMID: 33410310 PMCID: PMC7787477 DOI: 10.1117/1.jbo.26.1.016001] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/21/2020] [Accepted: 12/10/2020] [Indexed: 05/18/2023]
Abstract
SIGNIFICANCE Adaptive optics optical coherence tomography (AO-OCT) technology enables non-invasive, high-resolution three-dimensional (3D) imaging of the retina and promises earlier detection of ocular disease. However, AO-OCT data are corrupted by eye-movement artifacts that must be removed in post-processing, a process rendered time-consuming by the immense quantity of data. AIM To efficiently remove eye-movement artifacts at the level of individual A-lines, including those present in any individual reference volume. APPROACH We developed a registration method that cascades (1) a 3D B-scan registration algorithm with (2) a global A-line registration algorithm for correcting torsional eye movements and image scaling and generating global motion-free coordinates. The first algorithm corrects 3D translational eye movements to a single reference volume, accelerated using parallel computing. The second algorithm combines outputs of multiple runs of the first algorithm using different reference volumes followed by an affine transformation, permitting registration of all images to a global coordinate system at the level of individual A-lines. RESULTS The 3D B-scan algorithm estimates and corrects 3D translational motions with high registration accuracy and robustness, even for volumes containing microsaccades. Averaging registered volumes improves our image quality metrics by up to 22 dB. Implementation in CUDA™ on a graphics processing unit registers a 512 × 512 × 512 volume in only 10.6 s, 150 times faster than MATLAB™ on a central processing unit. The global A-line algorithm minimizes image distortion, improves regularity of the cone photoreceptor mosaic, and supports enhanced visualization of low-contrast retinal cellular features. Averaging registered volumes improves our image quality by up to 9.4 dB. It also permits extending the imaging field of view (∼2.1×) and depth of focus (∼5.6×) beyond what is attainable with single-reference registration.
CONCLUSIONS We can efficiently correct eye motion in all three dimensions at the level of individual A-lines using a global coordinate system.
Affiliation(s)
- Kazuhiro Kurokawa
  - Indiana University, School of Optometry, Bloomington, Indiana, United States
- James A. Crowell
  - Indiana University, School of Optometry, Bloomington, Indiana, United States
- Nhan Do
  - Purdue School of Engineering and Technology, Indiana University-Purdue University Indianapolis, Indianapolis, Indiana, United States
  - Google, Mountain View, California, United States
- John J. Lee
  - Purdue School of Engineering and Technology, Indiana University-Purdue University Indianapolis, Indianapolis, Indiana, United States
- Donald T. Miller
  - Indiana University, School of Optometry, Bloomington, Indiana, United States

16
Athwal A, Balaratnasingam C, Yu DY, Heisler M, Sarunic MV, Ju MJ. Optimizing 3D retinal vasculature imaging in diabetic retinopathy using registration and averaging of OCT-A. BIOMEDICAL OPTICS EXPRESS 2021; 12:553-570. [PMID: 33659089 PMCID: PMC7899521 DOI: 10.1364/boe.408590] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Revised: 11/06/2020] [Accepted: 12/07/2020] [Indexed: 05/29/2023]
Abstract
High resolution visualization of optical coherence tomography (OCT) and OCT angiography (OCT-A) data is required to fully take advantage of the imaging modality's three-dimensional nature. However, artifacts induced by patient motion often degrade OCT-A data quality. This is especially true for patients with deteriorated focal vision, such as those with diabetic retinopathy (DR). We propose a novel methodology for software-based OCT-A motion correction achieved through serial acquisition, volumetric registration, and averaging. Motion artifacts are removed via a multi-step 3D registration process, and visibility is significantly enhanced through volumetric averaging. We demonstrate that this method permits clear 3D visualization of retinal pathologies and their surrounding features, 3D visualization of inner retinal capillary connections, as well as reliable visualization of the choriocapillaris layer.
Affiliation(s)
- Arman Athwal
  - School of Engineering Science, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Chandrakumar Balaratnasingam
  - Centre for Ophthalmology and Visual Science, University of Western Australia, Perth, Australia
  - Lions Eye Institute, Nedlands, Western Australia, Australia
- Dao-Yi Yu
  - Centre for Ophthalmology and Visual Science, University of Western Australia, Perth, Australia
  - Lions Eye Institute, Nedlands, Western Australia, Australia
- Morgan Heisler
  - School of Engineering Science, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Marinko V. Sarunic
  - School of Engineering Science, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Myeong Jin Ju
  - School of Engineering Science, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
  - University of British Columbia, Department of Ophthalmology and Visual Sciences, 2550 Willow Street, Vancouver, BC, V5Z 3N9, Canada
  - University of British Columbia, School of Biomedical Engineering, 251-2222 Health Sciences Mall, Vancouver, BC, V6T 1Z3, Canada

17
Li Z, Pandiyan VP, Maloney-Bertelli A, Jiang X, Li X, Sabesan R. Correcting intra-volume distortion for AO-OCT using 3D correlation based registration. OPTICS EXPRESS 2020; 28:38390-38409. [PMID: 33379652 PMCID: PMC7771894 DOI: 10.1364/oe.410374] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/17/2020] [Revised: 11/15/2020] [Accepted: 11/19/2020] [Indexed: 05/18/2023]
Abstract
Adaptive optics (AO) based ophthalmic imagers, such as scanning laser ophthalmoscopes (SLO) and optical coherence tomography (OCT), are used to evaluate the structure and function of the retina with high contrast and resolution. Fixational eye movements during a raster-scanned image acquisition lead to intra-frame and intra-volume distortion, resulting in an inaccurate reproduction of the underlying retinal structure. For three-dimensional (3D) AO-OCT, segmentation-based and 3D correlation based registration methods have been applied to correct eye motion and achieve a high signal-to-noise ratio registered volume. This involves first selecting a reference volume, either manually or automatically, and registering the image/volume stream against the reference using correlation methods. However, even within the chosen reference volume, involuntary eye motion persists and affects the accuracy with which the 3D retinal structure is finally rendered. In this article, we introduce reference volume distortion correction for AO-OCT using 3D correlation based registration and demonstrate a significant improvement in registration performance across several metrics. Conceptually, the general paradigm follows that developed previously for intra-frame distortion correction for 2D raster-scanned images, as in an AOSLO, but extended here across all three spatial dimensions via 3D correlation analyses. We performed a frequency analysis of eye motion traces before and after intra-volume correction and revealed how periodic artifacts in eye motion estimates are effectively reduced upon correction. Further, we quantified how the intra-volume distortions and periodic artifacts in the eye motion traces, in general, decrease with increasing AO-OCT acquisition speed. Overall, 3D correlation based registration with intra-volume correction significantly improved the visualization of retinal structure and estimation of fixational eye movements.
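The basic building block of correlation-based volume registration is estimating the translation between two volumes from the peak of their cross-correlation, computed efficiently with FFTs. The sketch below handles integer, circular shifts only; the actual AO-OCT pipelines described here additionally perform strip-wise tracking, subpixel interpolation, and reference-volume distortion correction.

```python
import numpy as np

def shift_3d(ref, mov):
    """Return the integer 3-D displacement of `mov` relative to `ref`
    (the tuple d such that mov == np.roll(ref, d, axis=(0, 1, 2))),
    found at the peak of the FFT-based circular cross-correlation."""
    cc = np.fft.ifftn(np.conj(np.fft.fftn(ref)) * np.fft.fftn(mov)).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # indices past the half-size wrap around to negative shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, cc.shape))

rng = np.random.default_rng(4)
vol = rng.random((32, 32, 32))
moved = np.roll(vol, (3, -5, 2), axis=(0, 1, 2))
d = shift_3d(vol, moved)   # -> (3, -5, 2)
```

For a random, texture-rich volume the correlation peak is sharp, so the shift is recovered exactly; real data require windowing and care at volume edges.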
Affiliation(s)
- Zhenghan Li (contributed equally)
  - Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu, Sichuan 610209, China
  - Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu, Sichuan 610209, China
  - University of Chinese Academy of Sciences, Beijing 100049, China
  - Department of Ophthalmology, University of Washington, Seattle, Washington 98109, USA
- Vimal Prabhu Pandiyan (contributed equally)
  - Department of Ophthalmology, University of Washington, Seattle, Washington 98109, USA
- Xiaoyun Jiang
  - Department of Ophthalmology, University of Washington, Seattle, Washington 98109, USA
- Xinyang Li
  - Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu, Sichuan 610209, China
  - Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu, Sichuan 610209, China
- Ramkumar Sabesan
  - Department of Ophthalmology, University of Washington, Seattle, Washington 98109, USA

18
Wynne N, Carroll J, Duncan JL. Promises and pitfalls of evaluating photoreceptor-based retinal disease with adaptive optics scanning light ophthalmoscopy (AOSLO). Prog Retin Eye Res 2020; 83:100920. [PMID: 33161127 DOI: 10.1016/j.preteyeres.2020.100920] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Revised: 10/28/2020] [Accepted: 10/31/2020] [Indexed: 12/15/2022]
Abstract
Adaptive optics scanning light ophthalmoscopy (AOSLO) allows visualization of the living human retina with exquisite single-cell resolution. This technology has improved our understanding of normal retinal structure and revealed pathophysiological details of a number of retinal diseases. Despite the remarkable capabilities of AOSLO, it has not seen the widespread commercial adoption and mainstream clinical success of other modalities developed in a similar time frame. Nevertheless, continued advancements in AOSLO hardware and software have expanded use to a broader range of patients. Current devices enable imaging of a number of different retinal cell types, with recent improvements in stimulus and detection schemes enabling monitoring of retinal function, microscopic structural changes, and even subcellular activity. This has positioned AOSLO for use in clinical trials, primarily as exploratory outcome measures or biomarkers that can be used to monitor disease progression or therapeutic response. AOSLO metrics could facilitate patient selection for such trials, to refine inclusion criteria or to guide the choice of therapy, depending on the presence, absence, or functional viability of specific cell types. Here we explore the potential of AOSLO retinal imaging by reviewing clinical applications as well as some of the pitfalls and barriers to more widespread clinical adoption.
Affiliation(s)
- Niamh Wynne
  - Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Joseph Carroll
  - Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
  - Department of Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin, Milwaukee, WI, USA
  - Department of Biomedical Engineering, Medical College of Wisconsin, Milwaukee, WI, USA
- Jacque L Duncan
  - Department of Ophthalmology, University of California, San Francisco, CA, USA

19
Morgan JIW, Chen M, Huang AM, Jiang YY, Cooper RF. Cone Identification in Choroideremia: Repeatability, Reliability, and Automation Through Use of a Convolutional Neural Network. Transl Vis Sci Technol 2020; 9:40. [PMID: 32855844 PMCID: PMC7424931 DOI: 10.1167/tvst.9.2.40] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Accepted: 04/10/2020] [Indexed: 11/24/2022] Open
Abstract
Purpose Adaptive optics imaging has enabled the visualization of photoreceptors both in health and disease. However, there remains a need for automated accurate cone photoreceptor identification in images of disease. Here, we apply an open-source convolutional neural network (CNN) to automatically identify cones in images of choroideremia (CHM). We further compare the results to the repeatability and reliability of manual cone identifications in CHM. Methods We used split-detection adaptive optics scanning laser ophthalmoscopy to image the inner segment cone mosaic of 17 patients with CHM. Cones were manually identified twice by one experienced grader and once each by two additional experienced graders in 204 regions of interest (ROIs). An open-source CNN either pre-trained on normal images or trained on CHM images automatically identified cones in the ROIs. True and false positive rates and Dice's coefficient were used to determine the agreement in cone locations between data sets. Intraclass correlation coefficient was used to assess agreement in bound cone density. Results Intra- and intergrader agreement for cone density is high in CHM. CNN performance increased when it was trained on CHM images in comparison to normal, but had lower agreement than manual grading. Conclusions Manual cone identifications and cone density measurements are repeatable and reliable for images of CHM. CNNs show promise for automated cone selections, although additional improvements are needed to equal the accuracy of manual measurements. Translational Relevance These results are important for designing and interpreting longitudinal studies of cone mosaic metrics in disease progression or treatment intervention in CHM.
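Agreement between two sets of cone coordinates is typically scored by matching marks within a small distance tolerance and then computing true/false positives and Dice's coefficient. A minimal greedy-matching sketch (the tolerance value and matching order here are arbitrary illustrative choices, not those of the paper):

```python
import numpy as np

def cone_agreement(auto_pts, manual_pts, tol=2.0):
    """Greedily match automatic to manual cone marks within `tol` pixels;
    return (true-positive rate, false-positive rate, Dice coefficient)."""
    unmatched = set(range(len(manual_pts)))
    tp = 0
    for ax, ay in auto_pts:
        best, best_d = None, tol
        for m in unmatched:                       # nearest unmatched manual mark
            d = np.hypot(ax - manual_pts[m][0], ay - manual_pts[m][1])
            if d <= best_d:
                best, best_d = m, d
        if best is not None:
            tp += 1
            unmatched.discard(best)               # each mark matched at most once
    fp = len(auto_pts) - tp                       # spurious automatic marks
    fn = len(manual_pts) - tp                     # missed manual marks
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    tpr = tp / len(manual_pts) if manual_pts else 1.0
    fpr = fp / len(auto_pts) if auto_pts else 0.0
    return tpr, fpr, dice

manual = [(0, 0), (10, 0), (0, 10)]
auto = [(0.5, 0), (10, 0.5), (50, 50)]            # two hits, one spurious mark
tpr, fpr, dice = cone_agreement(auto, manual)
```

In the example, two of three automatic marks fall within tolerance of a manual mark, giving a Dice coefficient of 2/3.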
Affiliation(s)
- Jessica I W Morgan
  - Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA
  - Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, Philadelphia, PA, USA
- Min Chen
  - Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Andrew M Huang
  - Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA
- Yu You Jiang
  - Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA
  - Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, Philadelphia, PA, USA
- Robert F Cooper
  - Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
  - Currently at the Joint Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, and the Department of Ophthalmology, Medical College of Wisconsin, Milwaukee, WI, USA

20
Abstract
The study of fixational eye motion has implications for the neural and computational underpinnings of vision. One component of fixational eye motion is tremor, a high-frequency oscillatory jitter reported to be anywhere from ∼11-60 arcseconds in amplitude. In order to isolate the effects of tremor on the retinal image directly and in the absence of optical blur, high-frequency, high-resolution eye traces were collected in six subjects from videos recorded with an adaptive optics scanning laser ophthalmoscope. Videos were acquired while subjects engaged in an active fixation task where they fixated on a tumbling E stimulus and reported changes in its orientation. Spectral analysis was conducted on periods of ocular drift, with all drifts being concatenated together after removal of saccades from the trace. The resultant amplitude spectra showed a slight deviation from the traditional 1/f nature of ocular drift in the frequency range of 50-100 Hz, which is indicative of tremor. However, this deviation rarely exceeded 1 arcsecond, and the consequent standard deviation of retinal image motion over the tremor band (50-100 Hz) was just over 5 arcseconds. Given such a small amplitude, it is unlikely that tremor contributes in any meaningful way to the visual percept.
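The analysis can be mimicked on synthetic data: build a drift-like random walk, superimpose a small 80 Hz oscillation standing in for tremor, and inspect the one-sided amplitude spectrum over the 50-100 Hz band. The sampling rate, amplitudes, and durations below are invented for illustration and are not taken from the study.

```python
import numpy as np

fs = 960.0                                      # assumed trace sampling rate, Hz
t = np.arange(0, 4.0, 1 / fs)
rng = np.random.default_rng(2)

drift = np.cumsum(rng.normal(0, 0.5, t.size))   # 1/f-like ocular drift, arcsec
tremor = 1.0 * np.sin(2 * np.pi * 80 * t)       # ~1 arcsec "tremor" at 80 Hz
trace = drift + tremor

freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = np.abs(np.fft.rfft(trace)) * 2 / t.size   # one-sided amplitude spectrum
band = (freqs >= 50) & (freqs <= 100)
peak_hz = freqs[band][np.argmax(amp[band])]     # tremor shows as a band peak
# RMS of image motion attributable to the 50-100 Hz band
rms_band = np.sqrt(np.sum(amp[band] ** 2) / 2)
```

The drift's 1/f-like background is small at these frequencies, so the injected oscillation dominates both the band peak and the band-limited RMS.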
Affiliation(s)
- Norick R Bowers
  - School of Optometry and Vision Science Graduate Group, University of California-Berkeley, Berkeley, CA, USA
- Alexandra E Boehm
  - School of Optometry and Vision Science Graduate Group, University of California-Berkeley, Berkeley, CA, USA
- Austin Roorda
  - School of Optometry and Vision Science Graduate Group, University of California-Berkeley, Berkeley, CA, USA

21
Mecê P, Scholler J, Groux K, Boccara C. High-resolution in-vivo human retinal imaging using full-field OCT with optical stabilization of axial motion. BIOMEDICAL OPTICS EXPRESS 2020; 11:492-504. [PMID: 32010530 PMCID: PMC6968740 DOI: 10.1364/boe.381398] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/23/2019] [Revised: 12/12/2019] [Accepted: 12/16/2019] [Indexed: 05/05/2023]
Abstract
Time-domain full-field OCT (FF-OCT) represents an imaging modality capable of recording high-speed en-face sections of a sample at a given depth. One of the biggest challenges in transferring this technique to in-vivo human retinal imaging is the presence of continuous involuntary head and eye axial motion during image acquisition. In this paper, we demonstrate a solution to this problem by implementing optical stabilization in an FF-OCT system. This was made possible by combining an FF-OCT system, an SD-OCT system, and a high-speed voice-coil translation stage. B-scans generated by the SD-OCT were used to measure the axial position of the retina and to drive the position of the high-speed voice-coil translation stage on which the FF-OCT reference arm is mounted. Closed-loop optical stabilization reduced the RMS error by a factor of 7, significantly increasing FF-OCT image acquisition efficiency. By these means, we demonstrate the capacity of FF-OCT to resolve the cone mosaic as close as 1.5° from the fovea center with high consistency and without using adaptive optics.
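The benefit of closed-loop axial tracking can be sketched with a toy proportional controller: the reference-arm stage is servoed toward the most recently measured axial position, shrinking the residual error relative to the uncorrected motion. The gains, rates, and motion amplitudes here are made up for illustration; the real system senses position from SD-OCT B-scans and acts through a voice-coil stage.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 200.0                                   # assumed control-loop rate, Hz
t = np.arange(0, 5, 1 / fs)
# axial eye/head motion: slow oscillation plus a random walk, in micrometres
eye = 20 * np.sin(2 * np.pi * 1.2 * t) + np.cumsum(rng.normal(0, 0.3, t.size))

stage = np.zeros_like(eye)
gain = 0.5                                   # proportional gain
for i in range(1, eye.size):
    # one-sample sensing latency: act on the previously measured error
    stage[i] = stage[i - 1] + gain * (eye[i - 1] - stage[i - 1])

open_rms = np.sqrt(np.mean(eye ** 2))              # no stabilization
closed_rms = np.sqrt(np.mean((eye - stage) ** 2))  # residual with tracking
```

Because the motion is slow relative to the loop rate, even this crude proportional loop reduces the residual RMS severalfold, echoing the reported factor-of-7 improvement in spirit (not in numbers).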
22
Chen M, Cooper RF, Gee JC, Brainard DH, Morgan JIW. Automatic longitudinal montaging of adaptive optics retinal images using constellation matching. BIOMEDICAL OPTICS EXPRESS 2019; 10:6476-6496. [PMID: 31853412 PMCID: PMC6913413 DOI: 10.1364/boe.10.006476] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/30/2019] [Revised: 11/15/2019] [Accepted: 11/18/2019] [Indexed: 05/04/2023]
Abstract
Adaptive optics (AO) scanning laser ophthalmoscopy offers a non-invasive approach for observing the retina at a cellular level. Its high-resolution capabilities have direct application to monitoring and treating retinal diseases by providing quantitative assessment of cone health and density across time. However, accurate longitudinal analysis of AO images requires that AO images from different sessions be aligned, such that cell-to-cell correspondences can be established between timepoints. Such alignment is currently done manually, a time-intensive task that is restrictive for large longitudinal AO studies. Automated longitudinal montaging of AO images remains a challenge because the intensity pattern of imaged cone mosaics can vary significantly, even across short timespans. This limitation prevents existing intensity-based montaging approaches from being accurately applied to longitudinal AO images. In the present work, we address this problem by presenting a constellation-based method for performing longitudinal alignment of AO images. Rather than matching intensity similarities between images, our approach finds structural patterns in the cone mosaics and leverages these to calculate the correct alignment. These structural patterns are robust to intensity variations, allowing us to make accurate longitudinal alignments. We validate our algorithm using 8 longitudinal AO datasets, each with two timepoints separated by 6-12 months. Our results show that the proposed method can produce longitudinal AO montages with cell-to-cell correspondences across the full extent of the montage. Quantitative assessment of the alignment accuracy shows that the algorithm is able to find longitudinal alignments whose accuracy is on par with manual alignments performed by a trained rater.
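The gist of a constellation approach can be shown with a simple geometric descriptor: characterise each cone by the sorted distances to its k nearest neighbours (insensitive to intensity and translation), match descriptors across sessions, and take a robust consensus of the implied offsets. This is a much-reduced sketch of the idea, not the published algorithm, and all parameters are illustrative.

```python
import numpy as np

def knn_descriptor(pts, k=4):
    """Per-point descriptor: sorted distances to the k nearest neighbours."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, 1:k + 1]                     # drop the zero self-distance

def constellation_offset(pts_a, pts_b, k=4):
    """Match points between sessions by descriptor similarity and return a
    robust (median) estimate of the inter-session translation."""
    da, db = knn_descriptor(pts_a, k), knn_descriptor(pts_b, k)
    cost = np.linalg.norm(da[:, None, :] - db[None, :, :], axis=-1)
    matches = np.argmin(cost, axis=1)        # best partner for each point
    offsets = pts_b[matches] - pts_a
    return np.median(offsets, axis=0)

rng = np.random.default_rng(5)
session1 = rng.uniform(0, 100, size=(40, 2))
perm = rng.permutation(40)                   # cell order differs at next visit
session2 = (session1 + np.array([7.0, -3.0]))[perm]
offset = constellation_offset(session1, session2)
```

Because the descriptor depends only on local geometry, the match survives the shuffled ordering and any per-session intensity changes; the median makes the offset estimate robust to occasional mismatches.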
Affiliation(s)
- Min Chen
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Robert F Cooper
- Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Currently at Joint Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, USA
- Currently at Department of Ophthalmology, Medical College of Wisconsin, Milwaukee, WI, USA
- James C Gee
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- David H Brainard
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Jessica I W Morgan
- Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, Philadelphia, PA 19104, USA

23
Cooper RF, Aguirre GK, Morgan JIW. Fully Automated Estimation of Spacing and Density for Retinal Mosaics. Transl Vis Sci Technol 2019; 8:26. [PMID: 31637106] [PMCID: PMC6798313] [DOI: 10.1167/tvst.8.5.26]
Abstract
Purpose To introduce and validate a novel, fully automated algorithm for determining pointwise intercell distance (ICD) and cone density. Methods We obtained images of the photoreceptor mosaic from 14 eyes of nine subjects without retinal pathology at two time points using an adaptive optics scanning laser ophthalmoscope. To automatically determine ICD, the radial average of the discrete Fourier transform (DFT) of the image was analyzed using a multiscale, fit-based algorithm to find the modal spacing. We then converted the modal spacing to ICD by assuming a hexagonally packed mosaic. The reproducibility of the algorithm was assessed between the two datasets, and accuracy was evaluated by comparing the results against those calculated from manually identified cones. Finally, the algorithm was extended to determine pointwise ICD and density in montages by calculating modal spacing over an overlapping grid of regions of interest (ROIs). Results The differences of DFT-derived ICD between the test and validation datasets were 3.2% ± 3.5% (mean ± SD), consistent with the differences in directly calculated ICD (1.9% ± 2.9%). The average ICD derived by the automated method was not significantly different between the development and validation datasets and was equivalent to the directly calculated ICD. When applied to a full montage, the automated algorithm produced estimates of cone density across retinal eccentricity that match prior empirical measurements well. Conclusions We created an accurate, repeatable, and fully automated algorithm for determining ICD and density in both individual ROIs and across entire montages. Translational Relevance The use of fully automated and validated algorithms will enable rapid analysis over the full photoreceptor montage.
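The pipeline in the abstract, radially averaging the DFT magnitude to find modal spacing, then converting spacing to density under a hexagonal-packing assumption, can be sketched as below. The peak finding here is a crude argmax over the radial spectrum rather than the authors' multiscale fit-based approach, and the parameter choices are illustrative only.

```python
import numpy as np

def radial_average_dft(image):
    """Radially average the DFT magnitude of a square image, returning
    (spatial frequency in cycles/pixel, mean magnitude per radius)."""
    n = image.shape[0]
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
    yy, xx = np.indices(spec.shape)
    r = np.hypot(yy - n // 2, xx - n // 2).astype(int)
    radial = np.bincount(r.ravel(), spec.ravel()) / np.bincount(r.ravel())
    return np.arange(radial.size) / n, radial

def estimate_icd(image, px_per_um):
    """Estimate intercell distance (um) as the wavelength of the modal
    spacing peak, skipping the low-frequency bins near DC."""
    freqs, radial = radial_average_dft(image)
    lo = 3  # ignore bins dominated by the image envelope
    peak = lo + np.argmax(radial[lo:len(radial) // 2])
    return 1.0 / (freqs[peak] * px_per_um)

def modal_spacing_to_density(spacing_um):
    """Convert ICD (um) to density (cells/mm^2) assuming hexagonal
    packing: each cell occupies a hexagon of area sqrt(3)/2 * s^2."""
    return 1e6 / ((np.sqrt(3) / 2.0) * spacing_um ** 2)
```

A synthetic grating with an 8-pixel period recovers an ICD of 8 µm at 1 px/µm, which is the sanity check a fit-based version would also have to pass.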
Affiliation(s)
- Robert F Cooper
- Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Geoffrey K Aguirre
- Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
- Jessica I W Morgan
- Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA
- Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, Philadelphia, PA, USA

24
Mapping flow velocity in the human retinal capillary network with pixel intensity cross correlation. PLoS One 2019; 14:e0218918. [PMID: 31237930] [PMCID: PMC6592569] [DOI: 10.1371/journal.pone.0218918]
Abstract
We present a new method for determining cellular velocity in the smallest retinal vascular networks as visualized with adaptive optics. The method operates by comparing the intensity profile of each movie pixel with that of every other pixel, after shifting in time by one frame. The time-shifted pixel which most resembles the reference pixel is deemed to be a 'source' or 'destination' of flow information for that pixel. Velocity in the transverse direction is then calculated by dividing the spatial displacement between the two pixels by the inter-frame period. We call this method pixel intensity cross-correlation, or "PIX". Here we compare measurements derived from PIX to two other state-of-the-art algorithms (particle image velocimetry and the spatiotemporal kymograph), as well as to manually tracked cell data. The examples chosen highlight the potential of the new algorithm to substantially improve spatial and temporal resolution, resilience to noise and aliasing, and assessment of network flow properties compared with existing methods.
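The PIX idea described above, comparing a pixel's intensity trace with the one-frame-shifted traces of its neighbors and converting the winning spatial offset into a transverse velocity, can be sketched for a single reference pixel. This toy version uses normalized correlation over a small search window; it is an illustration of the principle, not the published implementation.

```python
import numpy as np

def pix_velocity(movie, ref_rc, frame_period_s, um_per_px, search=5):
    """For one reference pixel, find the nearby pixel whose intensity
    trace, advanced by one frame, best matches the reference trace
    (normalized correlation) -- the 'destination' of the flowing cells.
    Transverse speed = spatial offset / inter-frame period."""
    t, h, w = movie.shape
    r0, c0 = ref_rc
    ref = movie[:-1, r0, c0]                 # frames 0 .. t-2
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best, best_off = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if not (0 <= r < h and 0 <= c < w) or (dr, dc) == (0, 0):
                continue
            cand = movie[1:, r, c]           # frames 1 .. t-1 (one-frame shift)
            cand = (cand - cand.mean()) / (cand.std() + 1e-12)
            score = float(np.mean(ref * cand))
            if score > best:
                best, best_off = score, (dr, dc)
    return np.hypot(*best_off) * um_per_px / frame_period_s  # um/s
```

With a synthetic movie in which the signal at one pixel reappears at its right-hand neighbor one frame later, the sketch recovers the expected one-pixel-per-frame speed.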
25
Azimipour M, Zawadzki RJ, Gorczynska I, Migacz J, Werner JS, Jonnal RS. Intraframe motion correction for raster-scanned adaptive optics images using strip-based cross-correlation lag biases. PLoS One 2018; 13:e0206052. [PMID: 30359401] [PMCID: PMC6201912] [DOI: 10.1371/journal.pone.0206052]
Abstract
In retinal raster imaging modalities, fixational eye movements manifest as image warp, where the relative positions of the beam and retina change during the acquisition of single frames. To remove warp artifacts, strip-based registration methods, in which fast-axis strips from target images are registered to a reference frame, have been applied in adaptive optics (AO) scanning light ophthalmoscopy (SLO) and optical coherence tomography (OCT). This approach has enabled object tracking and frame averaging, and methods have been described to automatically select reference frames with minimal motion. However, inconspicuous motion artifacts may persist in reference frames and propagate through the processes of registration, tracking, and averaging. Here we test a previously proposed method for removing movement artifacts in reference frames, using biases in stripwise cross-correlation statistics. We applied the method to synthetic retinal images with simulated eye motion artifacts as well as real AO-SLO images of the cone mosaic and volumetric AO-OCT images, both affected by eye motion. In the case of synthetic images, the method was validated by direct comparison with motion-free versions of the images. In the case of real AO images, performance was validated by comparing the correlation of uncorrected images with that of corrected images, by quantifying the effect of motion artifacts on the image power spectra, and by qualitative examination of AO-OCT B-scans and en face projections. In all cases, the proposed method reduced motion artifacts and produced more faithful images of the retina.
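The strip-based registration that the abstract builds on can be sketched as follows: each fast-axis strip of a target frame is cross-correlated against a reference frame via the FFT, and the per-strip lags form a raw motion trace. This is a generic illustration of stripwise registration, not the authors' lag-bias correction; the strip height and circular-lag handling are illustrative choices.

```python
import numpy as np

def strip_offsets(reference, target, strip_h=8):
    """Register each horizontal (fast-axis) strip of `target` to
    `reference` by FFT cross-correlation. Returns one (dy, dx) lag per
    strip: the shift to apply to that strip to re-align it with the
    reference. The sequence of lags is the raw intraframe motion trace."""
    h, w = reference.shape
    F_ref = np.fft.fft2(reference - reference.mean())
    offsets = []
    for y0 in range(0, h - strip_h + 1, strip_h):
        strip = np.zeros_like(reference)
        band = target[y0:y0 + strip_h]
        strip[y0:y0 + strip_h] = band - band.mean()
        # correlation theorem: ifft2(F_ref * conj(F_strip)) peaks at the lag
        xc = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(strip))).real
        dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
        # wrap circular lags into [-h/2, h/2) and [-w/2, w/2)
        offsets.append(((dy + h // 2) % h - h // 2,
                        (dx + w // 2) % w - w // 2))
    return offsets
```

For a target that is a circularly shifted copy of the reference, every strip reports the same corrective lag; real AOSLO frames instead show lags that vary strip to strip, which is exactly the eye-motion signal (and the source of the reference-frame biases the paper analyzes).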
Affiliation(s)
- Mehdi Azimipour
- Vision Science and Advanced Retinal Imaging Laboratory (VSRI), Department of Ophthalmology and Vision Science, UC Davis Eye Center, Sacramento, CA, United States of America
- Robert J. Zawadzki
- Vision Science and Advanced Retinal Imaging Laboratory (VSRI), Department of Ophthalmology and Vision Science, UC Davis Eye Center, Sacramento, CA, United States of America
- Iwona Gorczynska
- Department of Physics, Astronomy and Informatics, Nicolaus Copernicus University, Torun, Poland
- Justin Migacz
- Vision Science and Advanced Retinal Imaging Laboratory (VSRI), Department of Ophthalmology and Vision Science, UC Davis Eye Center, Sacramento, CA, United States of America
- John S. Werner
- Vision Science and Advanced Retinal Imaging Laboratory (VSRI), Department of Ophthalmology and Vision Science, UC Davis Eye Center, Sacramento, CA, United States of America
- Ravi S. Jonnal
- Vision Science and Advanced Retinal Imaging Laboratory (VSRI), Department of Ophthalmology and Vision Science, UC Davis Eye Center, Sacramento, CA, United States of America

26
Vienola KV, Damodaran M, Braaf B, Vermeer KA, de Boer JF. In vivo retinal imaging for fixational eye motion detection using a high-speed digital micromirror device (DMD)-based ophthalmoscope. Biomed Opt Express 2018; 9:591-602. [PMID: 29552396] [PMCID: PMC5854061] [DOI: 10.1364/boe.9.000591]
Abstract
Retinal motion detection with an accuracy of 0.77 arcmin, corresponding to 3.7 µm on the retina, is demonstrated with a novel digital micromirror device based ophthalmoscope. By generating a confocal image as a reference, eye motion could be measured from consecutively acquired subsampled frames. The subsampled frames provide 7.7-millisecond snapshots of the retina without motion artifacts between the image points of the subsampled frame, distributed over the full field of view. An ophthalmoscope pattern projection speed of 130 Hz enabled a motion detection bandwidth of 65 Hz. A model eye with a scanning mirror was built to test the performance of the motion detection algorithm. Furthermore, an in vivo motion trace was obtained from a healthy volunteer. The obtained eye motion trace clearly shows the three main types of fixational eye movements. Lastly, the eye motion trace was used to correct for eye motion in consecutively obtained subsampled frames, producing an averaged confocal image corrected for motion artifacts.
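The quoted accuracy (0.77 arcmin ≈ 3.7 µm) follows from the approximate retinal magnification of an emmetropic schematic eye, roughly 291 µm of retina per degree of visual angle; the exact scale varies with axial length, so the constant here is an assumption, not a value from the paper.

```python
# Approximate retinal scale for an emmetropic schematic eye (assumption;
# actual um/deg depends on the individual's axial length).
RETINAL_SCALE_UM_PER_DEG = 291.0

def arcmin_to_um(arcmin, um_per_deg=RETINAL_SCALE_UM_PER_DEG):
    """Convert a visual angle in arcminutes to retinal distance in um."""
    return arcmin / 60.0 * um_per_deg
```

Under this assumption, 0.77 arcmin comes out at about 3.7 µm, matching the figure quoted in the abstract.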
Affiliation(s)
- Kari V. Vienola
- LaserLaB, Department of Physics and Astronomy, Vrije Universiteit Amsterdam, De Boelelaan 1081, HV Amsterdam, The Netherlands
- Mathi Damodaran
- LaserLaB, Department of Physics and Astronomy, Vrije Universiteit Amsterdam, De Boelelaan 1081, HV Amsterdam, The Netherlands
- Boy Braaf
- LaserLaB, Department of Physics and Astronomy, Vrije Universiteit Amsterdam, De Boelelaan 1081, HV Amsterdam, The Netherlands
- Koenraad A. Vermeer
- Rotterdam Ophthalmic Institute, Schiedamse Vest 160D, 3011 BH Rotterdam, The Netherlands
- Johannes F. de Boer
- LaserLaB, Department of Physics and Astronomy, Vrije Universiteit Amsterdam, De Boelelaan 1081, HV Amsterdam, The Netherlands