1
Liu R, Wang X, Hoshi S, Zhang Y. Substrip-based registration and automatic montaging of adaptive optics retinal images. Biomed Opt Express 2024;15:1311-1330. [PMID: 38404341; PMCID: PMC10890855; DOI: 10.1364/boe.514447]
Abstract
Precise registration and montaging are critical for high-resolution adaptive optics retinal image analysis but are challenged by rapid eye movement. We present a substrip-based method to improve image registration and facilitate automatic montaging of adaptive optics scanning laser ophthalmoscopy (AOSLO) images. The program first batches consecutive images into groups based on a translation threshold and selects the image with minimal distortion within each group as the reference. Within each group, the software divides each image into multiple strips and estimates each strip's translation by computing the normalized cross-correlation with the reference frame using two substrips at both ends of the whole strip, producing a registered image. The software then aligns the registered images of all groups, also using substrip-based registration, thereby generating a montage with cell-for-cell precision in the overlapping areas of adjacent frames. The algorithm was evaluated with AOSLO images acquired in human subjects with normal macular health and in patients with age-related macular degeneration (AMD). Images with a motion amplitude of up to 448 pixels in the fast-scanner direction over a frame of 512 × 512 pixels could be precisely registered. Automatic montaging spanning up to 22.6 degrees on the retina was achieved with cell-to-cell precision and a low misplacement rate of 0.07% (11/16,501 frames) in normal eyes and 0.51% (149/29,051 frames) in eyes with AMD. Substrip-based registration significantly improved AOSLO registration accuracy.
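The core of the substrip idea, estimating a strip's translation from the normalized cross-correlation (NCC) of two short substrips taken at its ends, can be sketched as follows. This is an illustrative reconstruction from the abstract, not the authors' code; the function names, the brute-force NCC search, and the substrip width are our own choices.

```python
import numpy as np

def ncc_shift(template, reference):
    """Locate `template` inside `reference` by normalized cross-correlation.

    Brute-force NCC over all valid offsets; returns ((dy, dx), score).
    """
    th, tw = template.shape
    rh, rw = reference.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum()) + 1e-12
    best, best_pos = -np.inf, (0, 0)
    for y in range(rh - th + 1):
        for x in range(rw - tw + 1):
            r = reference[y:y + th, x:x + tw]
            r = r - r.mean()
            score = (t * r).sum() / (tnorm * (np.sqrt((r * r).sum()) + 1e-12))
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

def substrip_translation(strip, reference, sub_w=64):
    """Estimate a strip's translation from substrips at its two ends.

    The left substrip's nominal offset within the strip is 0; the right
    substrip's is (strip width - sub_w). Averaging the two end estimates
    gives the strip translation; their difference hints at intra-strip
    distortion.
    """
    left = strip[:, :sub_w]
    right = strip[:, -sub_w:]
    (ly, lx), _ = ncc_shift(left, reference)
    (ry, rx), _ = ncc_shift(right, reference)
    dy = (ly + ry) / 2.0
    dx = (lx + (rx - (strip.shape[1] - sub_w))) / 2.0
    return dy, dx
```

Using two short substrips instead of correlating the whole strip keeps the search cheap and, as the abstract argues, makes the estimate robust when the strip interior is distorted by motion.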
Affiliation(s)
- Ruixue Liu
- Doheny Eye Institute, Pasadena, CA 91103, USA
- Sujin Hoshi
- Doheny Eye Institute, Pasadena, CA 91103, USA
- Department of Ophthalmology, University of California - Los Angeles, Los Angeles, CA 90024, USA
- Department of Ophthalmology, University of Tsukuba, Ibaraki, Japan
- Yuhua Zhang
- Doheny Eye Institute, Pasadena, CA 91103, USA
- Department of Ophthalmology, University of California - Los Angeles, Los Angeles, CA 90024, USA
2
Williams DR, Burns SA, Miller DT, Roorda A. Evolution of adaptive optics retinal imaging [Invited]. Biomed Opt Express 2023;14:1307-1338. [PMID: 36950228; PMCID: PMC10026580; DOI: 10.1364/boe.485371]
Abstract
This review describes the progress that has been achieved since adaptive optics (AO) was incorporated into the ophthalmoscope a quarter of a century ago, transforming our ability to image the retina at a cellular spatial scale inside the living eye. The review starts with a comprehensive tabulation of AO papers in the field and then describes the technological advances that have occurred, notably through combining AO with other imaging modalities including confocal, fluorescence, phase contrast, and optical coherence tomography. These advances have made possible many scientific discoveries from the first maps of the topography of the trichromatic cone mosaic to exquisitely sensitive measures of optical and structural changes in photoreceptors in response to light. The future evolution of this technology is poised to offer an increasing array of tools to measure and monitor in vivo retinal structure and function with improved resolution and control.
Affiliation(s)
- David R. Williams
- The Institute of Optics and the Center for Visual Science, University of Rochester, Rochester, NY, USA
- Stephen A. Burns
- School of Optometry, Indiana University at Bloomington, Bloomington, IN, USA
- Donald T. Miller
- School of Optometry, Indiana University at Bloomington, Bloomington, IN, USA
- Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California at Berkeley, Berkeley, CA, USA
3
Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, Drieghe D, Dunn MJ, Ettinger U, Fiedler S, Foulsham T, van der Geest JN, Hansen DW, Hutton SB, Kasneci E, Kingstone A, Knox PC, Kok EM, Lee H, Lee JY, Leppänen JM, Macknik S, Majaranta P, Martinez-Conde S, Nuthmann A, Nyström M, Orquin JL, Otero-Millan J, Park SY, Popelka S, Proudlock F, Renkewitz F, Roorda A, Schulte-Mecklenbeck M, Sharif B, Shic F, Shovman M, Thomas MG, Venrooij W, Zemblys R, Hessels RS. Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2023;55:364-416. [PMID: 35384605; PMCID: PMC9535040; DOI: 10.3758/s13428-021-01762-8]
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
Affiliation(s)
- Kenneth Holmqvist
- Department of Psychology, Nicolaus Copernicus University, Torun, Poland
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Department of Psychology, Regensburg University, Regensburg, Germany
- Saga Lee Örbom
- Department of Psychology, Regensburg University, Regensburg, Germany
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Robert G Alexander
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
- Pieter Blignaut
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Lewis L Chuang
- Department of Ergonomics, Leibniz Institute for Working Environments and Human Factors, Dortmund, Germany
- Institute of Informatics, LMU Munich, Munich, Germany
- Denis Drieghe
- School of Psychology, University of Southampton, Southampton, UK
- Matt J Dunn
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Susann Fiedler
- Vienna University of Economics and Business, Vienna, Austria
- Tom Foulsham
- Department of Psychology, University of Essex, Essex, UK
- Dan Witzner Hansen
- Machine Learning Group, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Enkelejda Kasneci
- Human-Computer Interaction, University of Tübingen, Tübingen, Germany
- Paul C Knox
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Ellen M Kok
- Department of Education and Pedagogy, Division Education, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands
- Department of Online Learning and Instruction, Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, The Netherlands
- Helena Lee
- University of Southampton, Southampton, UK
- Joy Yeonjoo Lee
- School of Health Professions Education, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jukka M Leppänen
- Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- Stephen Macknik
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Päivi Majaranta
- TAUCHI Research Center, Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Susana Martinez-Conde
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Antje Nuthmann
- Institute of Psychology, University of Kiel, Kiel, Germany
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Jacob L Orquin
- Department of Management, Aarhus University, Aarhus, Denmark
- Center for Research in Marketing and Consumer Psychology, Reykjavik University, Reykjavik, Iceland
- Jorge Otero-Millan
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, Vienna, Austria
- Stanislav Popelka
- Department of Geoinformatics, Palacký University Olomouc, Olomouc, Czech Republic
- Frank Proudlock
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Frank Renkewitz
- Department of Psychology, University of Erfurt, Erfurt, Germany
- Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Bonita Sharif
- School of Computing, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
- Frederick Shic
- Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA
- Department of General Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Mark Shovman
- Eyeviation Systems, Herzliya, Israel
- Department of Industrial Design, Bezalel Academy of Arts and Design, Jerusalem, Israel
- Mervyn G Thomas
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Ward Venrooij
- Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Enschede, The Netherlands
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
4
Alexiev K, Vakarelski T. Can microsaccades be used for biometrics? Sensors (Basel) 2022;23:89. [PMID: 36616687; PMCID: PMC9824634; DOI: 10.3390/s23010089]
Abstract
Human eyes are in constant motion. Even when we fix our gaze on a certain point, our eyes continue to move. During fixation, scientists distinguish three kinds of fixational eye movements (FEM): microsaccades, drift and tremor. The main goal of this paper is to investigate one of these FEMs, microsaccades, as a source of information for biometric analysis. The paper argues why microsaccades are preferred for biometric analysis over the other two fixational eye movements. The process of microsaccade extraction is described. Thirteen parameters are defined for microsaccade analysis, and their derivation is given. A gradient algorithm was used to solve the biometric problem. An assessment was made of the weights of the different pairs of parameters in solving the biometric task.
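Microsaccade extraction from a gaze trace is commonly done with a velocity-threshold detector in the spirit of Engbert and Kliegl; the abstract does not specify the authors' exact procedure, so the sketch below is a generic illustration. The smoothing kernel, threshold multiplier `lam`, and minimum duration are illustrative parameters.

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_samples=3):
    """Velocity-threshold microsaccade detection (Engbert & Kliegl style).

    x, y : gaze position traces in degrees; fs : sampling rate in Hz.
    Returns a list of (onset_index, offset_index) pairs.
    """
    # Smoothed 5-point velocity estimate: v[n] = (x[n+2]+x[n+1]-x[n-1]-x[n-2])*fs/6.
    vx = np.convolve(x, [1, 1, 0, -1, -1], mode="same") * fs / 6.0
    vy = np.convolve(y, [1, 1, 0, -1, -1], mode="same") * fs / 6.0
    # Median-based velocity spread; elliptic threshold at lam * spread.
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2) + 1e-12
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2) + 1e-12
    hot = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
    # Group consecutive supra-threshold samples into candidate events.
    events, start = [], None
    for i, h in enumerate(hot):
        if h and start is None:
            start = i
        elif not h and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    if start is not None and len(hot) - start >= min_samples:
        events.append((start, len(hot) - 1))
    return events
```

From each detected event, descriptive parameters such as amplitude, peak velocity, and duration can then be computed for biometric feature vectors.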
5
Mozaffari S, Feroldi F, LaRocca F, Tiruveedhula P, Gregory PD, Park BH, Roorda A. Retinal imaging using adaptive optics optical coherence tomography with fast and accurate real-time tracking. Biomed Opt Express 2022;13:5909-5925. [PMID: 36733754; PMCID: PMC9872892; DOI: 10.1364/boe.467634]
Abstract
One of the main obstacles in high-resolution 3-D retinal imaging is eye motion, which causes blur and distortion artifacts that require extensive post-processing to correct. Here, an adaptive optics optical coherence tomography (AOOCT) system with real-time active eye motion correction is presented. Correction of ocular aberrations and of retinal motion is provided by an adaptive optics scanning laser ophthalmoscope (AOSLO) that is optically and electronically combined with the AOOCT system. We describe the system design and quantify its performance. The AOOCT system features an independent focus adjustment that allows focusing on different retinal layers while maintaining the AOSLO focus on the photoreceptor mosaic for high-fidelity active motion correction. The use of a high-quality reference frame for eye tracking increases revisitation accuracy between successive imaging sessions, allowing several volumes to be collected from the same area. This system enables spatially targeted retinal imaging as well as volume averaging over multiple imaging sessions with minimal correction of motion in post-processing.
Affiliation(s)
- Sanam Mozaffari
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA
- Fabio Feroldi
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA
- Francesco LaRocca
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA
- Pavan Tiruveedhula
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA
- Patrick D. Gregory
- Department of Bioengineering, University of California, Riverside, CA 92521, USA
- B. Hyle Park
- Department of Bioengineering, University of California, Riverside, CA 92521, USA
- Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA
6
Makita S, Azuma S, Mino T, Yamaguchi T, Miura M, Yasuno Y. Extending field-of-view of retinal imaging by optical coherence tomography using convolutional Lissajous and slow scan patterns. Biomed Opt Express 2022;13:5212-5230. [PMID: 36425618; PMCID: PMC9664899; DOI: 10.1364/boe.467563]
Abstract
Optical coherence tomography (OCT) is a high-speed non-invasive cross-sectional imaging technique. Although its imaging speed is high, three-dimensional high-spatial-sampling-density imaging of in vivo tissues with a wide field-of-view (FOV) is challenging. We employed convolved Lissajous and slow circular scanning patterns to extend the FOV of retinal OCT imaging with a 1-µm, 100-kHz-sweep-rate swept-source OCT prototype system. Displacements of sampling points due to eye movements are corrected by post-processing based on a Lissajous scan. Wide FOV three-dimensional retinal imaging with high sampling density and motion correction is achieved. Three-dimensional structures obtained using repeated imaging sessions of a healthy volunteer and two patients showed good agreement. The demonstrated technique will extend the FOV of simple point-scanning OCT, such as commercial ophthalmic OCT devices, without sacrificing sampling density.
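The scan geometry itself is easy to sketch: a fast Lissajous pattern whose two frequencies are close but incommensurate, riding on a slow circular offset that widens the covered field. All parameter values below are illustrative, not those of the prototype system.

```python
import numpy as np

def convolved_lissajous(n_samples, fs, fa=751.0, fb=743.0, a=1.0,
                        f_slow=1.0, r_slow=0.5):
    """Fast Lissajous scan combined with a slow circular offset.

    fs : sample rate in Hz; fa, fb : fast scan frequencies (Hz);
    f_slow, r_slow : frequency and radius of the slow circular component.
    Returns (x, y) scan coordinates over time (arbitrary angular units).
    """
    t = np.arange(n_samples) / fs
    # Fast Lissajous component: nearby frequencies densely fill a square.
    x_fast = a * np.sin(2 * np.pi * fa * t)
    y_fast = a * np.sin(2 * np.pi * fb * t)
    # Slow circular component sweeps that square around a larger field.
    x_slow = r_slow * np.cos(2 * np.pi * f_slow * t)
    y_slow = r_slow * np.sin(2 * np.pi * f_slow * t)
    return x_fast + x_slow, y_fast + y_slow
```

Because every region is revisited many times by the crossing trajectory, sampling-point displacements caused by eye movements can be estimated and corrected in post-processing, which is the property the paper exploits.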
Affiliation(s)
- Shuichi Makita
- Computational Optics Group, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan
- Shinnosuke Azuma
- Topcon Corporation, 75-1 Hasunumacho, Itabashi, Tokyo 174-8580, Japan
- Toshihiro Mino
- Topcon Corporation, 75-1 Hasunumacho, Itabashi, Tokyo 174-8580, Japan
- Tatsuo Yamaguchi
- Topcon Corporation, 75-1 Hasunumacho, Itabashi, Tokyo 174-8580, Japan
- Masahiro Miura
- Department of Ophthalmology, Tokyo Medical University Ibaraki Medical Center, 3-20-1 Chuo, Ami, Ibaraki 300-0395, Japan
- Yoshiaki Yasuno
- Computational Optics Group, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan
7
Hofmann J, Domdei L, Jainta S, Harmening WM. Assessment of binocular fixational eye movements including cyclotorsion with split-field binocular scanning laser ophthalmoscopy. J Vis 2022;22:5. [PMID: 36069941; PMCID: PMC9465939; DOI: 10.1167/jov.22.10.5]
Abstract
Fixational eye movements are a hallmark of human gaze behavior, yet little is known about how they interact between fellow eyes. Here, we designed, built and validated a split-field binocular scanning laser ophthalmoscope to record high-resolution eye motion traces from both eyes of six observers during fixation in different binocular vergence conditions. In addition to microsaccades and drift, torsional eye motion could be extracted, with a spatial measurement error of less than 1 arcmin. Microsaccades were strongly coupled between fellow eyes under all conditions. No monocular microsaccade occurred and no significant delay between microsaccade onsets across fellow eyes could be detected. Cyclotorsion was also firmly coupled between both eyes, occurring typically in conjugacy, with gradual changes during drift and abrupt changes during saccades.
Affiliation(s)
- Julia Hofmann
- Rheinische Friedrich-Wilhelms-Universität Bonn, University Eye Hospital, Bonn, Germany
- Fraunhofer Institute for Optronics, Systems Technologies and Image Exploitations IOSB, Karlsruhe, Germany (https://www.iosb.fraunhofer.de/en.html)
- Lennart Domdei
- Rheinische Friedrich-Wilhelms-Universität Bonn, University Eye Hospital, Bonn, Germany (https://ao.ukbonn.de/)
- Stephanie Jainta
- SRH University of Applied Sciences in North Rhine-Westphalia, Hamm, Germany (https://www.srh-hochschule-nrw.de/)
- Wolf M Harmening
- Rheinische Friedrich-Wilhelms-Universität Bonn, University Eye Hospital, Bonn, Germany (https://ao.ukbonn.de/)
8
Hu X, Yang Q. Real-time correction of image rotation with adaptive optics scanning light ophthalmoscopy. J Opt Soc Am A Opt Image Sci Vis 2022;39:1663-1672. [PMID: 36215635; DOI: 10.1364/josaa.465889]
Abstract
Fixational eye motion includes typical translation and torsion. In the registration of images from adaptive optics scanning light ophthalmoscopy (AOSLO), image rotation due to eye torsion and/or head rotation is often ignored because (a) the amount of rotation is trivial compared to translation within a short imaging or recording duration and (b) computational cost increases substantially when the registration algorithm must detect rotation and translation simultaneously. However, rotation becomes critically important in cases such as long exposures, functional measurements, and precise motion tracking. We developed a fast method to detect and correct rotation in AOSLO images, together with strip-level detection of translational motion. The computational cost of rotation detection and correction alone is about 5 ms/frame (512 × 512 pixels) on an NVIDIA GTX 960M GPU. Image quality was compared with and without rotation correction in 10 healthy human subjects and 8 diseased eyes with a total of 180 videos. The results show that residual image motions between the reference images and the registered images with rotation correction are a fraction of those without rotation correction; the ratio is 0.74-0.89 at the image center and 0.37-0.51 at the four corners of the images.
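A naive baseline for rotation detection, far slower than the paper's GPU strip-level method but useful for illustrating the principle, is an exhaustive search over small angles for the rotation that maximizes correlation with the reference. The angle range, step size, and interpolation settings below are our own choices.

```python
import numpy as np
from scipy.ndimage import rotate

def estimate_rotation(image, reference, max_deg=2.0, step=0.1):
    """Find the rotation angle (degrees) that best aligns `image` to
    `reference`, by brute-force search over a small angular range.

    Torsion during AOSLO imaging is typically a degree or less, so a
    narrow search window suffices.
    """
    best_ang, best_score = 0.0, -np.inf
    ref = reference - reference.mean()
    for ang in np.arange(-max_deg, max_deg + step / 2, step):
        rot = rotate(image, ang, reshape=False, order=1, mode="nearest")
        rot = rot - rot.mean()
        denom = np.linalg.norm(ref) * np.linalg.norm(rot) + 1e-12
        score = float((ref * rot).sum() / denom)
        if score > best_score:
            best_ang, best_score = ang, score
    return best_ang

def correct_rotation(image, reference, **kw):
    """De-rotate `image` so that it aligns with `reference`."""
    ang = estimate_rotation(image, reference, **kw)
    return rotate(image, -ang, reshape=False, order=1, mode="nearest"), ang
```

In practice, methods like the one in the paper avoid this exhaustive search (e.g. by exploiting the geometry of strip-wise translations), which is what makes real-time operation feasible.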
9
Microsaccades, Drifts, Hopf Bundle and Neurogeometry. J Imaging 2022;8(3):76. [PMID: 35324631; PMCID: PMC8953095; DOI: 10.3390/jimaging8030076]
Abstract
The first part of the paper contains a short review of image processing in early vision, both in statics, when the eyes and the stimulus are stable, and in dynamics, when the eyes participate in fixational eye movements. In the second part, we give an interpretation of Donders' and Listing's laws in terms of the Hopf fibration of the 3-sphere over the 2-sphere. In particular, it is shown that the configuration space of the eyeball (when the head is fixed) is the 2-dimensional hemisphere SL+, called the Listing hemisphere, and saccades are described as geodesic segments of SL+ with respect to the standard round metric. We study fixational eye movements (drift and microsaccades) in terms of this model and discuss their role in vision. A model of fixational eye movements is proposed that explains the presaccadic shift of receptive fields.
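A standard coordinate form of the Hopf fibration invoked here may help (our notation, not necessarily the paper's): representing eye orientations by unit quaternions in the 3-sphere, the Hopf map sends each quaternion to the rotated reference gaze direction, collapsing the circle of torsional states above each gaze direction.

```latex
% Unit quaternions q = q_0 + q_1 i + q_2 j + q_3 k, |q| = 1, form S^3.
% The Hopf map sends q to the image of the reference axis k under the
% rotation R_q (third column of the quaternion rotation matrix):
\pi : S^3 \to S^2, \qquad
\pi(q_0, q_1, q_2, q_3) =
\bigl(\, 2(q_1 q_3 + q_0 q_2),\; 2(q_2 q_3 - q_0 q_1),\;
          q_0^2 - q_1^2 - q_2^2 + q_3^2 \,\bigr).
```

Listing's law then singles out the torsion-free section $q_3 = 0$ (rotation axes confined to Listing's plane), and the admissible orientations form a hemisphere of this section, matching the paper's Listing hemisphere SL+.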
10
Lu Y, Wang RK. Removing dynamic distortions from laser speckle flowgraphy using Eigen-decomposition and spatial filtering. J Biophotonics 2022;15:e202100294. [PMID: 34787958; DOI: 10.1002/jbio.202100294]
Abstract
Laser speckle flowgraphy (LSFG) has been widely used to investigate blood flow in ophthalmology. However, dynamic changes of the ocular optics can impose artifactual contrast on the LSFG signal, corrupting the detection of both retinal vasculature and blood pulsation at the posterior segment of the human eye. In this study, we propose an Eigen-decomposition method to separate the spatially and temporally varying speckle patterns from the static tissue. Spatial filtering is further applied to remove the distortion-correlated modulation of the speckle patterns. We experimentally show that with the proposed method, the integrity of blood vessels is significantly improved and the distortions in pulse waveforms can be well corrected.
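The eigen-decomposition step can be sketched with an SVD of the Casorati matrix of the frame stack: the largest singular components approximate the quasi-static tissue signal, and the remainder carries the temporally varying speckle. This is a generic illustration of the decomposition, not the authors' full pipeline (which adds spatial filtering of the distortion-correlated modulation).

```python
import numpy as np

def split_static_dynamic(frames, n_static=1):
    """SVD-based split of a frame stack into static and dynamic parts.

    frames : array of shape (n_frames, H, W). The first `n_static`
    singular components are taken as the quasi-static tissue signal;
    the residual holds the temporally varying speckle.
    """
    n, h, w = frames.shape
    casorati = frames.reshape(n, h * w)   # one row per frame
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    s_static = s.copy()
    s_static[n_static:] = 0.0             # keep only the dominant component(s)
    static = (u * s_static) @ vt
    dynamic = casorati - static
    return static.reshape(n, h, w), dynamic.reshape(n, h, w)
```

The same Casorati-matrix decomposition is widely used as a clutter filter in other coherent imaging modalities, which motivates its use here.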
Affiliation(s)
- Yiming Lu
- Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Ruikang K Wang
- Department of Bioengineering, University of Washington, Seattle, Washington, USA
11
Bowers NR, Gautier J, Lin S, Roorda A. Fixational eye movements in passive versus active sustained fixation tasks. J Vis 2021;21:16. [PMID: 34677574; PMCID: PMC8556553; DOI: 10.1167/jov.21.11.16]
Abstract
Human fixational eye movements are so small and precise that high-speed, accurate tools are needed to fully reveal their properties and functional roles. Where the fixated image lands on the retina and how it moves for different levels of visually demanding tasks is the subject of the current study. An Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) was used to image, track and present a variety of fixation targets (Maltese cross, disk, concentric circles, Vernier and tumbling-E letter) to healthy subjects. During these different passive (static) or active (discriminating) tasks under natural eye motion, the landing position of the target on the retina was tracked in space and time over the retinal image directly with high spatial (<1 arcmin) and temporal (960 Hz) resolution. We computed both the eye motion and the exact trajectory of the fixated target's motion over the retina. We confirmed that compared to passive tasks, active tasks elicited a partial inhibition of microsaccades, leading to longer drift periods compensated by larger corrective saccades. Consequently, the overall fixation stability during active tasks was on average 57% larger than during passive tasks. The preferred retinal locus of fixation was the same for each task and did not coincide with the location of the peak cone density.
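One standard way to quantify the fixation stability compared above is the bivariate contour ellipse area (BCEA) of the landing positions; the abstract does not state which metric the authors used, so this is a generic illustration of the usual formula BCEA = 2πk·σx·σy·√(1−ρ²) with k = −ln(1−P) for containment probability P.

```python
import numpy as np

def bcea(x, y, p=0.68):
    """Bivariate contour ellipse area of a fixation trace.

    x, y : gaze (or retinal landing) positions, e.g. in degrees.
    p : probability that samples of a bivariate normal fall inside
    the ellipse. Larger BCEA means less stable fixation.
    """
    k = -np.log(1.0 - p)                     # e.g. p=0.68 -> k ~ 1.14
    sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
    rho = np.corrcoef(x, y)[0, 1]            # x-y correlation
    return 2.0 * np.pi * k * sx * sy * np.sqrt(1.0 - rho**2)
```

A 57% larger fixation instability, as reported for the active tasks, would correspond to a proportionally larger BCEA under this kind of metric.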
Affiliation(s)
- Norick R Bowers
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Josselin Gautier
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Samantha Lin
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
12
Oculo-retinal dynamics can explain the perception of minimal recognizable configurations. Proc Natl Acad Sci U S A 2021;118:e2022792118. [PMID: 34417308; DOI: 10.1073/pnas.2022792118]
Abstract
Natural vision is a dynamic and continuous process. Under natural conditions, visual object recognition typically involves continuous interactions between ocular motion and visual contrasts, resulting in dynamic retinal activations. In order to identify the dynamic variables that participate in this process and are relevant for image recognition, we used a set of images that are just above and below the human recognition threshold and whose recognition typically requires >2 s of viewing. We recorded eye movements of participants while attempting to recognize these images within trials lasting 3 s. We then assessed the activation dynamics of retinal ganglion cells resulting from ocular dynamics using a computational model. We found that while the saccadic rate was similar between recognized and unrecognized trials, the fixational ocular speed was significantly larger for unrecognized trials. Interestingly, however, retinal activation level was significantly lower during these unrecognized trials. We used retinal activation patterns and oculomotor parameters of each fixation to train a binary classifier, classifying recognized from unrecognized trials. Only retinal activation patterns could predict recognition, reaching 80% correct classifications on the fourth fixation (on average, ∼2.5 s from trial onset). We thus conclude that the information that is relevant for visual perception is embedded in the dynamic interactions between the oculomotor sequence and the image. Hence, our results suggest that ocular dynamics play an important role in recognition and that understanding the dynamics of retinal activation is crucial for understanding natural vision.
13
Abstract
The high power of the eye and optical components used to image it result in "static" distortion, remaining constant across acquired retinal images. In addition, raster-based systems sample points or lines of the image over time, suffering from "dynamic" distortion due to the constant motion of the eye. We recently described an algorithm which corrects for the latter problem but is entirely blind to the former. Here, we describe a new procedure termed "DIOS" (Dewarp Image by Oblique Shift) to remove static distortion of arbitrary type. Much like the dynamic correction method, it relies on locating the same tissue in multiple frames acquired as the eye moves through different gaze positions. Here, the resultant maps of pixel displacement are used to form a sparse system of simultaneous linear equations whose solution gives the common warp seen by all frames. We show that the method successfully handles torsional movement of the eye. We also show that the output of the previously described dynamic correction procedure may be used as input for this new procedure, recovering an image of the tissue that is, in principle, a faithful replica free of any type of distortion. The method could be extended beyond ocular imaging, to any kind of imaging system in which the image can move or be made to move across the detector.
Affiliation(s)
- Phillip Bedggood
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, Australia
- Andrew Metha
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, Australia
14
Young LK, Smithson HE. Emulated retinal image capture (ERICA) to test, train and validate processing of retinal images. Sci Rep 2021;11:11225. [PMID: 34045507; PMCID: PMC8160341; DOI: 10.1038/s41598-021-90389-y]
Abstract
High resolution retinal imaging systems, such as adaptive optics scanning laser ophthalmoscopes (AOSLO), are increasingly being used for clinical research and fundamental studies in neuroscience. These systems offer unprecedented spatial and temporal resolution of retinal structures in vivo. However, a major challenge is the development of robust and automated methods for processing and analysing these images. We present ERICA (Emulated Retinal Image CApture), a simulation tool that generates realistic synthetic images of the human cone mosaic, mimicking images that would be captured by an AOSLO, with specified image quality and with corresponding ground-truth data. The simulation includes a self-organising mosaic of photoreceptors, the eye movements an observer might make during image capture, and data capture through a real system incorporating diffraction, residual optical aberrations and noise. The retinal photoreceptor mosaics generated by ERICA have a similar packing geometry to human retina, as determined by expert labelling of AOSLO images of real eyes. In the current implementation ERICA outputs convincingly realistic en face images of the cone photoreceptor mosaic but extensions to other imaging modalities and structures are also discussed. These images and associated ground-truth data can be used to develop, test and validate image processing and analysis algorithms or to train and validate machine learning approaches. The use of synthetic images has the advantage that neither access to an imaging system, nor to human participants is necessary for development.
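A toy version of the synthetic-mosaic idea, far simpler than ERICA's self-organising mosaic and optical model, renders a jittered hexagonal lattice of Gaussian cone profiles plus sensor noise, returning the cone centres as ground truth. All parameters below are illustrative.

```python
import numpy as np

def synthetic_cone_mosaic(size=128, spacing=8.0, jitter=0.8,
                          sigma=1.8, noise=0.05, seed=0):
    """Render a toy cone-mosaic image.

    A hexagonal lattice of cone centres (row pitch spacing*sqrt(3)/2,
    alternate rows offset by half a spacing) is jittered, rendered as
    Gaussian reflectance profiles, and corrupted with Gaussian noise.
    Returns (image, centres) so the centres can serve as ground truth
    for testing detection algorithms.
    """
    rng = np.random.default_rng(seed)
    centres = []
    row_h = spacing * np.sqrt(3) / 2
    y, row = 0.0, 0
    while y < size:
        x = (spacing / 2) if row % 2 else 0.0
        while x < size:
            centres.append((x + rng.normal(0, jitter),
                            y + rng.normal(0, jitter)))
            x += spacing
        y += row_h
        row += 1
    yy, xx = np.mgrid[0:size, 0:size].astype(float)
    img = np.zeros((size, size))
    for cx, cy in centres:
        img += np.exp(-((xx - cx)**2 + (yy - cy)**2) / (2 * sigma**2))
    img += rng.normal(0, noise, img.shape)
    return img, np.array(centres)
```

Pairing such images with their known centres is exactly the kind of ground truth the paper proposes for training and validating cone-detection algorithms, though ERICA additionally emulates diffraction, residual aberrations, and eye motion during raster capture.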
Affiliation(s)
- Laura K Young
- Biosciences Institute, Newcastle University, Newcastle, NE2 4HH, UK.
- Hannah E Smithson
- Department of Experimental Psychology, University of Oxford, Oxford, OX2 6GG, UK
|
15
|
Sterenczak KA, Winter K, Sperlich K, Stahnke T, Linke S, Farrokhi S, Klemm M, Allgeier S, Köhler B, Reichert KM, Guthoff RF, Bohn S, Stachs O. Morphological characterization of the human corneal epithelium by in vivo confocal laser scanning microscopy. Quant Imaging Med Surg 2021; 11:1737-1750. [PMID: 33936961 DOI: 10.21037/qims-20-1052] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
Background Given the growing interest in understanding the cellular changes of the cornea in disease, quantitative cellular characterization of the epithelium is becoming increasingly important. Toward this goal, recent research offers considerable improvements in imaging of the cornea by confocal laser scanning microscopy (CLSM). This study presents a pipeline to generate normative morphological data for the epithelial cell layers of healthy human corneas. Methods 3D in vivo CLSM was performed on the eyes of volunteers (n=25) with a Heidelberg Retina Tomograph II equipped with an in-house modified version of the Rostock Cornea Module implementing two dedicated piezo actuators and a concave contact cap. Image data were acquired with nearly isotropic voxel resolution. After image registration, stacks of en-face sections were used to generate full-thickness volume data sets of the epithelium. In addition, an image analysis algorithm quantified en-face sections of epithelial cells in terms of the depth-dependent means of cell density, area, diameter, aggregation (Clark and Evans index of aggregation), neighbor count and polygonality. Results Imaging and cell segmentation were successfully performed in all subjects. Intermediate cells were efficiently recognized by the segmentation algorithm, whereas recognition efficiency was reduced for superficial and basal cells. Morphological parameters showed an increase in mean cell density and decreases in mean cell area and mean diameter from anterior to posterior (5,197.02 to 8,190.39 cells/mm²; 160.51 to 90.29 µm²; 15.9 to 12.3 µm, respectively). Aggregation gradually increased from anterior to posterior, ranging from 1.45 to 1.53. Average neighbor count increased from 5.50 to a maximum of 5.66, followed by a gradual decrease to 5.45 within the normalized depth from anterior to posterior. Polygonality gradually decreased, ranging from 4.93 to 4.64 cell sides. The neighbor count and polygonality parameters exhibited profound depth-dependent changes. Conclusions This in vivo study demonstrates the successful implementation of a CLSM-based imaging pipeline for cellular characterization of the human corneal epithelium. The dedicated hardware, combined with an adapted image registration method that corrects the remaining motion-induced image distortions and a dedicated algorithm that calculates characteristic quantities of the different epithelial cell layers, enabled the generation of normative data. Further significant effort is necessary to improve the algorithm for superficial and basal cell segmentation.
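For reference, the Clark and Evans index of aggregation used above is the ratio of the observed mean nearest-neighbour distance to the value 1/(2√ρ) expected for a random (Poisson) pattern of the same point density ρ: R ≈ 1 indicates spatial randomness and R rises toward ≈ 2.15 for a perfect hexagonal packing, which brackets the 1.45-1.53 reported here. A minimal sketch of the computation (illustrative, not the study's pipeline; boundary corrections are omitted):

```python
import numpy as np

def clark_evans_index(points, area):
    """Clark-Evans aggregation index R for 2-D points in a region of
    known area: observed mean nearest-neighbour distance divided by
    the expectation 0.5/sqrt(density) under complete spatial randomness."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # full pairwise-distance matrix; mask the zero self-distances
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    mean_nn = d.min(axis=1).mean()          # observed mean nearest-neighbour distance
    expected = 0.5 / np.sqrt(n / area)      # expectation for a Poisson pattern
    return mean_nn / expected
```

For a unit-spaced square lattice, for example, every nearest neighbour sits at distance 1 while the Poisson expectation at density 1 is 0.5, so R = 2, consistent with a regular (more-than-random) arrangement.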
Affiliation(s)
- Karsten Winter
- Institute of Anatomy, Medical Faculty, University of Leipzig, Leipzig, Germany
- Karsten Sperlich
- Department of Ophthalmology, Rostock University Medical Center, Rostock, Germany; Department Life, Light & Matter, University of Rostock, Rostock, Germany
- Thomas Stahnke
- Department of Ophthalmology, Rostock University Medical Center, Rostock, Germany; Department Life, Light & Matter, University of Rostock, Rostock, Germany
- Stephan Linke
- Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Zentrumsehstärke, Hamburg, Germany
- Sanaz Farrokhi
- Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Maren Klemm
- Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Stephan Allgeier
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Bernd Köhler
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Klaus-Martin Reichert
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Rudolf F Guthoff
- Department of Ophthalmology, Rostock University Medical Center, Rostock, Germany; Department Life, Light & Matter, University of Rostock, Rostock, Germany
- Sebastian Bohn
- Department of Ophthalmology, Rostock University Medical Center, Rostock, Germany; Department Life, Light & Matter, University of Rostock, Rostock, Germany
- Oliver Stachs
- Department of Ophthalmology, Rostock University Medical Center, Rostock, Germany; Department Life, Light & Matter, University of Rostock, Rostock, Germany
|
16
|
Abstract
Eye trackers are sometimes used to study miniature eye movements, such as drift, that occur while observers fixate a static location on a screen. Specifically, such eye-tracking data can be analyzed by examining the temporal spectral composition of the recorded gaze position signal, allowing its color to be assessed. However, not only rotations of the eyeball but also filters in the eye tracker may affect the signal's spectral color. Here, we therefore ask whether colored, as opposed to white, signal dynamics in eye-tracking recordings reflect fixational eye movements, or whether they are instead largely due to filters. We recorded gaze position data with five eye trackers from four pairs of human eyes performing fixation sequences, and also from artificial eyes. We examined the spectral color of the gaze position signals produced by the eye trackers, both with their filters switched on and for unfiltered data. We found that while filtered data recorded from both human and artificial eyes were colored for all eye trackers, for most eye trackers the signal was white in both unfiltered human and unfiltered artificial-eye data. These results suggest that the color in the eye-movement recordings was due to filters for all eye trackers except the most precise one, where it may partly reflect fixational eye movements. Researchers studying fixational eye movements should therefore examine the properties of the filters in their eye tracker carefully, to ensure they are studying eyeball rotation and not filter properties.
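The spectral "color" in question is conventionally summarized by the slope of the power spectrum in log-log coordinates: roughly 0 for a white signal and increasingly negative for colored (temporally correlated) dynamics. A minimal sketch of such an estimate for a one-dimensional gaze-position trace (function name, sampling rate and test signals are illustrative, not taken from the paper):

```python
import numpy as np

def spectral_slope(signal, fs=1000.0):
    """Slope of the raw periodogram in log-log coordinates: near 0 for
    a white signal, increasingly negative for colored dynamics."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2              # raw periodogram
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    keep = freqs > 0                               # drop the DC bin before taking logs
    slope, _ = np.polyfit(np.log10(freqs[keep]), np.log10(psd[keep]), 1)
    return slope

rng = np.random.default_rng(1)
white = rng.normal(size=4096)     # uncorrelated samples: slope near 0
walk = np.cumsum(white)           # random walk (drift-like): strongly colored
```

A tracker filter that smooths the gaze signal would likewise tilt this slope negative, which is exactly the confound the study tests for with artificial eyes.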
|
17
|
Zhang M, Gofas-Salas E, Leonard BT, Rui Y, Snyder VC, Reecher HM, Mecê P, Rossi EA. Strip-based digital image registration for distortion minimization and robust eye motion measurement from scanned ophthalmic imaging systems. BIOMEDICAL OPTICS EXPRESS 2021; 12:2353-2372. [PMID: 33996234 PMCID: PMC8086453 DOI: 10.1364/boe.418070] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 03/13/2021] [Accepted: 03/16/2021] [Indexed: 05/22/2023]
Abstract
Retinal image-based eye motion measurement from scanned ophthalmic imaging systems, such as scanning laser ophthalmoscopy, has allowed for precise real-time eye tracking at sub-micron resolution. However, the constraints of real-time tracking result in a high error tolerance that is detrimental for some eye motion measurement and imaging applications. We show here that eye motion can be extracted from image sequences when these constraints are lifted and all data are available at the time of registration. Our approach identifies and discards distorted frames, detects coarse motion to generate a synthetic reference frame, and then uses it for fine-scale motion tracking with improved sensitivity over a larger area. We demonstrate its application here to tracking scanning laser ophthalmoscopy (TSLO) and adaptive optics scanning light ophthalmoscopy (AOSLO), and show that it can successfully capture most of the eye motion across each image sequence, leaving only 0.1-3.4% of non-blink frames untracked, while simultaneously minimizing image distortions induced by eye motion. These improvements will facilitate precise measurement of fixational eye movements (FEMs) in TSLO and longitudinal tracking of individual cells in AOSLO.
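At its core, strip-based registration of this kind cuts each raster-scanned frame into thin horizontal strips and locates each strip in a reference frame by normalized cross-correlation (NCC); the strip's displacement from its nominal row is the eye-motion estimate for that instant of the scan. A brute-force sketch of the NCC search for a single strip (illustrative only; the paper's method adds distorted-frame rejection, synthetic-reference construction and coarse-to-fine tracking):

```python
import numpy as np

def register_strip(reference, strip, top):
    """Locate a horizontal strip, nominally taken at row `top` of a
    distorted frame, inside `reference` by exhaustive normalized
    cross-correlation. Returns the (dy, dx) translation of the strip."""
    sh, sw = strip.shape
    rh, rw = reference.shape
    s = (strip - strip.mean()) / strip.std()       # zero-mean, unit-variance strip
    best, best_pos = -np.inf, (0, 0)
    for y in range(rh - sh + 1):                   # exhaustive search over positions
        for x in range(rw - sw + 1):
            patch = reference[y:y + sh, x:x + sw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            ncc = (s * p).mean()                   # NCC score in [-1, 1]
            if ncc > best:
                best, best_pos = ncc, (y, x)
    return best_pos[0] - top, best_pos[1]          # motion relative to nominal row
```

In practice the search is restricted to a small window around the expected position (or done via FFT correlation) rather than exhaustively, but the displacement-per-strip output is the same.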
Affiliation(s)
- Min Zhang
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Denotes that each of these authors contributed equally to this work
- Elena Gofas-Salas
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Denotes that each of these authors contributed equally to this work
- Bianca T Leonard
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Yuhua Rui
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Eye center of Xiangya Hospital, Central South University; Hunan Key Laboratory of Ophthalmology; Changsha, Hunan 410008, China
- Valerie C Snyder
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Hope M Reecher
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Pedro Mecê
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Ethan A Rossi
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Department of Bioengineering, University of Pittsburgh Swanson School of Engineering, Pittsburgh, PA 15261, USA
- McGowan Institute for Regenerative Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
|
18
|
Abstract
The magnitude of variation in the gaze position signals recorded by an eye tracker, also known as its precision, is an important aspect of an eye tracker’s data quality. However, data quality of eye-tracking signals is still poorly understood. In this paper, we therefore investigate the following: (1) How do the various available measures characterizing eye-tracking data during fixation relate to each other? (2) How are they influenced by signal type? (3) What type of noise should be used to augment eye-tracking data when evaluating eye-movement analysis methods? To support our analysis, this paper presents new measures to characterize signal type and signal magnitude based on RMS-S2S and STD, two established measures of precision. Simulations are performed to investigate how each of these measures depends on the number of gaze position samples over which they are calculated, and to reveal how RMS-S2S and STD relate to each other and to measures characterizing the temporal spectrum composition of the recorded gaze position signal. Further empirical investigations were performed using gaze position data recorded with five eye trackers from human and artificial eyes. We found that although the examined eye trackers produce gaze position signals with different characteristics, the relations between precision measures derived from simulations are borne out by the data. We furthermore conclude that data with a range of signal type values should be used to assess the robustness of eye-movement analysis methods. We present a method for generating artificial eye-tracker noise of any signal type and magnitude.
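The two established precision measures the paper builds on are straightforward to state: RMS-S2S is the root-mean-square of sample-to-sample displacements, making it sensitive to fast, noise-like variation, while STD measures dispersion about the mean gaze position, making it sensitive to slow drift. A minimal sketch for 2-D gaze data (function names illustrative; the paper's combined-axis conventions are assumed here):

```python
import numpy as np

def rms_s2s(x, y):
    """Root-mean-square sample-to-sample displacement of a gaze trace:
    a velocity-like precision measure, dominated by fast noise."""
    dx, dy = np.diff(x), np.diff(y)
    return np.sqrt(np.mean(dx ** 2 + dy ** 2))

def std_precision(x, y):
    """Combined standard deviation of gaze position about its mean:
    a position-like precision measure, dominated by slow drift."""
    return np.sqrt(np.var(x) + np.var(y))
```

The ratio of the two is one way to characterize signal type: a slowly drifting signal has large STD relative to RMS-S2S, whereas white noise pushes the ratio the other way.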
|
19
|
Li Z, Pandiyan VP, Maloney-Bertelli A, Jiang X, Li X, Sabesan R. Correcting intra-volume distortion for AO-OCT using 3D correlation based registration. OPTICS EXPRESS 2020; 28:38390-38409. [PMID: 33379652 PMCID: PMC7771894 DOI: 10.1364/oe.410374] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/17/2020] [Revised: 11/15/2020] [Accepted: 11/19/2020] [Indexed: 05/18/2023]
Abstract
Adaptive optics (AO) based ophthalmic imagers, such as scanning laser ophthalmoscopes (SLO) and optical coherence tomography (OCT), are used to evaluate the structure and function of the retina with high contrast and resolution. Fixational eye movements during a raster-scanned image acquisition lead to intra-frame and intra-volume distortion, resulting in an inaccurate reproduction of the underlying retinal structure. For three-dimensional (3D) AO-OCT, segmentation-based and 3D correlation-based registration methods have been applied to correct eye motion and achieve a registered volume with high signal-to-noise ratio. This involves first selecting a reference volume, either manually or automatically, and then registering the image/volume stream against that reference using correlation methods. However, even within the chosen reference volume, involuntary eye motion persists and affects the accuracy with which the 3D retinal structure is finally rendered. In this article, we introduce reference-volume distortion correction for AO-OCT using 3D correlation-based registration and demonstrate a significant improvement in registration performance on several metrics. Conceptually, the general paradigm follows that developed previously for intra-frame distortion correction of 2D raster-scanned images, as in an AOSLO, but is extended here across all three spatial dimensions via 3D correlation analyses. We performed a frequency analysis of eye motion traces before and after intra-volume correction and revealed how periodic artifacts in eye motion estimates are effectively reduced upon correction. Further, we quantified how the intra-volume distortions and periodic artifacts in the eye motion traces generally decrease with increasing AO-OCT acquisition speed. Overall, 3D correlation-based registration with intra-volume correction significantly improved the visualization of retinal structure and the estimation of fixational eye movements.
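The translation stage of such 3D correlation-based registration can be sketched with FFT-based cross-correlation: the correlation of a volume with a reference peaks at the offset that best aligns them, and computing it in the Fourier domain evaluates all offsets at once. A minimal integer-shift sketch (illustrative only; the paper's method estimates motion per fast-scan slice rather than one rigid shift per volume, and handles sub-voxel offsets):

```python
import numpy as np

def shift_3d(ref, vol):
    """Estimate the integer (z, y, x) shift that, applied to `vol` with
    np.roll, best aligns it to `ref`, via FFT cross-correlation
    (circular/wrap-around boundary conditions)."""
    # cross-correlation computed as an inverse FFT of the cross-spectrum
    spec = np.fft.fftn(ref) * np.conj(np.fft.fftn(vol))
    corr = np.fft.ifftn(spec).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # fold peak indices past the midpoint back to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

Replacing the cross-spectrum with its phase-normalized version would give phase correlation, a common variant with a sharper peak; either way the per-axis peak location is the motion estimate.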
Affiliation(s)
- Zhenghan Li
- Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu, Sichuan 610209, China
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu, Sichuan 610209, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Department of Ophthalmology, University of Washington, Seattle, Washington 98109, USA
- These authors contributed equally to this work
- Vimal Prabhu Pandiyan
- Department of Ophthalmology, University of Washington, Seattle, Washington 98109, USA
- These authors contributed equally to this work
- Xiaoyun Jiang
- Department of Ophthalmology, University of Washington, Seattle, Washington 98109, USA
- Xinyang Li
- Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu, Sichuan 610209, China
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu, Sichuan 610209, China
- Ramkumar Sabesan
- Department of Ophthalmology, University of Washington, Seattle, Washington 98109, USA
|