1
Moon B, Linebach G, Yang A, Jenks SK, Rucci M, Poletti M, Rolland JP. High refresh rate display for natural monocular viewing in AOSLO psychophysics experiments. Opt Express 2024; 32:31142-31161. [PMID: 39573257] [PMCID: PMC11595291] [DOI: 10.1364/oe.529199]
Abstract
By combining an external display operating at 360 frames per second with an adaptive optics scanning laser ophthalmoscope (AOSLO) for human foveal imaging, we demonstrate color stimulus delivery at high spatial and temporal resolution in AOSLO psychophysics experiments. A custom pupil relay enables viewing of the stimulus through a 3-mm effective pupil diameter and provides refractive error correction from -8 to +4 diopters. Performance of the assembled and aligned pupil relay was validated by measuring the wavefront error across the field of view and correction range, and the as-built Strehl ratio was 0.64 or better. High-acuity stimuli were rendered on the external display and imaged through the pupil relay to demonstrate that spatial frequencies up to 54 cycles per degree, corresponding to 20/11 visual acuity, are resolved. The completed external display was then used to render fixation markers across the field of view of the monitor, and a continuous retinal montage spanning 9.4 by 5.4 degrees of visual angle was acquired with the AOSLO. We conducted eye-tracking experiments during free-viewing and high-acuity tasks with polychromatic images presented on the external display. Sub-arcminute eye position uncertainty was achieved over a 1.5 by 1.5-degree trackable range, enabling precise localization of the line of sight on the stimulus while simultaneously imaging the fine structure of the human central fovea. This high refresh rate display overcomes the temporal, spectral, and field of view limitations of AOSLO-based stimulus presentation, enabling natural monocular viewing of stimuli in psychophysics experiments conducted with AOSLO.
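The correspondence reported above between 54 cycles per degree and 20/11 acuity follows from the standard convention that 20/20 vision resolves 30 cycles per degree (a 1-arcmin stroke width, i.e., 2 arcmin per full cycle). A minimal sketch of that conversion; the function name is ours, not from the paper:

```python
def snellen_from_cpd(cpd: float) -> str:
    """Convert a resolved spatial frequency (cycles/degree) to Snellen
    notation, assuming 20/20 corresponds to 30 cycles/degree."""
    return f"20/{20 * 30.0 / cpd:.0f}"

print(snellen_from_cpd(54.0))  # 20/11, matching the acuity reported above
print(snellen_from_cpd(30.0))  # 20/20 reference point
```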
Affiliation(s)
- Benjamin Moon
- The Institute of Optics, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Glory Linebach
- The Institute of Optics, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Angelina Yang
- The Institute of Optics, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Samantha K. Jenks
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Michele Rucci
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Martina Poletti
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Department of Neuroscience, University of Rochester, Rochester, NY 14627, USA
- Jannick P. Rolland
- The Institute of Optics, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Department of Biomedical Engineering, University of Rochester, Rochester, NY 14627, USA
2
Csoba I, Kunkli R. Rendering algorithms for aberrated human vision simulation. Vis Comput Ind Biomed Art 2023; 6:5. [PMID: 36930412] [PMCID: PMC10023823] [DOI: 10.1186/s42492-023-00132-9]
Abstract
Vision-simulated imagery, the process of generating images that mimic the human visual system, is a valuable tool with a wide spectrum of possible applications, including visual acuity measurements, personalized planning of corrective lenses and surgeries, vision-correcting displays, vision-related hardware development, and extended reality discomfort reduction. A critical property of human vision is that it is imperfect because of highly influential wavefront aberrations that vary from person to person. This study provides an overview of the existing computational image generation techniques that properly simulate human vision in the presence of wavefront aberrations. These algorithms typically apply ray tracing with a detailed description of the simulated eye, or utilize the point-spread function of the eye to perform convolution on the input image. Based on this description of the vision simulation techniques, several of their characteristic features are evaluated and some potential application areas and research directions are outlined.
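The PSF-convolution family of algorithms surveyed above can be sketched in a few lines of Fourier optics: build a generalized pupil function from the wavefront aberration, Fourier-transform it to obtain the PSF, and convolve the input image with that PSF. A numpy-only sketch with pure defocus as the aberration; all parameter values and function names are illustrative, not taken from the review:

```python
import numpy as np

def defocus_psf(n=256, pupil_fraction=0.4, defocus_waves=1.0):
    """PSF of a circular pupil with pure defocus: the generalized pupil
    P * exp(i*2*pi*W) is Fourier-transformed and its magnitude squared."""
    x = np.linspace(-1, 1, n)
    xx, yy = np.meshgrid(x, x)
    rho = np.hypot(xx, yy) / pupil_fraction     # normalized pupil radius
    pupil = (rho <= 1.0).astype(float)
    wavefront = defocus_waves * rho**2          # defocus term, in waves
    field = pupil * np.exp(1j * 2 * np.pi * wavefront)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()                      # normalize to unit volume

def simulate_vision(image, psf):
    """Circular convolution of the scene with the eye's PSF via the FFT."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * otf))

rng = np.random.default_rng(0)
scene = rng.random((256, 256))                  # stand-in for an input image
retinal = simulate_vision(scene, defocus_psf())
```

Because the PSF is normalized to unit volume, the simulated retinal image preserves the mean luminance of the scene while attenuating its high spatial frequencies.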
Affiliation(s)
- István Csoba
- Faculty of Informatics, University of Debrecen, Debrecen 4028, Hungary
- Doctoral School of Informatics, University of Debrecen, Debrecen 4028, Hungary
- Roland Kunkli
- Faculty of Informatics, University of Debrecen, Debrecen 4028, Hungary
3
Goossens T, Lyu Z, Ko J, Wan GC, Farrell J, Wandell B. Ray-transfer functions for camera simulation of 3D scenes with hidden lens design. Opt Express 2022; 30:24031-24047. [PMID: 36225073] [DOI: 10.1364/oe.457496]
Abstract
Combining image sensor simulation tools with physically based ray tracing enables the design and evaluation (soft prototyping) of novel imaging systems. These methods can also synthesize physically accurate, labeled images for machine learning applications. One practical limitation of soft prototyping has been simulating the optics precisely: lens manufacturers generally prefer to keep lens designs confidential. We present a pragmatic solution to this problem using a black-box lens model in Zemax; such models provide the necessary optical information while preserving the lens designer's intellectual property. First, we describe and provide software to construct a polynomial ray-transfer function that characterizes how rays entering the lens at any position and angle subsequently exit the lens. We implement the ray-transfer calculation as a camera model in PBRT and confirm that the PBRT ray-transfer calculations match the Zemax lens calculations for edge spread functions and relative illumination.
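The core idea, fitting a polynomial that maps entrance rays to exit rays without exposing the lens prescription, can be illustrated with synthetic data. In this sketch the "hidden lens" is a simple paraxial ABCD system rather than a Zemax black-box model, and every name and value below is ours, chosen only to make the fit checkable:

```python
import numpy as np

# Stand-in for the hidden lens: a hypothetical paraxial ABCD system. In the
# paper, exit rays come from a Zemax black-box model; here we generate them
# from a known system so the fitted ray-transfer function can be verified.
A, B, C, D = 1.0, 5.0, -1 / 50.0, 0.9

def hidden_lens(y, u):
    """Map an entrance ray (height y, angle u) to the exit ray."""
    return A * y + B * u, C * y + D * u

rng = np.random.default_rng(1)
y_in = rng.uniform(-5, 5, 500)        # entrance heights (mm)
u_in = rng.uniform(-0.1, 0.1, 500)    # entrance angles (rad)
y_out, u_out = hidden_lens(y_in, u_in)

# Least-squares fit of a degree-2 polynomial ray-transfer function.
basis = np.column_stack([np.ones_like(y_in), y_in, u_in,
                         y_in**2, y_in * u_in, u_in**2])
coef_y, *_ = np.linalg.lstsq(basis, y_out, rcond=None)
coef_u, *_ = np.linalg.lstsq(basis, u_out, rcond=None)

# The fitted coefficients now predict exit rays without the prescription.
max_err = np.max(np.abs(basis @ coef_y - y_out))
```

For a real (nonlinear) lens, the polynomial degree would be raised until the fit error falls below a tolerance; the system here is paraxial, so a degree-1 subset of the basis already fits it essentially exactly.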
4
Wandell BA, Brainard DH, Cottaris NP. Visual encoding: Principles and software. Prog Brain Res 2022; 273:199-229. [DOI: 10.1016/bs.pbr.2022.04.006]
5
Cottaris NP, Wandell BA, Rieke F, Brainard DH. A computational observer model of spatial contrast sensitivity: Effects of photocurrent encoding, fixational eye movements, and inference engine. J Vis 2020; 20(7):17. [PMID: 32692826] [PMCID: PMC7424933] [DOI: 10.1167/jov.20.7.17]
Abstract
We have recently shown that the relative spatial contrast sensitivity function (CSF) of a computational observer operating on the cone mosaic photopigment excitations of a stationary retina has the same shape as that of human subjects. Absolute human sensitivity, however, is 5- to 10-fold lower than that of the computational observer. Here we model how additional known features of early vision affect the CSF: fixational eye movements and the conversion of cone photopigment excitations to cone photocurrents (phototransduction). For a computational observer that uses a linear classifier applied to the responses of a stimulus-matched linear filter, fixational eye movements substantially change the shape of the CSF by reducing sensitivity above 10 c/deg. For a translation-invariant computational observer that operates on the squared responses of a quadrature pair of linear filters, the CSF shape is little changed by eye movements, but there is a twofold reduction in sensitivity. Phototransduction dynamics introduce an additional twofold sensitivity decrease. Hence, the combined effects of fixational eye movements and phototransduction bring the absolute CSF of the translation-invariant computational observer to within a factor of 1 to 2 of the human CSF. We note that the human CSF depends on processing of the retinal representation by many thalamo-cortical neurons, which are individually quite noisy. Our modeling suggests that the net effect of post-retinal noise on contrast-detection performance, when considered at the neural population and behavioral level, is quite small: the inference mechanisms that determine the CSF, presumably in cortex, make efficient use of the information carried by the cone photocurrents of the fixating eye.
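The translation-invariant observer described above pools the squared outputs of a quadrature pair of filters, which makes its response independent of stimulus phase, i.e., of small retinal translations. A toy 1D sketch of why that works; the filter shapes and parameters are illustrative only, not the paper's model:

```python
import numpy as np

n = 512
x = np.arange(n) / n                      # one window of "visual field"
f = 8                                     # grating frequency, cycles/window
envelope = np.hanning(n)                  # localized receptive-field window
filt_cos = envelope * np.cos(2 * np.pi * f * x)   # quadrature pair of
filt_sin = envelope * np.sin(2 * np.pi * f * x)   # linear filters

def linear_response(stim):
    """Single matched linear filter: depends on stimulus phase/position."""
    return stim @ filt_cos

def energy_response(stim):
    """Quadrature-pair energy: sum of the two squared filter outputs."""
    return (stim @ filt_cos) ** 2 + (stim @ filt_sin) ** 2

def grating(phase):
    return np.cos(2 * np.pi * f * x + phase)

phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)
lin = np.array([linear_response(grating(p)) for p in phases])
eng = np.array([energy_response(grating(p)) for p in phases])
# lin swings with phase; eng stays nearly constant (translation invariant)
```

Shifting the grating (changing its phase) drives the single linear filter from its maximum response down through zero, while the pooled energy is approximately cos^2 + sin^2 = constant.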
6
Xu M, Huang H, Hua H. Analytical model for the perceived retinal image formation of 3D display systems. Opt Express 2020; 28:38029-38048. [PMID: 33379624] [DOI: 10.1364/oe.408585]
Abstract
The optical design process of conventional stereoscope-type head-mounted displays (HMDs) for virtual and augmented reality applications typically neglects the inherent aberrations of the eye optics and the refractive errors of a viewer, which misses the opportunity to produce personal devices for optimal visual experiences. A few research efforts have simulated the retinal image formation process for emerging 3D display systems, such as light field displays, that require modeling the eye optics to complete the image formation process. However, the existing works are generally specific to one type of display method, do not provide a generalized framework for comparing different display methods, and often require at least two different software platforms for implementation, which makes it challenging to handle massive data and to compensate for wavefront aberrations induced by the display engine or by eye refractive errors. To overcome those limits, we present a generalized analytical model for accurately simulating visual responses such as the retinal PSF, MTF, and image formation of different types of 2D and 3D display systems. This analytical model accurately simulates the retinal responses when viewing a given display system, accounting for the residual aberrations of schematic eye models that match statistical clinical measurements, accommodative changes of the eye as required, the effects of different eye refractive errors specific to viewers, and the effects of various wavefront aberrations inherited from a display engine. We further describe the numerical implementation of this analytical model for simulating the perceived retinal image with different types of HMD systems in a single computational platform. Finally, with a test setup, we numerically demonstrate the application of this analytical model to simulating the perceived retinal image and accommodative response, and to investigating the impact of eye refractive errors on the perceived retinal image, for multifocal-plane displays, integral-imaging-based light field displays, computational multilayer light field displays, and, for comparison, the stereoscope and natural viewing.
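A core building block of such a model, going from a viewer's refractive error in diopters to a retinal MTF through the generalized pupil function, can be sketched as follows. This is a generic Fourier-optics sketch under our own simplifying assumptions (single wavelength, pure defocus, no higher-order aberrations), not the authors' implementation:

```python
import numpy as np

def retinal_mtf(defocus_D=0.5, pupil_mm=3.0, wavelength_um=0.55, n=512):
    """MTF of a model eye with pure defocus. The wavefront error at the
    pupil edge is W = defocus_D * a**2 / 2, with pupil radius a in meters."""
    a = pupil_mm / 2 * 1e-3                       # pupil radius, m
    w_edge = defocus_D * a**2 / 2 * 1e6           # edge OPD, micrometers
    x = np.linspace(-2, 2, n)                     # pupil plane, 2x padding
    xx, yy = np.meshgrid(x, x)
    rho = np.hypot(xx, yy)                        # normalized pupil radius
    pupil = (rho <= 1.0).astype(float)
    phase = 2 * np.pi * (w_edge / wavelength_um) * rho**2
    field = pupil * np.exp(1j * phase)            # generalized pupil
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    otf = np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))
    return np.abs(otf)                            # MTF; equals 1.0 at DC

mtf_sharp = retinal_mtf(defocus_D=0.0)   # diffraction-limited model eye
mtf_blur = retinal_mtf(defocus_D=1.0)    # 1 D uncorrected refractive error
# aberrations can only lower the MTF: mtf_blur <= mtf_sharp everywhere
```

For a 3-mm pupil, 1 D of defocus corresponds to about 1.1 micrometers of edge OPD, roughly two waves at 550 nm, which visibly depresses the mid-frequency MTF relative to the diffraction-limited case.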