1
de Ruyter van Steveninck J, Nipshagen M, van Gerven M, Güçlü U, Güçlüturk Y, van Wezel R. Gaze-contingent processing improves mobility, scene recognition and visual search in simulated head-steered prosthetic vision. J Neural Eng 2024; 21:026037. PMID: 38502957; DOI: 10.1088/1741-2552/ad357d.
Abstract
Objective. The enabling technology of visual prosthetics for the blind is making rapid progress. However, there are still uncertainties regarding the functional outcomes, which can depend on many design choices made during development. In visual prostheses with a head-mounted camera, a particularly challenging question is how to deal with the gaze-locked visual percept associated with spatial updating conflicts in the brain. The current study investigates a recently proposed compensation strategy based on gaze-contingent image processing with eye-tracking. Gaze-contingent processing is expected to reinforce natural-like visual scanning and to reestablish spatial updating based on eye movements. The beneficial effects remain to be investigated for daily life activities in complex visual environments. Approach. The current study evaluates the benefits of gaze-contingent processing versus gaze-locked and gaze-ignored simulations in the context of mobility, scene recognition and visual search, using a virtual reality simulated prosthetic vision paradigm with sighted subjects. Main results. Compared to gaze-locked vision, gaze-contingent processing was consistently found to improve speed in all experimental tasks, as well as the subjective quality of vision. Similar or further improvements were found in a control condition that ignores gaze-dependent effects, a simulation that is unattainable in clinical reality. Significance. Our results suggest that gaze-locked vision and spatial updating conflicts can be debilitating for complex visually guided activities of daily living such as mobility and orientation. Therefore, for prospective users of head-steered prostheses with an unimpaired oculomotor system, the inclusion of a compensatory eye-tracking system is strongly endorsed.
Affiliation(s)
- Mo Nipshagen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Marcel van Gerven
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Umut Güçlü
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Yağmur Güçlüturk
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Richard van Wezel
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Biomedical Signals and Systems Group, University of Twente, Enschede, The Netherlands
2
Full gaze contingency provides better reading performance than head steering alone in a simulation of prosthetic vision. Sci Rep 2021; 11:11121. PMID: 34045485; PMCID: PMC8160142; DOI: 10.1038/s41598-021-86996-4.
Abstract
The visual pathway is retinotopically organized and sensitive to gaze position, leading us to hypothesize that subjects using visual prostheses incorporating eye position would perform better on perceptual tasks than with devices that are merely head-steered. We had sighted subjects read sentences from the MNREAD corpus through a simulation of artificial vision under conditions of full gaze compensation and head-steered viewing. With 2000 simulated phosphenes, subjects (n = 23) were immediately able to read under full gaze compensation and were assessed at an equivalent visual acuity of 1.0 logMAR, but were nearly unable to perform the task under head-steered viewing. At the largest font size tested, 1.4 logMAR, subjects read at 59 WPM (50% of normal speed) with 100% accuracy under the full-gaze condition, but at 0.7 WPM (under 1% of normal) with below 15% accuracy under head steering. We conclude that gaze-compensated prostheses are likely to produce considerably better patient outcomes than those not incorporating eye movements.
3
Paraskevoudi N, Pezaris JS. Eye Movement Compensation and Spatial Updating in Visual Prosthetics: Mechanisms, Limitations and Future Directions. Front Syst Neurosci 2019; 12:73. PMID: 30774585; PMCID: PMC6368147; DOI: 10.3389/fnsys.2018.00073.
Abstract
Despite appearing automatic and effortless, perceiving the visual world is a highly complex process that depends on intact visual and oculomotor function. Understanding the mechanisms underlying spatial updating (i.e., gaze contingency) represents an important, yet unresolved issue in the fields of visual perception and cognitive neuroscience. Many questions regarding how visual information is updated as a function of eye movements remain open for research. Beyond its importance for basic research, gaze contingency represents a challenge for visual prosthetics as well. While most artificial vision studies acknowledge its importance in providing accurate visual percepts to blind implanted patients, the majority of current devices do not compensate for gaze position. To date, artificial percepts have been delivered to the blind population either by intraocular light-sensing circuitry or by external cameras. While the former commonly accounts for gaze shifts, the latter requires eye-tracking or similar technology in order to deliver percepts based on gaze position. Inspired by the need to overcome the hurdle of gaze contingency in artificial vision, we aim to provide a thorough overview of the research addressing the neural underpinnings of eye-movement compensation, as well as its relevance to visual prosthetics. The present review outlines what is currently known about the mechanisms underlying spatial updating and reviews the attempts of current visual prosthetic devices to overcome the hurdle of gaze contingency. We discuss the limitations of the current devices and highlight the need to use eye-tracking methodology in order to introduce gaze-contingent information into visual prosthetics.
Affiliation(s)
- Nadia Paraskevoudi
- Brainlab – Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- John S. Pezaris
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, United States
- Department of Neurosurgery, Harvard Medical School, Boston, MA, United States
4
Troncoso XG, McCamy MB, Jazi AN, Cui J, Otero-Millan J, Macknik SL, Costela FM, Martinez-Conde S. V1 neurons respond differently to object motion versus motion from eye movements. Nat Commun 2015; 6:8114. PMID: 26370518; PMCID: PMC4579399; DOI: 10.1038/ncomms9114.
Abstract
How does the visual system differentiate self-generated motion from motion in the external world? Humans can discern object motion from identical retinal image displacements induced by eye movements, but the brain mechanisms underlying this ability are unknown. Here we exploit the frequent production of microsaccades during ocular fixation in the primate to compare primary visual cortical responses to self-generated motion (real microsaccades) versus motion in the external world (object motion mimicking microsaccades). Real and simulated microsaccades were randomly interleaved in the same viewing condition, thereby producing equivalent oculomotor and behavioural engagement. Our results show that real microsaccades generate biphasic neural responses, consisting of a rapid increase in the firing rate followed by a slow and smaller-amplitude suppression that drops below baseline. Simulated microsaccades generate solely excitatory responses. These findings indicate that V1 neurons can respond differently to internally and externally generated motion, and expand V1's potential role in information processing and visual stability during eye movements.
Affiliation(s)
- Xoana G Troncoso
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA
- UNIC-CNRS (Unité de Neuroscience Information et Complexité, Centre National de la Recherche Scientifique), 1 Avenue de la Terrasse, 91198 Gif-sur-Yvette, France
- Michael B McCamy
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA
- Ali Najafian Jazi
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA
- Program in Neuroscience, Arizona State University, PO Box 874601, Tempe, Arizona 85287, USA
- Jie Cui
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA
- Jorge Otero-Millan
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA
- Department of Neurology, Johns Hopkins University, 600 N Wolfe Street, Baltimore, Maryland 21287, USA
- Stephen L Macknik
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA
- State University of New York (SUNY) Downstate Medical Center, 450 Clarkson Avenue, Brooklyn, New York 11203, USA
- Francisco M Costela
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA
- Program in Neuroscience, Arizona State University, PO Box 874601, Tempe, Arizona 85287, USA
- Susana Martinez-Conde
- Barrow Neurological Institute, 350 W Thomas Road, Phoenix, Arizona 85013, USA
- State University of New York (SUNY) Downstate Medical Center, 450 Clarkson Avenue, Brooklyn, New York 11203, USA
5
Howe PDL, Drew T, Pinto Y, Horowitz TS. Remapping attention in multiple object tracking. Vision Res 2011; 51:489-95. PMID: 21236290; PMCID: PMC3056938; DOI: 10.1016/j.visres.2011.01.001.
Abstract
Which coordinate system do we use to track moving objects? In a previous study using smooth pursuit eye movements, we argued that targets are tracked in both retinal (retinotopic) and scene-centered (allocentric) coordinates (Howe, Pinto, & Horowitz, 2010). However, multiple object tracking typically also elicits saccadic eye movements, which may change how object locations are represented. Observers fixated a cross while tracking three targets out of six identical disks confined to move within an imaginary square. The fixation cross alternated between two locations, requiring observers to make repeated saccades. By moving (or not moving) the imaginary square in sync with the fixation cross, we could disrupt either (or both) coordinate systems. Surprisingly, tracking performance was much worse when the objects moved with the fixation cross, although this manipulation preserved the retinal image across saccades, thereby avoiding the visual disruptions normally associated with saccades. Instead, tracking performance was best when the allocentric coordinate system was preserved, suggesting that target locations are maintained in that coordinate system across saccades. This is consistent with a theoretical framework in which the positions of a small set of attentional pointers are predictively updated in advance of a saccade.
6
Toscani M, Marzi T, Righi S, Viggiano MP, Baldassi S. Alpha waves: a neural signature of visual suppression. Exp Brain Res 2010; 207:213-9. PMID: 20972777; DOI: 10.1007/s00221-010-2444-7.
Abstract
Alpha waves are traditionally considered a passive consequence of the lack of stimulation of sensory areas. However, recent results have challenged this view by showing a modulation of alpha activity in cortical areas representing unattended information during active tasks. These data suggest that alpha waves support a 'gating function' on sensory stimulation that actively inhibits unattended information in attentional tasks. Visual suppression occurring during saccades and blinks entails an inhibition of incoming visual information, and it seems to occur at an early processing stage. In this study, we hypothesized that the neural mechanism through which the visual system exerts this inhibition is the active imposition of alpha oscillations in the occipital cortex, which in turn predicts an increment of alpha amplitude during visual suppression phenomena. We measured visual suppression occurring during short closures of the eyelids, a situation well suited for EEG recordings, and stimulated the retinae with an intra-oral light administered through the palate. In the behavioral experiment, detection thresholds were measured with eyes steady open and steady closed, showing a reduction of sensitivity in the latter case. In the EEG recordings performed under identical conditions, we found stronger alpha activity with closed eyes. Since the stimulation did not depend on whether the eyes were open or closed, we reasoned that this should be a central effect, probably due to a functional role of alpha oscillation, in agreement with the 'gating function' theory.
Affiliation(s)
- Matteo Toscani
- Abteilung Allgemeine Psychologie, Justus-Liebig-Universitat, Otto-Behaghel-Str. 10, 35394, Giessen, Germany
7
Retinotopy of the face aftereffect. Vision Res 2008; 48:42-54. PMID: 18078975; DOI: 10.1016/j.visres.2007.10.028.
Abstract
Physiological results for the size of face-specific units in inferotemporal cortex (IT) support an extraordinarily large range of possible sizes, from 2.5 degrees to 30 degrees or more. We use a behavioral test of face-specific aftereffects to measure the face analysis regions and find a coarse retinotopy consistent with receptive fields of intermediate size (10-12 degrees at 3 degrees eccentricity). In the first experiment, observers were adapted to a single face at 3 degrees from fixation. A test (a morph of the face and its anti-face) was then presented at different locations around fixation and subjects classified it as face or anti-face. The face aftereffect (FAE) was not constant at all test locations: it dropped to half its maximum value for tests 5 degrees from the adapting location. Simultaneous adaptation to both a face and its anti-face, placed at opposite locations across fixation, produced two separate regions of opposite aftereffects. However, with four stimuli, faces alternating with anti-faces equally spaced around fixation, the FAE was greatly reduced at all locations, implying a fairly coarse localization of the aftereffect. In the second experiment, observers adapted to a face and its anti-face presented either simultaneously or in alternation. Results showed that the simultaneous presentation of a face and its anti-face leads to stronger FAEs than sequential presentation, suggesting that face processing has a dynamic nature and its region of analysis is sharpened when there is more than one face in the scene. In the final experiment, a face and two anti-face flankers with different spatial offsets were presented during adaptation and the FAE was measured at the face location. Results showed that the FAE at the face location was inhibited more as the distance of the anti-face flankers to the face stimulus was reduced. This confirms the spatial extent of face analysis regions in a test with a fixed number of stimuli where only distance varied.
9
Hafed ZM, Krauzlis RJ. Ongoing eye movements constrain visual perception. Nat Neurosci 2006; 9:1449-57. PMID: 17028586; DOI: 10.1038/nn1782.
Abstract
Eye movements markedly change the pattern of retinal stimulation. To maintain stable vision, the brain possesses a variety of mechanisms that compensate for the retinal consequences of eye movements. However, eye movements may also be important for resolving the ambiguities often posed by visual inputs, because motor commands contain additional spatial information that is necessarily absent from retinal signals. To test this possibility, we used a perceptually ambiguous stimulus composed of four line segments, consistent with a shape whose vertices were occluded. In a passive condition, subjects fixated a spot while the shape translated along a certain trajectory. In several active conditions, the spot, occluder and shape translated such that when subjects tracked the spot, they experienced the same retinal stimulus as during fixation. We found that eye movements significantly promoted perceptual coherence compared to fixation. These results indicate that eye movement information constrains the perceptual interpretation of visual inputs.
Affiliation(s)
- Ziad M Hafed
- Salk Institute for Biological Studies, 10010 North Torrey Pines Road, La Jolla, California 92037, USA.
10
Karmeier K, van Hateren JH, Kern R, Egelhaaf M. Encoding of Naturalistic Optic Flow by a Population of Blowfly Motion-Sensitive Neurons. J Neurophysiol 2006; 96:1602-14. PMID: 16687623; DOI: 10.1152/jn.00023.2006.
Abstract
In sensory systems, information is encoded by the activity of populations of neurons. To analyze the coding properties of neuronal populations, the sensory stimuli used have typically been much simpler than those encountered in real life. Only recently has it become possible to stimulate visual interneurons of the blowfly with naturalistic visual stimuli reconstructed from eye movements measured during free flight. We therefore investigated, using naturalistic optic flow, the coding properties of a small neuronal population of identified visual interneurons in the blowfly, the so-called VS and HS neurons. These neurons are motion sensitive and directionally selective and are assumed to extract information about the animal's self-motion from optic flow. We show that the responses of VS and HS neurons are mainly shaped by the characteristic dynamical properties of the fly's saccadic flight and gaze strategy. Individual neurons encode information about both the rotational and the translational components of the animal's self-motion; thus the information carried by individual neurons is ambiguous. These ambiguities can be reduced by considering neuronal population activity: the joint responses of different subpopulations of VS and HS neurons can provide unambiguous information about the three rotational and the three translational components of the animal's self-motion and also, indirectly, about the three-dimensional layout of the environment.
Affiliation(s)
- K Karmeier
- Department of Neurobiology, Faculty for Biology, Bielefeld University, Bielefeld, Germany
11
Bruno A, Brambati SM, Perani D, Morrone MC. Development of saccadic suppression in children. J Neurophysiol 2006; 96:1011-7. PMID: 16407425; DOI: 10.1152/jn.01179.2005.
Abstract
We measured saccadic suppression in adolescent children and young adults using spatially curtailed low spatial frequency stimuli. For both groups, sensitivity for color-modulated stimuli was unchanged during saccades. Sensitivity for luminance-modulated stimuli was greatly reduced during saccades in both groups but far more for adolescents than for young adults. Adults' suppression was on average a factor of about 3, whereas that for the adolescent group was closer to a factor of 10. The specificity of the suppression to luminance-modulated stimuli excludes generic explanations such as task difficulty and attention. We suggest that the enhanced suppression in adolescents results from the immaturity of the ocular-motor system at that age.
12
Tcheang L, Gilson SJ, Glennerster A. Systematic distortions of perceptual stability investigated using immersive virtual reality. Vision Res 2005; 45:2177-89. PMID: 15845248; PMCID: PMC2833395; DOI: 10.1016/j.visres.2005.02.006.
Abstract
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers underestimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an underestimation of distance walked. We discuss implications for theories of a task-independent representation of visual space.
Affiliation(s)
- Lili Tcheang
- University Laboratory of Physiology, Parks Road, Oxford, OX1 3PT
- Stuart J. Gilson
- University Laboratory of Physiology, Parks Road, Oxford, OX1 3PT
13
Lee J, Lee C. Changes in visual motion perception before saccadic eye movements. Vision Res 2005; 45:1447-57. PMID: 15743614; DOI: 10.1016/j.visres.2004.12.008.
Abstract
Execution of a saccadic eye movement influences subsequent motion perception [Park, J., Lee, J., & Lee, C. (2001). Non-veridical visual motion perception immediately after saccades. Vision Research, 41, 3751-3761]. In the current study, we determined the pattern of perceptual changes for visual motion presented before saccades. The accuracy of judging the direction of a moving target varied depending on the direction of target motion. Based on the pattern of judgment errors, the direction associated with no error, or DNE, could be defined. When a moving target was seen by stationary eyes, the DNE was roughly vertical, and the perceptual judgment for adjacent directions was biased away from the vertical direction. When the same visual motion was seen before horizontal saccades, the DNE shifted in the direction of the impending saccade, and the perceptual judgment of adjacent directions was shifted away from the new DNE, thus shifting the perceived direction of the vertical in the direction opposite to the saccade. These changes improved the accuracy of direction judgment for visual motion in the visual field ipsiversive to impending saccades. In addition to the shift of the DNE, perceptual judgment for oblique directions became near veridical before saccades, which we call the anti-oblique effect. These results suggest that motion perception is dynamically and anisotropically modulated at the time of saccades, and the DNE shift may be part of a process that dynamically reallocates computational resources, improving perceptual performance in advance for sensory events to be acquired by impending saccades.
Affiliation(s)
- Jungah Lee
- Department of Psychology, Seoul National University, Kwanak, Seoul 151-742, Republic of Korea