1. Topographic organization of eye-position dependent gain fields in human visual cortex. Nat Commun 2022; 13:7925. [PMID: 36564372; PMCID: PMC9789150; DOI: 10.1038/s41467-022-35488-8]
Abstract
The ability to move has presented animals with a problem of sensory ambiguity: the position of an external stimulus can change over time because the stimulus moved, or because the animal moved its receptors. This ambiguity can be resolved by a change in neural response gain as a function of receptor orientation. Here, we developed an encoding model to capture gain modulation of visual responses in high-field (7 T) fMRI data. We characterized population eye-position dependent gain fields (pEGFs). The information contained in the pEGFs allowed us to reconstruct eye positions over time across the visual hierarchy. We discovered a systematic distribution of pEGF centers: pEGF centers shift from contra- to ipsilateral with increasing pRF eccentricity. This topographic organization suggests that signals beyond pure retinotopy are accessible early in the visual hierarchy, providing the potential to resolve sensory ambiguity and optimize sensory processing for functionally relevant behavior.
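The abstract does not specify the form of the encoding model; the following is only a minimal sketch of the general idea of a multiplicative eye-position gain field, with a Gaussian pRF and a linear gain ramp that are purely illustrative assumptions:

```python
import numpy as np

def prf_response(stim_pos, prf_center, prf_size):
    # Gaussian population receptive field (pRF) over retinotopic position.
    return np.exp(-((stim_pos - prf_center) ** 2) / (2 * prf_size ** 2))

def egf_gain(eye_pos, egf_center, egf_slope):
    # Illustrative eye-position gain field: a linear ramp in gaze direction.
    return 1.0 + egf_slope * (eye_pos - egf_center)

def voxel_response(stim_pos, eye_pos):
    # Multiplicative interaction: identical retinal input yields different
    # responses at different eye positions, which is what makes eye
    # position recoverable from population activity.
    return prf_response(stim_pos, prf_center=2.0, prf_size=1.5) * \
        egf_gain(eye_pos, egf_center=0.0, egf_slope=0.05)

# Same retinal stimulus, two gaze directions -> different responses.
r_left = voxel_response(stim_pos=2.0, eye_pos=-10.0)   # gain 0.5
r_right = voxel_response(stim_pos=2.0, eye_pos=10.0)   # gain 1.5
```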

2. Parker PRL, Abe ETT, Leonard ESP, Martins DM, Niell CM. Joint coding of visual input and eye/head position in V1 of freely moving mice. Neuron 2022; 110:3897-3906.e5. [PMID: 36137549; PMCID: PMC9742335; DOI: 10.1016/j.neuron.2022.08.029]
Abstract
Visual input during natural behavior depends strongly on movements of the eyes and head, but how information about eye and head position is integrated with visual processing during free movement is unknown, because visual physiology is generally performed under head fixation. To address this, we performed single-unit electrophysiology in V1 of freely moving mice while simultaneously measuring the mouse's eye position, head orientation, and the visual scene from the mouse's perspective. From these measures, we mapped spatiotemporal receptive fields during free movement based on the gaze-corrected visual input. Furthermore, we found that a significant fraction of neurons was tuned for eye and head position, and in the majority of modulated neurons these signals were integrated with visual responses through a multiplicative mechanism. These results provide new insight into coding in mouse V1 and, more generally, a paradigm for investigating visual physiology under natural conditions, including active sensing and ethological behavior.
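The multiplicative integration reported above can be illustrated with synthetic data. This is not the authors' analysis code, and the tuning ranges are invented; it only shows why a multiplicative model outperforms an additive one when firing rates are truly gain-modulated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic trials: firing rate = visual drive x eye-position gain
# (multiplicative ground truth; ranges invented for illustration).
visual = rng.uniform(1.0, 5.0, 1000)
eye_gain = rng.uniform(0.5, 1.5, 1000)
rate = visual * eye_gain

# Additive model: rate ~ a*visual + b*eye_gain + c, fit by least squares.
X_add = np.column_stack([visual, eye_gain, np.ones_like(visual)])
beta, *_ = np.linalg.lstsq(X_add, rate, rcond=None)
add_err = np.mean((rate - X_add @ beta) ** 2)

# Multiplicative model predicts the product directly.
mult_err = np.mean((rate - visual * eye_gain) ** 2)
```

The additive fit is left with the irreducible interaction term (v - v̄)(g - ḡ), while the multiplicative model captures the rates exactly.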
Affiliation(s)
- Philip R L Parker: Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Elliott T T Abe: Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Emmalyn S P Leonard: Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Dylan M Martins: Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Cristopher M Niell: Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA

3. Benucci A. Motor-related signals support localization invariance for stable visual perception. PLoS Comput Biol 2022; 18:e1009928. [PMID: 35286305; PMCID: PMC8947590; DOI: 10.1371/journal.pcbi.1009928]
Abstract
Our ability to perceive a stable visual world despite continuous movements of the body, head, and eyes has long puzzled neuroscientists. We reformulated this problem in the context of hierarchical convolutional neural networks (CNNs), whose architectures were inspired by the hierarchical signal processing of the mammalian visual system, and examined perceptual stability as an optimization process that identifies image-defining features for accurate image classification in the presence of movements. Movement signals, multiplexed with visual inputs along overlapping convolutional layers, aided classification invariance for shifted images by making classification faster to learn and more robust to input noise. Classification invariance was reflected in activity manifolds associated with image categories that emerged in late CNN layers, and in network units that acquired movement-associated activity modulations like those observed experimentally during saccadic eye movements. Our findings provide a computational framework that unifies a multitude of biological observations on perceptual stability under optimality principles for image classification in artificial neural networks.
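One simple way to multiplex movement signals with visual inputs in a CNN is to broadcast each movement component into a constant feature map and append it as an extra channel. The paper's exact architecture may differ; this is only an illustrative sketch of the multiplexing step:

```python
import numpy as np

def multiplex_movement(feature_maps, movement):
    # feature_maps: (channels, H, W) activations of a convolutional layer.
    # movement: motor-related signals (e.g., horizontal/vertical eye shift).
    # Each component is broadcast into a constant spatial map and appended
    # as an extra channel for the next layer to convolve over.
    m = np.asarray(movement, dtype=float)
    c, h, w = feature_maps.shape
    movement_maps = np.broadcast_to(m[:, None, None], (m.size, h, w))
    return np.concatenate([feature_maps, movement_maps], axis=0)

features = np.random.default_rng(1).standard_normal((8, 16, 16))
combined = multiplex_movement(features, movement=[0.5, -0.25])
```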
Affiliation(s)
- Andrea Benucci: RIKEN Center for Brain Science, Wako-shi, Japan; Department of Mathematical Informatics, Graduate School of Information Science and Technology, University of Tokyo, Tokyo, Japan

4. Cottereau BR, Trotter Y, Durand JB. An egocentric straight-ahead bias in primate's vision. Brain Struct Funct 2021; 226:2897-2909. [PMID: 34120262; PMCID: PMC8541962; DOI: 10.1007/s00429-021-02314-8]
Abstract
As we plan to reach for or manipulate objects, we generally orient our body to face them. Other objects occupying the same portion of space will likely represent potential obstacles for the intended action. Thus, either as targets or as obstacles, objects located straight in front of us are often endowed with a special behavioral status. Here, we review a set of recent electrophysiological, imaging, and behavioral studies providing converging evidence that objects lying straight ahead receive privileged visual processing. More precisely, these works collectively demonstrate that when gaze steers central vision away from the straight-ahead direction, that direction is still prioritized in peripheral vision. Straight-ahead objects evoke (1) stronger responses in macaque peripheral V1 neurons, (2) stronger EEG and fMRI activations across the human visual cortex, and (3) faster reactive hand and eye movements. We discuss the functional implications and underlying mechanisms of this phenomenon, and propose that it can be considered a new type of visuospatial attentional mechanism, distinct from the previously documented classes of endogenous and exogenous attention.
Affiliation(s)
- Benoit R Cottereau: Centre de Recherche Cerveau et Cognition, Université de Toulouse, 31052 Toulouse, France; Centre National de la Recherche Scientifique, 31055 Toulouse, France
- Yves Trotter: Centre de Recherche Cerveau et Cognition, Université de Toulouse, 31052 Toulouse, France; Centre National de la Recherche Scientifique, 31055 Toulouse, France
- Jean-Baptiste Durand: Centre de Recherche Cerveau et Cognition, Université de Toulouse, 31052 Toulouse, France; Centre National de la Recherche Scientifique, 31055 Toulouse, France

5. Mallory CS, Hardcastle K, Campbell MG, Attinger A, Low IIC, Raymond JL, Giocomo LM. Mouse entorhinal cortex encodes a diverse repertoire of self-motion signals. Nat Commun 2021; 12:671. [PMID: 33510164; PMCID: PMC7844029; DOI: 10.1038/s41467-021-20936-8]
Abstract
Neural circuits generate representations of the external world from multiple information streams. The navigation system provides an exceptional lens through which we may gain insights about how such computations are implemented. Neural circuits in the medial temporal lobe construct a map-like representation of space that supports navigation. This computation integrates multiple sensory cues, and, in addition, is thought to require cues related to the individual's movement through the environment. Here, we identify multiple self-motion signals, related to the position and velocity of the head and eyes, encoded by neurons in a key node of the navigation circuitry of mice, the medial entorhinal cortex (MEC). The representation of these signals is highly integrated with other cues in individual neurons. Such information could be used to compute the allocentric location of landmarks from visual cues and to generate internal representations of space.
Affiliation(s)
- Caitlin S Mallory: Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Kiah Hardcastle: Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Malcolm G Campbell: Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Alexander Attinger: Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Isabel I C Low: Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Jennifer L Raymond: Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Lisa M Giocomo: Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA

6. Whitwell RL, Sperandio I, Buckingham G, Chouinard PA, Goodale MA. Grip Constancy but Not Perceptual Size Constancy Survives Lesions of Early Visual Cortex. Curr Biol 2020; 30:3680-3686.e5. [PMID: 32735814; DOI: 10.1016/j.cub.2020.07.026]
Abstract
Object constancies are central constructs in theories of visual phenomenology. A powerful example is "size constancy," in which the perceived size of an object remains stable despite changes in viewing distance [1-4]. Evidence from neuropsychology [5], neuroimaging [6-11], transcranial magnetic stimulation [12, 13], single-unit and lesion studies in monkeys [14-20], and computational modeling [21] suggests that re-entrant processes involving reciprocal interactions between primary visual cortex (V1) and extrastriate visual areas [22-26] play an essential role in mediating size constancy. It is seldom appreciated, however, that object constancies must also operate for the visual guidance of goal-directed action. For example, when reaching out to pick up an object, the hand's in-flight aperture scales with the size of the goal object [27-30] and is refractory to the decrease in retinal-image size with increased viewing distance [31-41] (Figure 1), a phenomenon we call "grip constancy." Does grip constancy, like perceptual constancy, depend on V1, or can it be mediated by pathways that bypass V1 altogether? We tested these possibilities in an individual, M.C., who has bilateral lesions encompassing V1 and much of the ventral visual stream. We show that her perceptual estimates of object size co-vary with retinal-image size rather than real-world size as viewing distance varies. In contrast, M.C. shows near-normal scaling of in-flight grasp aperture to object size despite changes in viewing distance. Thus, although early visual cortex is necessary for perceptual object constancy, it is unnecessary for grip constancy, which is mediated instead by separate visual inputs to dorsal-stream visuomotor areas [42-48].
Affiliation(s)
- Robert L Whitwell: Department of Psychology, The University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Irene Sperandio: Department of Psychology and Cognitive Science, University of Trento, Rovereto 38068, Italy
- Gavin Buckingham: Department of Sport and Health Sciences, University of Exeter, Exeter EX1 2LU, UK
- Philippe A Chouinard: Department of Psychology and Counselling, La Trobe University, Bendigo 3550, Australia
- Melvyn A Goodale: Brain and Mind Institute, Department of Psychology, The University of Western Ontario, London, ON N6A 5C2, Canada

7. Schneider L, Dominguez-Vargas AU, Gibson L, Kagan I, Wilke M. Eye position signals in the dorsal pulvinar during fixation and goal-directed saccades. J Neurophysiol 2020; 123:367-391. [DOI: 10.1152/jn.00432.2019]
Abstract
Sensorimotor cortical areas contain eye position information thought to ensure perceptual stability across saccades and underlie spatial transformations supporting goal-directed actions. One pathway by which eye position signals could be relayed to and across cortical areas is via the dorsal pulvinar. Several studies have demonstrated saccade-related activity in the dorsal pulvinar, and we have recently shown that many neurons exhibit postsaccadic spatial preference. In addition, dorsal pulvinar lesions lead to gaze-holding deficits expressed as nystagmus or ipsilesional gaze bias, prompting us to investigate the effects of eye position. We tested three starting eye positions (−15°, 0°, 15°) in monkeys performing a visually cued memory saccade task. We found two main types of gaze dependence. First, ~50% of neurons showed dependence on static gaze direction during initial and postsaccadic fixation, and might be signaling the position of the eyes in the orbit or coding foveal targets in a head/body/world-centered reference frame. The population-derived eye position signal lagged behind the saccade. Second, many neurons showed a combination of eye-centered and gaze-dependent modulation of visual, memory, and saccadic responses to a peripheral target. A small subset showed effects consistent with eye position-dependent gain modulation. Analysis of reference frames across task epochs from visual cue to postsaccadic fixation indicated a transition from predominantly eye-centered encoding to representation of final gaze or foveated locations in nonretinocentric coordinates. These results show that dorsal pulvinar neurons carry information about eye position, which could contribute to steady gaze during postural changes and to reference frame transformations for visually guided eye and limb movements. 
NEW & NOTEWORTHY Work on the pulvinar focused on eye-centered visuospatial representations, but position of the eyes in the orbit is also an important factor that needs to be taken into account during spatial orienting and goal-directed reaching. We show that dorsal pulvinar neurons are influenced by eye position. Gaze direction modulated ongoing firing during stable fixation, as well as visual and saccade responses to peripheral targets, suggesting involvement of the dorsal pulvinar in spatial coordinate transformations.
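The coordinate transformation that such eye-position signals make possible can be stated in one line: adding eye-in-orbit position to an eye-centered (retinal) target location yields a head-centered location. A toy sketch of the 1D horizontal case (angles in degrees; names are illustrative):

```python
def eye_to_head_centered(target_retinal_deg, eye_position_deg):
    # Head-centered direction = eye-centered (retinal) location plus
    # current eye-in-orbit position. Positive values are rightward.
    return target_retinal_deg + eye_position_deg

# A target 10 deg right of the fovea, while the eyes are rotated
# 15 deg left, lies 5 deg left of the head's midline.
head_centered = eye_to_head_centered(10.0, -15.0)
```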
Affiliation(s)
- Lukas Schneider: Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Adan-Ulises Dominguez-Vargas: Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Escuela Nacional de Estudios Superiores Unidad-León, Universidad Nacional Autónoma de México, León, Guanajuato, Mexico
- Lydia Gibson: Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Igor Kagan: Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany; Leibniz ScienceCampus Primate Cognition, Goettingen, Germany
- Melanie Wilke: Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany; Leibniz ScienceCampus Primate Cognition, Goettingen, Germany

8. Chen YC, Shih CL, Lin YT, Hwang IS. The effect of visuospatial resolution on discharge variability among motor units and force-discharge relation. Chin J Physiol 2019; 62:166-174. [PMID: 31535632; DOI: 10.4103/cjp.cjp_12_19]
Abstract
Although force steadiness varies with visuospatial information, the motor unit (MU) behaviors that account for this are not fully understood. This study investigated the modulation of MU discharges and the force-discharge relation under variations in the spatial resolution of visual feedback, with a particular focus on discharge variability among MUs. Fourteen young adults produced isometric force at 10% of maximal voluntary contraction (MVC) through index abduction while the force trajectory was displayed with either low visual gain (LVG) or high visual gain (HVG). Together with smaller and more complex force fluctuations, HVG resulted in greater variability of the mean interspike interval and greater discharge irregularity among MUs than LVG did. Estimated by smoothing the cumulative spike train of all MUs, the global discharge rate was tuned to visual gain, with a more complex global discharge rate and a weaker force-discharge relation in the HVG condition. These higher discharge variabilities were linked to a larger variance of the common drive received by MUs for the regulation of muscle force with richer visuospatial information. In summary, richer visuospatial information improves force steadiness with more complex force fluctuations, reflecting the joint effects of the low-pass filter property of the musculotendon complex and central modulation of discharge variability among MUs.
Affiliation(s)
- Yi-Ching Chen: Department of Physical Therapy, Chung Shan Medical University; Physical Therapy Room, Chung Shan Medical University Hospital, Taichung City, Taiwan
- Chia-Li Shih: Department of Rehabilitation Medicine, Tainan Municipal An-Nan Hospital, Tainan, Taiwan
- Yen-Ting Lin: Physical Education Office, Asian University, Taichung City, Taiwan
- Ing-Shiou Hwang: Institute of Allied Health Sciences; Department of Physical Therapy, College of Medicine, National Cheng Kung University, Tainan City, Taiwan

9. Representation of shape, space, and attention in monkey cortex. Cortex 2019; 122:40-60. [PMID: 31345568; DOI: 10.1016/j.cortex.2019.06.005]
Abstract
Attentional deficits are core to numerous developmental, neurological, and psychiatric disorders. At the single-cell level, much knowledge has been garnered from studies of shape and spatial properties, as well as from numerous demonstrations of attentional modulation of those properties. Despite this wealth of knowledge about single-cell responses across many brain regions, little is known about how these cellular characteristics relate to population-level representations, how such representations relate to behavior, and how they differ across cortical areas and streams. Here we emphasize the role of population coding as a missing link between single-cell properties and behavior. Using a data-driven, intrinsic approach to population decoding, we show that both the 'what' and 'where' cortical visual streams encode shape, space, and attention, yet demonstrate striking differences in these representations. We suggest that both pathways fully process shape and space, but that differences in representation may arise from their differing functions and input and output constraints. Moreover, differences in the effects of attention on shape and spatial population representations in the two visual streams suggest two distinct strategies: in a ventral area, attention or task demands modulate the population representations themselves (perhaps to expand or enhance one part at the expense of others), whereas in a dorsal area attention effects at the population level are weak and nearly non-existent, perhaps to maintain the veridical representations needed for visuomotor control. We show that an intrinsic approach, as opposed to theory-driven and labeled approaches, is useful for understanding how representations develop and differ across brain regions. Most importantly, these approaches help link cellular properties more tightly with behavior, a much-needed step toward better understanding and interpreting cellular findings, and key to providing insights that improve interventions in human disorders.

10. Chen J, Sperandio I, Henry MJ, Goodale MA. Changing the Real Viewing Distance Reveals the Temporal Evolution of Size Constancy in Visual Cortex. Curr Biol 2019; 29:2237-2243.e4. [PMID: 31257140; DOI: 10.1016/j.cub.2019.05.069]
Abstract
Our visual system provides a distance-invariant percept of object size by integrating retinal image size with viewing distance (size constancy). Single-unit studies with animals have shown that some distance cues, especially oculomotor cues such as vergence and accommodation, can modulate the signals in the thalamus or V1 at the initial processing stage [1-7]. Accordingly, one might predict that size constancy emerges much earlier in time [8-10], even as visual signals are being processed in the thalamus. So far, the studies that have looked directly at size coding have either used fMRI (poor temporal resolution [11-13]) or relied on inadequate stimuli (pictorial illusions presented on a monitor at a fixed distance [11, 12, 14, 15]). Here, we physically moved the monitor to different distances, a more ecologically valid paradigm that emulates what happens in everyday life and is an example of the increasing trend of "bringing the real world into the lab." Using this paradigm in combination with electroencephalography (EEG), we examined the computation of size constancy in real time with real-world viewing conditions. Our study provides strong evidence that, even though oculomotor distance cues have been shown to modulate the spiking rate of neurons in the thalamus and in V1, the integration of viewing distance cues and retinal image size takes at least 150 ms to unfold, which suggests that the size-constancy-related activation patterns in V1 reported in previous fMRI studies (e.g., [12, 13]) reflect the later processing within V1 and/or top-down input from other high-level visual areas.
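The geometry behind size constancy can be made concrete: the retinal angle of an object shrinks as viewing distance grows, but scaling that angle by distance recovers the physical size. A small sketch of this computation (illustrative only, not the authors' analysis):

```python
import math

def retinal_angle_deg(size_cm, distance_cm):
    # Visual angle subtended by an object: 2 * atan(size / (2 * distance)).
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def size_from_angle_cm(angle_deg, distance_cm):
    # Size-constancy computation: combine retinal image size with viewing
    # distance to recover physical size (inverse of the function above).
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

# A 10 cm object at 50 cm vs 100 cm: the retinal angle roughly halves,
# yet distance scaling recovers 10 cm in both cases.
near_angle = retinal_angle_deg(10, 50)
far_angle = retinal_angle_deg(10, 100)
recovered_near = size_from_angle_cm(near_angle, 50)
recovered_far = size_from_angle_cm(far_angle, 100)
```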
Affiliation(s)
- Juan Chen: Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong Province 510631, China; The Brain and Mind Institute, The University of Western Ontario, London, ON N6A 5B7, Canada
- Irene Sperandio: The School of Psychology, University of East Anglia, Norwich NR4 7TJ, UK
- Molly J Henry: The Brain and Mind Institute, The University of Western Ontario, London, ON N6A 5B7, Canada
- Melvyn A Goodale: The Brain and Mind Institute, The University of Western Ontario, London, ON N6A 5B7, Canada; Department of Psychology, The University of Western Ontario, London, ON N6A 5C2, Canada

11. Morris AP, Krekelberg B. A Stable Visual World in Primate Primary Visual Cortex. Curr Biol 2019; 29:1471-1480.e6. [PMID: 31031112; DOI: 10.1016/j.cub.2019.03.069]
Abstract
Humans and other primates rely on eye movements to explore visual scenes and to track moving objects. As a result, the image that is projected onto the retina-and propagated throughout the visual cortical hierarchy-is almost constantly changing and makes little sense without taking into account the momentary direction of gaze. How is this achieved in the visual system? Here, we show that in primary visual cortex (V1), the earliest stage of cortical vision, neural representations carry an embedded "eye tracker" that signals the direction of gaze associated with each image. Using chronically implanted multi-electrode arrays, we recorded the activity of neurons in area V1 of macaque monkeys during tasks requiring fast (exploratory) and slow (pursuit) eye movements. Neurons were stimulated with flickering, full-field luminance noise at all times. As in previous studies, we observed neurons that were sensitive to gaze direction during fixation, despite comparable stimulation of their receptive fields. We trained a decoder to translate neural activity into metric estimates of gaze direction. This decoded signal tracked the eye accurately not only during fixation but also during fast and slow eye movements. After a fast eye movement, the eye-position signal arrived in V1 at approximately the same time at which the new visual information arrived from the retina. Using simulations, we show that this V1 eye-position signal could be used to take into account the sensory consequences of eye movements and map the fleeting positions of objects on the retina onto their stable position in the world.
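The decoding step described above can be sketched with a simple least-squares decoder applied to synthetic, eye-position-modulated firing rates. The tuning model and noise levels below are invented for illustration and are not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic population: each neuron's rate varies linearly with gaze
# direction, plus noise (all parameter values are illustrative).
n_neurons, n_trials = 50, 400
gaze = rng.uniform(-20.0, 20.0, n_trials)        # gaze direction, degrees
slopes = rng.normal(0.0, 0.5, n_neurons)         # spikes/s per degree
baselines = rng.uniform(5.0, 15.0, n_neurons)
rates = baselines[:, None] + slopes[:, None] * gaze \
    + rng.normal(0.0, 1.0, (n_neurons, n_trials))

# Least-squares "embedded eye tracker": decode gaze direction as a
# weighted sum of the population rates plus an offset.
X = np.column_stack([rates.T, np.ones(n_trials)])
w, *_ = np.linalg.lstsq(X, gaze, rcond=None)
decoded = X @ w
rmse = float(np.sqrt(np.mean((decoded - gaze) ** 2)))
```

Because many neurons carry independent noise, pooling across the population yields a far more precise gaze estimate than any single neuron could provide.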
Affiliation(s)
- Adam P Morris: Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, 26 Innovation Walk, Clayton, Victoria 3800, Australia
- Bart Krekelberg: Center for Molecular and Behavioral Neuroscience, Rutgers University, 197 University Ave., Newark, New Jersey 07102, USA

12.
Abstract
The ability to perceive the visual world around us as spatially stable despite frequent eye movements is one of the long-standing mysteries of neuroscience. Neural mechanisms that process spatiotopic information are indispensable for successful interaction with the external world, yet how the brain handles spatiotopic information remains a matter of debate. Here we combined behavioral and fMRI adaptation to investigate the coding of spatiotopic information in the human brain. Subjects were adapted by prolonged presentation of a tilted grating. Thereafter, they performed a saccade followed by a brief presentation of a probe. This procedure allowed us to dissociate adaptation aftereffects at retinal and spatiotopic positions. We found significant behavioral and functional adaptation at both retinal and spatiotopic positions, indicating information transfer into a spatiotopic coordinate system. The brain regions involved were located in ventral visual areas V3, V4, and VO. Our findings suggest that the spatiotopic representations involved in maintaining visual stability are constructed by dynamically remapping visual feature information between retinotopic regions within early visual areas.
SIGNIFICANCE STATEMENT Why do we perceive the visual world as stable, although we constantly perform saccadic eye movements? We investigated how the visual system codes object locations in spatiotopic (i.e., external-world) coordinates. We combined visual adaptation, in which prolonged exposure to a specific visual feature alters perception, with fMRI adaptation, in which repeated presentation of a stimulus reduces the BOLD amplitude. Adaptation was found in visual areas representing the retinal location of the adaptor, but also at representations corresponding to its spatiotopic position. The results suggest that an active, dynamic shift transports information in visual cortex to counteract the retinal displacement associated with saccadic eye movements.

13. Pigarev IN, Levichkina EV. Absolute Depth Sensitivity in Cat Primary Visual Cortex under Natural Viewing Conditions. Front Syst Neurosci 2016; 10:66. [PMID: 27547179; PMCID: PMC4974279; DOI: 10.3389/fnsys.2016.00066]
Abstract
Mechanisms of 3D perception, investigated in many laboratories, have defined depth either relative to the fixation plane or to other objects in the visual scene. It is obvious that for efficient perception of the 3D world, additional mechanisms of depth constancy could operate in the visual system to provide information about absolute distance. Neurons with properties reflecting some features of depth constancy have been described in the parietal and extrastriate occipital cortical areas. It has also been shown that, for some neurons in the visual area V1, responses to stimuli of constant angular size differ at close and remote distances. The present study was designed to investigate whether, in natural free gaze viewing conditions, neurons tuned to absolute depths can be found in the primary visual cortex (area V1). Single-unit extracellular activity was recorded from the visual cortex of waking cats sitting on a trolley in front of a large screen. The trolley was slowly approaching the visual scene, which consisted of stationary sinusoidal gratings of optimal orientation rear-projected over the whole surface of the screen. Each neuron was tested with two gratings, with spatial frequency of one grating being twice as high as that of the other. Assuming that a cell is tuned to a spatial frequency, its maximum response to the grating with a spatial frequency twice as high should be shifted to a distance half way closer to the screen in order to attain the same size of retinal projection. For hypothetical neurons selective to absolute depth, location of the maximum response should remain at the same distance irrespective of the type of stimulus. It was found that about 20% of neurons in our experimental paradigm demonstrated sensitivity to particular distances independently of the spatial frequencies of the gratings. We interpret these findings as an indication of the use of absolute depth information in the primary visual cortex.
Affiliation(s)
- Ivan N Pigarev, Institute for Information Transmission Problems (Kharkevich Institute), Russian Academy of Sciences, Moscow, Russia
- Ekaterina V Levichkina, Institute for Information Transmission Problems (Kharkevich Institute), Russian Academy of Sciences, Moscow, Russia; Department of Optometry and Vision Sciences, The University of Melbourne, Parkville, VIC, Australia
14
Marino AC, Mazer JA. Perisaccadic Updating of Visual Representations and Attentional States: Linking Behavior and Neurophysiology. Front Syst Neurosci 2016; 10:3. PMID: 26903820; PMCID: PMC4743436; DOI: 10.3389/fnsys.2016.00003.
Abstract
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron's spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed.
Affiliation(s)
- Alexandria C Marino, Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Medical Scientist Training Program, Yale University School of Medicine, New Haven, CT, USA
- James A Mazer, Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA; Department of Neurobiology, Yale University School of Medicine, New Haven, CT, USA; Department of Psychology, Yale University, New Haven, CT, USA
15
Lehky SR, Sereno ME, Sereno AB. Characteristics of Eye-Position Gain Field Populations Determine Geometry of Visual Space. Front Integr Neurosci 2016; 9:72. PMID: 26834587; PMCID: PMC4718998; DOI: 10.3389/fnint.2015.00072.
Abstract
We have previously demonstrated differences in eye-position spatial maps for anterior inferotemporal cortex (AIT) in the ventral stream and lateral intraparietal cortex (LIP) in the dorsal stream, based on population decoding of gaze angle modulations of neural visual responses (i.e., eye-position gain fields). Here we explore the basis of such spatial encoding differences through modeling of gain field characteristics. We created a population of model neurons, each having a different eye-position gain field. This population was used to reconstruct eye-position visual space using multidimensional scaling. As gain field shapes have never been well-established experimentally, we examined different functions, including planar, sigmoidal, elliptical, hyperbolic, and mixtures of those functions. All functions successfully recovered positions, indicating weak constraints on allowable gain field shapes. We then used a genetic algorithm to modify the characteristics of model gain field populations until the recovered spatial maps closely matched those derived from monkey neurophysiological data in AIT and LIP. The primary difference found between model AIT and LIP gain fields was that AIT gain fields were more foveally dominated; that is, gain fields in AIT operated on smaller spatial scales and with smaller dispersions than in LIP. Thus, we show that the geometry of eye-position visual space depends on the population characteristics of gain fields, and that differences in gain field characteristics for different cortical areas may underlie differences in the representation of space.
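The pipeline this abstract describes (a population of model neurons with assumed gain-field shapes, then multidimensional scaling to reconstruct eye-position space) can be sketched roughly as follows. Everything concrete here is an illustrative assumption, not the paper's actual setup: the gain fields are planar (just one of the shapes the authors test), the population size and eye-position grid are arbitrary, and classical (Torgerson) MDS stands in for whatever MDS variant they used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: each model neuron's response is modulated by its
# own planar gain field over eye position (gain = baseline + slope . position).
n_neurons = 200
grid = np.linspace(-20, 20, 5)                       # eye positions, degrees
eye_pos = np.array([(x, y) for x in grid for y in grid])   # 25 positions
slopes = rng.normal(size=(n_neurons, 2)) * 0.02      # per-neuron gain gradients
responses = 1.0 + eye_pos @ slopes.T                 # (positions, neurons)

# Classical (Torgerson) MDS on pairwise response-pattern distances.
D = np.linalg.norm(responses[:, None, :] - responses[None, :, :], axis=-1)
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
B = -0.5 * J @ (D ** 2) @ J                          # double-centered Gram matrix
w, V = np.linalg.eigh(B)
top = np.argsort(w)[::-1][:2]                        # keep the 2 largest modes
recovered = V[:, top] * np.sqrt(w[top])              # recovered eye-position map
```

With planar gain fields the population response is an affine function of eye position, so the recovered map is an affine image of the true eye-position grid, which is the sense in which "all functions successfully recovered positions" in the study.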
Affiliation(s)
- Sidney R Lehky, Computational Neurobiology Laboratory, The Salk Institute, La Jolla, CA, USA
- Anne B Sereno, Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, TX, USA
16
Primate area V1: largest response gain for receptive fields in the straight-ahead direction. Neuroreport 2015; 25:1109-15. PMID: 25055141; DOI: 10.1097/wnr.0000000000000235.
Abstract
Although neuronal responses in behaving monkeys are typically studied while the monkey fixates straight ahead, it is known that eye position modulates responses of visual neurons. The modulation has been found to enhance neuronal responses when the receptive field is placed in the straight-ahead position for neurons receiving input from the peripheral but not the central retina. We studied the effect of eye position on the responses of V1 complex cells receiving input from the central retina (1.1-5.7° eccentricity) while minimizing the effect of fixational eye movements. Contrast response functions were obtained separately with drifting light and dark bars. Data were fit with the Naka-Rushton equation: r(c)=Rmax×c/(c+c50)+s, where r(c) is mean spike rate at contrast c, Rmax is the maximum response, c50 is the contrast that elicits half of Rmax, and s is the spontaneous activity. Contrast sensitivity as measured by c50 was not affected by eye position. For dark bars, there was a statistically significant decline in the normalized Rmax with increasing deviation from straight ahead. Data for bright bars showed a similar trend with a less rapid decline. Our results indicate that neurons representing the central retina show a bias for the straight-ahead position resulting from modulation of the response gain without an accompanying modulation of contrast sensitivity. The modulation is especially obvious for dark stimuli, which might be useful for directing attention to hazardous situations such as dark holes or shadows concealing important objects (Supplement 1: Video Abstract, Supplemental digital content 1, http://links.lww.com/WNR/A295).
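The Naka-Rushton fit quoted above can be illustrated with a small sketch. The contrast values and parameters below are hypothetical, and the fitting strategy (grid-search over c50 with linear least squares for Rmax and s, which enter the equation linearly once c50 is fixed) is one simple choice, not necessarily the authors' procedure.

```python
import numpy as np

def naka_rushton(c, r_max, c50, s):
    """Mean spike rate at contrast c: r(c) = Rmax*c/(c + c50) + s."""
    return r_max * c / (c + c50) + s

# Synthetic contrast-response data (hypothetical parameter values).
contrasts = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.8])
rates = naka_rushton(contrasts, r_max=40.0, c50=0.15, s=2.0)

# Fit: for each candidate c50, Rmax and s follow from linear least squares.
best = None
for c50 in np.linspace(0.01, 0.8, 800):
    X = np.column_stack([contrasts / (contrasts + c50),
                         np.ones_like(contrasts)])
    coef, *_ = np.linalg.lstsq(X, rates, rcond=None)
    err = np.sum((X @ coef - rates) ** 2)           # sum of squared residuals
    if best is None or err < best[0]:
        best = (err, coef[0], c50, coef[1])

_, r_max_hat, c50_hat, s_hat = best
```

On noiseless data like this the recovered parameters land on the true values up to the c50 grid resolution; with real spike counts one would add a spontaneous-rate floor and fit per condition (light vs. dark bars), as the study did.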
17
Arnoldussen DM, Goossens J, van Den Berg AV. Dissociation of retinal and headcentric disparity signals in dorsal human cortex. Front Syst Neurosci 2015; 9:16. PMID: 25759642; PMCID: PMC4338660; DOI: 10.3389/fnsys.2015.00016.
Abstract
Recent fMRI studies have shown fusion of visual motion and disparity signals for shape perception (Ban et al., 2012), and unmasking camouflaged surfaces (Rokers et al., 2009), but no such interaction is known for typical dorsal motion pathway tasks, like grasping and navigation. Here, we investigate human speed perception of forward motion and its representation in the human motion network. We observe strong interaction in medial (V3ab, V6) and lateral motion areas (MT+), which differ significantly. Whereas the retinal disparity dominates the binocular contribution to the BOLD activity in the anterior part of area MT+, headcentric disparity modulation of the BOLD response dominates in area V3ab and V6. This suggests that medial motion areas not only represent rotational speed of the head (Arnoldussen et al., 2011), but also translational speed of the head relative to the scene. Interestingly, a strong response to vergence eye movements was found in area V1, which showed a dependency on visual direction, just like vertical-size disparity. This is the first report of a vertical-size disparity correlate in human striate cortex.
Affiliation(s)
- David M Arnoldussen, Section Biophysics, Department of Cognitive Neuroscience, Radboud University Nijmegen Medical Centre, Donders Institute for Brain, Cognition, and Behavior, Nijmegen, Netherlands; School of Psychology, University of Nottingham, Nottingham, UK
- Jeroen Goossens, Section Biophysics, Department of Cognitive Neuroscience, Radboud University Nijmegen Medical Centre, Donders Institute for Brain, Cognition, and Behavior, Nijmegen, Netherlands
- Albert V van Den Berg, Section Biophysics, Department of Cognitive Neuroscience, Radboud University Nijmegen Medical Centre, Donders Institute for Brain, Cognition, and Behavior, Nijmegen, Netherlands
18
Funahashi S. Functions of delay-period activity in the prefrontal cortex and mnemonic scotomas revisited. Front Syst Neurosci 2015; 9:2. PMID: 25698942; PMCID: PMC4318271; DOI: 10.3389/fnsys.2015.00002.
Abstract
Working memory (WM) is one of the key concepts for understanding the functions of the prefrontal cortex, and delay-period activity is an important neural correlate for understanding the role of WM in prefrontal functions. The importance of delay-period activity is that it can encode not only visuospatial information but also a variety of other information, including non-spatial visual features, auditory and tactile stimuli, task rules, expected reward, and numerical quantity. This activity also participates in a variety of information processing, including sensory-to-motor transformation. These mnemonic features of delay-period activity enable the prefrontal cortex to perform the various important operations it participates in, such as executive control, and therefore support the notion that WM is an important function for understanding prefrontal functions. Although experiments using manual versions of the delayed-response task had revealed many important findings, an oculomotor version of this task made it possible to use multiple cue positions, exclude postural orientation during the delay period, and further demonstrate the importance of the mnemonic functions of the prefrontal cortex. In addition, monkeys with unilateral lesions exhibited specific impairment only in the performance of memory-guided saccades directed toward visual cues in the visual field contralateral to the lesioned hemisphere. This result indicates that memories for visuospatial coordinates in each hemifield are processed primarily in the contralateral prefrontal cortex, further strengthening the idea of mnemonic functions of the prefrontal cortex. Thus, the mnemonic functions of the prefrontal cortex and delay-period activity may not need to be reconsidered, but should be emphasized.
19
Le QV, Isbell LA, Matsumoto J, Le VQ, Hori E, Tran AH, Maior RS, Tomaz C, Ono T, Nishijo H. Monkey pulvinar neurons fire differentially to snake postures. PLoS One 2014; 9:e114258. PMID: 25479158; PMCID: PMC4257671; DOI: 10.1371/journal.pone.0114258.
Abstract
There is growing evidence from both behavioral and neurophysiological approaches that primates are able to rapidly discriminate visually between snakes and innocuous stimuli. Recent behavioral evidence suggests that primates are also able to discriminate the level of threat posed by snakes, by responding more intensely to a snake model poised to strike than to snake models in coiled or sinusoidal postures (Etting and Isbell 2014). In the present study, we examine the potential for an underlying neurological basis for this ability. Previous research indicated that the pulvinar is highly sensitive to snake images. We thus recorded pulvinar neurons in Japanese macaques (Macaca fuscata) while they viewed photos of snakes in striking and non-striking postures in a delayed non-matching to sample (DNMS) task. Of 821 neurons recorded, 78 visually responsive neurons were tested with all the snake images. We found that pulvinar neurons in the medial and dorsolateral pulvinar responded more strongly to snakes in threat displays poised to strike than to snakes in non-threat-displaying postures, with no significant difference in response latencies. A multidimensional scaling analysis of the 78 visually responsive neurons indicated that threat-displaying and non-threat-displaying snakes were separated into two different clusters in the first epoch of 50 ms after stimulus onset, suggesting bottom-up visual information processing. These results indicate that pulvinar neurons in primates discriminate snakes poised to strike from those in non-threat-displaying postures. This neuronal ability likely facilitates behavioral discrimination and has clear adaptive value. Our results are thus consistent with the Snake Detection Theory, which posits that snakes were instrumental in the evolution of primate visual systems.
Affiliation(s)
- Quan Van Le, System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Lynne A. Isbell, Department of Anthropology, University of California Davis, Davis, California, 95616, United States of America
- Jumpei Matsumoto, System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Van Quang Le, System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Etsuro Hori, System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Anh Hai Tran, System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Rafael S. Maior, Primate Center and Laboratory of Neurosciences and Behavior, Department of Physiological Sciences, Institute of Biology, University of Brasília, Brasilia, DF, Brazil
- Carlos Tomaz, Primate Center and Laboratory of Neurosciences and Behavior, Department of Physiological Sciences, Institute of Biology, University of Brasília, Brasilia, DF, Brazil
- Taketoshi Ono, System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Hisao Nishijo, System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
20
Strappini F, Pitzalis S, Snyder AZ, McAvoy MP, Sereno MI, Corbetta M, Shulman GL. Eye position modulates retinotopic responses in early visual areas: a bias for the straight-ahead direction. Brain Struct Funct 2014; 220:2587-601. PMID: 24942135; PMCID: PMC4549389; DOI: 10.1007/s00429-014-0808-7.
Abstract
Even though the eyes constantly change position, the location of a stimulus can be accurately represented by a population of neurons with retinotopic receptive fields modulated by eye position gain fields. Recent electrophysiological studies, however, indicate that eye position gain fields may serve an additional function since they have a non-uniform spatial distribution that increases the neural response to stimuli in the straight-ahead direction. We used functional magnetic resonance imaging and a wide-field stimulus display to determine whether gaze modulations in early human visual cortex enhance the blood-oxygenation-level dependent (BOLD) response to stimuli that are straight-ahead. Subjects viewed rotating polar angle wedge stimuli centered straight-ahead or vertically displaced by ±20° eccentricity. Gaze position did not affect the topography of polar phase-angle maps, confirming that coding was retinotopic, but did affect the amplitude of the BOLD response, consistent with a gain field. In agreement with recent electrophysiological studies, BOLD responses in V1 and V2 to a wedge stimulus at a fixed retinal locus decreased when the wedge location in head-centered coordinates was farther from the straight-ahead direction. We conclude that stimulus-evoked BOLD signals are modulated by a systematic, non-uniform distribution of eye-position gain fields.
Affiliation(s)
- Francesca Strappini, Department of Neurology, Washington University School of Medicine, Saint Louis, MO 63110, USA
21
Sereno AB, Sereno ME, Lehky SR. Recovering stimulus locations using populations of eye-position modulated neurons in dorsal and ventral visual streams of non-human primates. Front Integr Neurosci 2014; 8:28. PMID: 24734008; PMCID: PMC3975102; DOI: 10.3389/fnint.2014.00028.
Abstract
We recorded visual responses while monkeys fixated the same target at different gaze angles, both dorsally (lateral intraparietal cortex, LIP) and ventrally (anterior inferotemporal cortex, AIT). While eye-position modulations occurred in both areas, they were both more frequent and stronger in LIP neurons. We used an intrinsic population decoding technique, multidimensional scaling (MDS), to recover eye positions, equivalent to recovering fixated target locations. We report that eye-position based visual space in LIP was more accurate (i.e., metric). Nevertheless, the AIT spatial representation remained largely topologically correct, perhaps indicative of a categorical spatial representation (i.e., a qualitative description such as "left of" or "above" as opposed to a quantitative, metrically precise description). Additionally, we developed a simple neural model of eye position signals and illustrate that differences in single cell characteristics can influence the ability to recover target position in a population of cells. We demonstrate for the first time that the ventral stream contains sufficient information for constructing an eye-position based spatial representation. Furthermore we demonstrate, in dorsal and ventral streams as well as modeling, that target locations can be extracted directly from eye position signals in cortical visual responses without computing coordinate transforms of visual space.
Affiliation(s)
- Anne B Sereno, Department of Neurobiology and Anatomy, University of Texas Health Science Center at Houston, Houston, TX, USA
- Sidney R Lehky, Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA, USA
22
Zhang E, Zhang GL, Li W. Spatiotopic perceptual learning mediated by retinotopic processing and attentional remapping. Eur J Neurosci 2013; 38:3758-67. PMID: 24118649; DOI: 10.1111/ejn.12379.
Abstract
Visual processing takes place in both retinotopic and spatiotopic frames of reference. Whereas visual perceptual learning is usually specific to the trained retinotopic location, our recent study has shown spatiotopic specificity of learning in motion direction discrimination. To explore the mechanisms underlying spatiotopic processing and learning, and to examine whether similar mechanisms also exist in visual form processing, we trained human subjects to discriminate an orientation difference between two successively displayed stimuli, with a gaze shift in between to manipulate their positional relation in the spatiotopic frame of reference without changing their retinal locations. Training resulted in better orientation discriminability for the trained than for the untrained spatial relation of the two stimuli. This learning-induced spatiotopic preference was seen only at the trained retinal location and orientation, suggesting experience-dependent spatiotopic form processing directly based on a retinotopic map. Moreover, a similar but weaker learning-induced spatiotopic preference was still present even if the first stimulus was rendered irrelevant to the orientation discrimination task by having the subjects judge the orientation of the second stimulus relative to its mean orientation in a block of trials. However, if the first stimulus was absent, and thus no attention was captured before the gaze shift, the learning produced no significant spatiotopic preference, suggesting an important role of attentional remapping in spatiotopic processing and learning. Taken together, our results suggest that spatiotopic visual representation can be mediated by interactions between retinotopic processing and attentional remapping, and can be modified by perceptual training.
Affiliation(s)
- En Zhang, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
23
Abstract
To locate visual objects, the brain combines information about retinal location and direction of gaze. Studies in monkeys have demonstrated that eye position modulates the gain of visual signals with "gain fields," so that single neurons represent both retinotopic location and eye position. We wished to know whether eye position and retinotopic stimulus location are both represented in human visual cortex. Using functional magnetic resonance imaging, we measured separately for each of several different gaze positions cortical responses to stimuli that varied periodically in retinal locus. Visually evoked responses were periodic following the periodic retinotopic stimulation. Only the response amplitudes depended on eye position; response phases were indistinguishable across eye positions. We used multivoxel pattern analysis to decode eye position from the spatial pattern of response amplitudes. The decoder reliably discriminated eye position in five of the early visual cortical areas by taking advantage of a spatially heterogeneous eye position-dependent modulation of cortical activity. We conclude that responses in retinotopically organized visual cortical areas are modulated by gain fields qualitatively similar to those previously observed neurophysiologically.
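The decoding step described above (recovering eye position from the spatial pattern of response amplitudes) can be sketched on synthetic data. The voxel counts, the linear per-voxel gain model, and the nearest-centroid classifier below are illustrative assumptions standing in for the study's actual multivoxel pattern analysis; the key ingredient is the same, a spatially heterogeneous eye-position modulation of amplitudes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each voxel's response amplitude to the retinotopic stimulus is scaled by
# its own eye-position gain; heterogeneity across voxels is what makes eye
# position decodable from the amplitude pattern.
n_voxels, n_runs = 60, 40
eye_positions = [-10.0, 0.0, 10.0]              # gaze angles, degrees
gains = rng.normal(1.0, 0.1, size=n_voxels)     # per-voxel gain slope

def amplitude_pattern(eye_pos):
    """One noisy amplitude pattern for a given gaze angle."""
    return 1.0 + 0.02 * gains * eye_pos + rng.normal(0, 0.05, n_voxels)

# Labeled dataset: n_runs patterns per gaze angle, grouped by position.
X = np.array([amplitude_pattern(p) for p in eye_positions for _ in range(n_runs)])
y = np.repeat(np.arange(3), n_runs)

# Split each position block in half, train a nearest-centroid decoder.
train = (np.arange(3 * n_runs) % n_runs) < n_runs // 2
centroids = np.array([X[train & (y == k)].mean(axis=0) for k in range(3)])
dists = ((X[~train][:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
pred = np.argmin(dists, axis=1)
accuracy = (pred == y[~train]).mean()
```

A nearest-centroid rule is a minimal stand-in for the classifiers typically used in MVPA; with gain modulation this strong the held-out gaze angles are decoded essentially perfectly, while setting the gain slope to zero would drop accuracy to chance.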
24
Larsson M. The optic chiasm: a turning point in the evolution of eye/hand coordination. Front Zool 2013; 10:41. PMID: 23866932; PMCID: PMC3729728; DOI: 10.1186/1742-9994-10-41.
Abstract
The primate visual system has a uniquely high proportion of ipsilateral retinal projections, retinal ganglial cells that do not cross the midline in the optic chiasm. The general assumption is that this developed due to the selective advantage of accurate depth perception through stereopsis. Here, the hypothesis that the need for accurate eye-forelimb coordination substantially influenced the evolution of the primate visual system is presented. Evolutionary processes may change the direction of retinal ganglial cells. Crossing, or non-crossing, in the optic chiasm determines which hemisphere receives visual feedback in reaching tasks. Each hemisphere receives little tactile and proprioceptive information about the ipsilateral hand. The eye-forelimb hypothesis proposes that abundant ipsilateral retinal projections developed in the primate brain to synthesize, in a single hemisphere, visual, tactile, proprioceptive, and motor information about a given hand, and that this improved eye-hand coordination and optimized the size of the brain. If accurate eye-hand coordination was a major factor in the evolution of stereopsis, stereopsis is likely to be highly developed for activity in the area where the hands most often operate. The primate visual system is ideally suited for tasks within arm's length and in the inferior visual field, where most manual activity takes place. Altering of ocular dominance in reaching tasks, reduced cross-modal cuing effects when arms are crossed, response of neurons in the primary motor cortex to viewed actions of a hand, multimodal neuron response to tactile as well as visual events, and extensive use of multimodal sensory information in reaching maneuvers support the premise that benefits of accurate limb control influenced the evolution of the primate visual system.
The eye-forelimb hypothesis implies that evolutionary change toward hemidecussation in the optic chiasm provided parsimonious neural pathways in animals developing frontal vision and visually guided forelimbs, and also suggests a new perspective on vision convergence in prey and predatory animals.
Affiliation(s)
- Matz Larsson, The Cardiology Clinic, Örebro University Hospital, SE-701 85 Örebro, Sweden
25
Schafer AY, Ustinova KI. Does use of a virtual environment change reaching while standing in patients with traumatic brain injury? J Neuroeng Rehabil 2013; 10:76. PMID: 23866962; PMCID: PMC3733631; DOI: 10.1186/1743-0003-10-76.
Abstract
Background: Although numerous virtual reality applications have been developed for sensorimotor retraining in neurologically impaired individuals, it is unclear whether the virtual environment (VE) changes motor performance, especially in patients with brain injuries. To address this question, the movement characteristics of forward arm reaches during standing were compared in physical and virtual environments, presented at different viewing angles.
Methods: Fifteen patients with traumatic brain injuries (TBI) and 15 sex- and age-matched healthy individuals performed virtual reaches in a computer-generated courtyard with a flower-topped hedge. The hedge was projected on a flat screen and viewed in 3D format in 1 of 3 angles: 10° above horizon (resembling a real-world viewing angle), 50° above horizon, or 90° above horizon (directly overhead). Participants were instructed to reach with their dominant hand avatar and to touch the farthest flower possible without losing their balance or stepping. Virtual reaches were compared with reaches-to-point to a target in an equivalent physical environment, using a set of kinematic parameters.
Results: Reaches by patients with TBI were characterized by shorter distances, lower peak velocities, and smaller postural displacements than reaches by control individuals. All participants reached ~9% farther in the VE presented at a 50° angle than they did in the physical environment. Arm displacement in the more natural 10° angle VE was reduced by the same 9-10% compared to physical reaches. Virtual reaches had smaller velocity peaks and took longer than physical reaches.
Conclusion: The results suggest that visual perception in the VE differs from real-world perception, and that performance of functional tasks (e.g., reaching while standing) can change in TBI patients depending on the viewing angle. Accordingly, the viewing angle is a critical parameter that should be adjusted carefully to achieve maximal therapeutic effect during practice in the VE.
Affiliation(s)
- Amanda Y Schafer, Department of Physical Therapy, Central Michigan University, Mount Pleasant, MI, USA
26
Hadjidimitrakis K, Breveglieri R, Bosco A, Fattori P. Three-dimensional eye position signals shape both peripersonal space and arm movement activity in the medial posterior parietal cortex. Front Integr Neurosci 2012; 6:37. PMID: 22754511; PMCID: PMC3385520; DOI: 10.3389/fnint.2012.00037.
Abstract
Research conducted over the last decades has established that the medial part of posterior parietal cortex (PPC) is crucial for controlling visually guided actions in human and non-human primates. Within this cortical sector there is area V6A, a crucial node of the parietofrontal network involved in arm movement control in both monkeys and humans. However, the encoding of action-in-depth by V6A cells had not been studied until recently. Recent neurophysiological studies show the existence in V6A neurons of signals related to the distance of targets from the eyes. These signals are integrated, often at the level of single cells, with information about the direction of gaze, thus encoding spatial location in 3D space. Moreover, 3D eye position signals seem to be further exploited at two additional levels of neural processing: (a) in determining whether targets are located in the peripersonal space or not, and (b) in shaping the spatial tuning of arm movement related activity toward reachable targets. These findings are in line with studies in putative homolog regions in humans and together point to a role of medial PPC in encoding both the vergence angle of the eyes and peripersonal space. Besides its role in spatial encoding, including depth, several findings also demonstrate the involvement of this cortical sector in non-spatial processes.
Affiliation(s)
- K Hadjidimitrakis, Department of Human and General Physiology, University of Bologna, Bologna, Italy
27
Funahashi S. Space representation in the prefrontal cortex. Prog Neurobiol 2012; 103:131-55. PMID: 22521602; DOI: 10.1016/j.pneurobio.2012.04.002.
Abstract
The representation of space and its function in the prefrontal cortex have been examined using a variety of behavioral tasks. Among them, the delayed-response task has been used to examine the mechanisms of spatial representation because it requires the temporary maintenance of spatial information. In addition, the concept of working memory to explain prefrontal functions has helped us to understand the nature and functions of space representation in the prefrontal cortex. The detailed analysis of delay-period activity observed in spatial working memory tasks has provided important information for understanding space representation in the prefrontal cortex. Directional delay-period activity has been shown to be a neural correlate of the mechanism for temporarily maintaining information and to represent spatial information for the visual cue and the saccade. In addition, many task-related prefrontal neurons exhibit spatially selective activities. These neurons are also important components of spatial information processing. In fact, information flow from sensory-related neurons to motor-related neurons has been demonstrated, along with a change in spatial representation as the trial progresses. The dynamic functional interactions among neurons exhibiting different task-related activities and representing different aspects of information could play an essential role in information processing. In addition, information provided from other cortical or subcortical areas might also be necessary for the representation of space in the prefrontal cortex. To better understand the representation of space and its function in the prefrontal cortex, we need to understand the nature of functional interactions between the prefrontal cortex and other cortical and subcortical areas.
Affiliation(s)
- Shintaro Funahashi
- Kokoro Research Center, Kyoto University, Sakyo-ku, Kyoto 606-8501, Japan
|
28
|
Eye position encoding in three-dimensional space: integration of version and vergence signals in the medial posterior parietal cortex. J Neurosci 2012; 32:159-69. [PMID: 22219279 DOI: 10.1523/jneurosci.4028-11.2012] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Eye position signals are pivotal in the visuomotor transformations performed by the posterior parietal cortex (PPC), but to date there are few studies addressing the influence of vergence angle upon single PPC neurons. In the present study, we investigated the influence of vergence and version signals on single neurons of the medial PPC area V6A. Single-unit activity was recorded from V6A in two Macaca fascicularis monkeys fixating real targets in darkness. The fixation targets were placed at eye level and at different vergence and version angles within the peripersonal space. Few neurons were modulated by version or vergence only, while the majority of cells were affected by both signals. We advance here the hypothesis that gaze-modulated V6A cells are able to encode gazed positions in three-dimensional space. In single cells, version and vergence influenced the discharge with variable time course. In several cases, the two gaze variables influenced neural discharges during only a part of the fixation time but, more often, their influence persisted through large parts of it. Cells discharging for the first 400-500 ms of fixation could signal the arrival of gaze (and/or of the spotlight of attention) at a new position in the peripersonal space. Cells showing a more sustained activity during the fixation period could better signal the location in space of the gazed objects. Both signals are critical for the control of upcoming or ongoing arm movements, such as those needed to reach and grasp objects located in the peripersonal space.
|
29
|
Differential visual processing for equivalent retinal information from near versus far space. Neuropsychologia 2011; 49:3863-9. [DOI: 10.1016/j.neuropsychologia.2011.10.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2011] [Revised: 10/01/2011] [Accepted: 10/03/2011] [Indexed: 10/16/2022]
|
30
|
Hadjidimitrakis K, Breveglieri R, Placenti G, Bosco A, Sabatini SP, Fattori P. Fix your eyes in the space you could reach: neurons in the macaque medial parietal cortex prefer gaze positions in peripersonal space. PLoS One 2011; 6:e23335. [PMID: 21858075 PMCID: PMC3157346 DOI: 10.1371/journal.pone.0023335] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2011] [Accepted: 07/14/2011] [Indexed: 11/30/2022] Open
Abstract
Interacting in the peripersonal space requires coordinated arm and eye movements to visual targets in depth. In primates, the medial posterior parietal cortex (PPC) represents a crucial node in the process of visual-to-motor signal transformations. The medial PPC area V6A is a key region engaged in the control of these processes because it jointly processes visual information, eye position and arm movement related signals. However, to date, there is no evidence in the medial PPC of spatial encoding in three dimensions. Here, using single neuron recordings in behaving macaques, we studied the neural signals related to binocular eye position in a task that required the monkeys to perform saccades and fixate targets at different locations in peripersonal and extrapersonal space. A significant proportion of neurons were modulated by both gaze direction and depth, i.e., by the location of the foveated target in 3D space. The population activity of these neurons displayed a strong preference for peripersonal space in a time interval around the saccade that preceded fixation and during fixation as well. This preference for targets within reaching distance during both target capturing and fixation suggests that binocular eye position signals are implemented functionally in V6A to support its role in reaching and grasping.
Affiliation(s)
- Rossella Breveglieri
- Department of Human and General Physiology, University of Bologna, Bologna, Italy
- Giacomo Placenti
- Department of Human and General Physiology, University of Bologna, Bologna, Italy
- Annalisa Bosco
- Department of Human and General Physiology, University of Bologna, Bologna, Italy
- Silvio P. Sabatini
- Department of Biophysical and Electronic Engineering, University of Genova, Genova, Italy
- Patrizia Fattori
- Department of Human and General Physiology, University of Bologna, Bologna, Italy
|
31
|
Pigarev IN, Levichkina EV. Distance modulated neuronal activity in the cortical visual areas of cats. Exp Brain Res 2011; 214:105-11. [PMID: 21818632 DOI: 10.1007/s00221-011-2810-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2010] [Accepted: 07/20/2011] [Indexed: 11/30/2022]
Abstract
During previous studies in cats and monkeys, it was found that in some neurons, responses to visual stimuli of the same angular size depended on the absolute distance to these stimuli. To study how widely this property of visual responses is distributed among cortical visual areas, we recorded the activity of neurons in areas V4A, V2, V1, and the frontal visual area on the lower bank of the cruciate sulcus. Neuronal activity was recorded at near (20 cm) or far (3 m) distances from a stationary 3D visual scene. Visual scenes were vertically corrugated light gray screens. Angular dimensions of the screens were the same at near and far distances. Eye movements were free during the test procedure. It was found that about 20% of neurons in areas V4A, V1, and the frontal visual area had significantly different levels of activity while the animals looked at visual scenes located near or far from the eyes. No neurons with depth-modulated activity were found in area V2.
Affiliation(s)
- I N Pigarev
- Institute for Information Transmission Problems (Kharkevich Institute), Russian Academy of Sciences, Moscow, Russia
|
32
|
Williams AL, Smith AT. Representation of Eye Position in the Human Parietal Cortex. J Neurophysiol 2010; 104:2169-77. [DOI: 10.1152/jn.00713.2009] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Neurons that signal eye position are thought to make a vital contribution to distinguishing real world motion from retinal motion caused by eye movements, but relatively little is known about such neurons in the human brain. Here we present data from functional MRI experiments that are consistent with the existence of neurons sensitive to eye position in darkness in the human posterior parietal cortex. We used the enhanced sensitivity of multivoxel pattern analysis (MVPA) techniques, combined with a searchlight paradigm, to isolate brain regions sensitive to direction of gaze. During data acquisition, participants were cued to direct their gaze to the left or right for sustained periods as part of a block-design paradigm. Following the exclusion of saccade-related activity from the data, the multivariate analysis showed sensitivity to tonic eye position in two localized posterior parietal regions, namely the dorsal precuneus and, more weakly, the posterior aspect of the intraparietal sulcus. Sensitivity to eye position was also seen in anterior portions of the occipital cortex. The observed sensitivity of visual cortical neurons to eye position, even in the total absence of visual stimulation, is possibly a result of feedback from posterior parietal regions that receive eye position signals and explicitly encode direction of gaze.
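The searchlight MVPA approach described in this abstract can be sketched on synthetic data. The sliding window, nearest-class-mean classifier, voxel counts, and effect size below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "cortex": one voxel axis; gaze direction (left/right) shifts
# the response pattern only inside an informative region of voxels.
n_trials, n_vox = 40, 60
labels = np.repeat([0, 1], n_trials // 2)   # 0 = gaze left, 1 = gaze right
data = rng.normal(size=(n_trials, n_vox))
data[labels == 1, 20:30] += 0.8             # informative voxels 20-29

def searchlight_accuracy(data, labels, radius=3):
    """Leave-one-out nearest-class-mean decoding in a sliding voxel window."""
    n_trials, n_vox = data.shape
    acc = np.zeros(n_vox)
    for center in range(n_vox):
        sl = data[:, max(0, center - radius):center + radius + 1]
        correct = 0
        for i in range(n_trials):
            train = np.delete(np.arange(n_trials), i)
            m0 = sl[train][labels[train] == 0].mean(axis=0)
            m1 = sl[train][labels[train] == 1].mean(axis=0)
            pred = int(np.linalg.norm(sl[i] - m1) < np.linalg.norm(sl[i] - m0))
            correct += pred == labels[i]
        acc[center] = correct / n_trials
    return acc

acc = searchlight_accuracy(data, labels)
# Decoding accuracy peaks over the informative voxels, localizing the
# gaze-sensitive region, which is the logic of the searchlight analysis.
```

In the actual study the same idea is applied in 3D over fMRI volumes, with saccade-related activity excluded before classification.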
Affiliation(s)
- Andrew T. Smith
- Department of Psychology, Royal Holloway, University of London, Egham, United Kingdom
|
33
|
Abstract
Repetitive experience with the same visual stimulus and task can remarkably improve behavioral performance on the task. This well-known perceptual-learning phenomenon is usually specific to the trained retinal or visual-field location, which is taken as an indication of plastic changes in retinotopic visual areas. In previous studies of perceptual learning, however, a change in stimulus location on the retina is accompanied by positional changes of the stimulus in nonretinotopic frames of reference, such as relative to the head and other objects. It is unclear, therefore, whether the putative location specificity is exclusively retinotopic or if it could also depend on nonretinotopic representation of the stimulus, which is particularly important for multisensory and sensorimotor integration as well as for maintenance of stable visual percepts. Here, by manipulating subjects' gaze direction to control spatial and retinal locations of stimuli independently, we found that, when the stimulated retinal regions were held constant, the improvement with training in motion-direction discrimination of two successively displayed stimuli was restricted to the relative spatial position of the stimuli but independent of their absolute locations in head- and world-centered frames. These findings indicate location specificity of perceptual learning beyond the retinotopic frame of reference, suggesting a pliable spatiotopic mechanism that can be specifically shaped by experience for better spatiotemporal integration of the learned stimuli.
|
34
|
Durand JB, Trotter Y, Celebrini S. Privileged Processing of the Straight-Ahead Direction in Primate Area V1. Neuron 2010; 66:126-37. [DOI: 10.1016/j.neuron.2010.03.014] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/03/2010] [Indexed: 10/19/2022]
|
35
|
Abstract
Visual scene interpretation depends on assumptions based on the statistical regularities of the world. People have some preference for seeing ambiguously oriented objects (Necker cubes) as if tilted down or viewed from above. This bias is a near certainty in the first instant (∼1 s) of viewing and declines over the course of many seconds. In addition, we found that there is modulation of perceived orientation that varies with position—for example objects on the left are more likely to be interpreted as viewed from the right. Therefore there is both a viewed-from-above prior and a scene position-dependent modulation of perceived 3-D orientation. These results are consistent with the idea that ambiguously oriented objects are initially assigned an orientation consistent with our experience of an asymmetric world in which objects most probably sit on surfaces below eye level.
Affiliation(s)
- Allan C Dobbins
- Department of Biomedical Engineering & Vision Science Research Center, University of Alabama at Birmingham, Birmingham, Alabama, United States of America
|
36
|
Coombes SA, Corcos DM, Sprute L, Vaillancourt DE. Selective regions of the visuomotor system are related to gain-induced changes in force error. J Neurophysiol 2010; 103:2114-23. [PMID: 20181732 DOI: 10.1152/jn.00920.2009] [Citation(s) in RCA: 61] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
When humans perform movements and receive on-line visual feedback about their performance, the spatial qualities of the visual information alter performance. The spatial qualities of visual information can be altered via the manipulation of visual gain, and changes in visual gain lead to changes in force error. The current study used functional magnetic resonance imaging during a steady-state precision grip force task to examine how cortical and subcortical brain activity changes with gain-induced changes in force error. Small increases in visual gain (<1°) were associated with a substantial reduction in force error and a small increase in the spatial amplitude of visual feedback. These behavioral effects corresponded with an increase in activation bilaterally in V3 and V5 and in left primary motor cortex and left ventral premotor cortex. Large increases in visual gain (>1°) were associated with a small change in force error and a large change in the spatial amplitude of visual feedback. These behavioral effects corresponded with increased activity bilaterally in dorsal and ventral premotor areas and right inferior parietal lobule. Finally, activity in the left and right lobule VI of the cerebellum and left and right putamen did not change with increases in visual gain. Together, these findings demonstrate that the visuomotor system does not respond uniformly to changes in the gain of visual feedback. Instead, specific regions of the visuomotor system selectively change in activity related to large changes in force error and large changes in the spatial amplitude of visual feedback.
Affiliation(s)
- Stephen A Coombes
- Department of Kinesiology and Nutrition, University of Illinois at Chicago, 1919 West Taylor, 650 AHSB, M/C 994, Chicago, IL 60612, USA
|
37
|
Ustinova K, Perkins J, Szostakowski L, Tamkei L, Leonard W. Effect of viewing angle on arm reaching while standing in a virtual environment: potential for virtual rehabilitation. Acta Psychol (Amst) 2010; 133:180-90. [PMID: 20021998 DOI: 10.1016/j.actpsy.2009.11.006] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2009] [Revised: 11/08/2009] [Accepted: 11/14/2009] [Indexed: 10/20/2022] Open
Abstract
Functional arm movements, such as reaching while standing, are planned and executed according to our perception of body position in space and are relative to environmental objects. The angle under which the environment is observed is one component used in creating this perception. This suggests that manipulation of viewing angle may modulate whole body movement to affect performance. We tested this by comparing its effect on reaching in a virtually generated environment. Eleven young healthy individuals performed forward and lateral reaches in the virtual environment, presented on a flat screen in third-person perspective. Participants saw a computer-generated model (avatar) of themselves standing in a courtyard facing a semi-circular hedge with flowers. The image was presented in five different viewing angles ranging from seeing the avatar from behind (0 degrees) to viewing from overhead (90 degrees). Participants attempted to touch the furthest flower possible without losing balance or stepping. Kinematic data were collected to analyze endpoint displacement, arm-postural coordination and center of mass (COM) displacement. Results showed that reach distance was greatest with angular perspectives of approximately 45-77.5 degrees, which are larger than those used in analogous real world situations. Larger reaches were characterized by increased involvement of leg and trunk body segments, altered inter-segmental coordination, and decreased inter-segmental movement time lag. Thus viewing angle can be a critical visuomotor variable modulating motor coordination of the whole body and related functional performance. These results can be used in designing virtual reality games, in ergonomic design, teleoperation training, and in designing virtual rehabilitation programs that re-train functional movement in vulnerable individuals.
|
38
|
Reaching in depth: hand position dominates over binocular eye position in the rostral superior parietal lobule. J Neurosci 2009; 29:11461-70. [PMID: 19759295 DOI: 10.1523/jneurosci.1305-09.2009] [Citation(s) in RCA: 52] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Neural activity was recorded in area PE (dorsorostral part of Brodmann's area 5) of the posterior parietal cortex while monkeys performed arm reaching toward memorized targets located at different distances from the body. For any given distance, arm movements were performed while the animal kept binocular eye fixation constant. Under these conditions, the activity of a large proportion (36%) of neurons was modulated by reach distance during the memory period. By varying binocular eye position (vergence angle) and initial hand position, we found that the reaching-related activity of most neurons (61%) was influenced by changing the starting position of the hand, whereas that of a smaller, although substantial, population (13%) was influenced by changes of binocular eye position (i.e., by the angle of vergence). Furthermore, the modulation of the neural activity was better explained expressing the reach movement end-point, corresponding to the memorized target location, in terms of distance from the initial hand position, rather than from the body. These results suggest that the activity of neurons in area PE combines information about eye and hand position to encode target distance for reaching in depth predominantly in hand coordinates. This encoding mechanism is consistent with the position of PE in the functional gradient that characterizes the parieto-frontal network underlying reaching.
|
39
|
Macaluso E. Orienting of spatial attention and the interplay between the senses. Cortex 2009; 46:282-97. [PMID: 19540475 DOI: 10.1016/j.cortex.2009.05.010] [Citation(s) in RCA: 77] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2009] [Revised: 04/27/2009] [Accepted: 05/14/2009] [Indexed: 11/30/2022]
Abstract
Many everyday situations require combining complex sensory signals about the external world with ongoing goals and expectations. Here I examine the role of attention in this process and consider the underlying neural substrates. First, mechanisms of spatial attention in the visual modality are reviewed, emphasising the involvement of fronto-parietal cortex. Spatial attention takes into account endogenous factors, e.g., information about behavioural relevance, as well as signals arising from the external world (stimulus-driven control). Stimulus-driven control is thought to take place automatically and independently from endogenous factors. However, recent findings demonstrate that endogenous and stimulus-driven mechanisms co-operate, jointly contributing to the selection of the relevant spatial location. Next, I will turn to studies of multisensory spatial attention. These have shown that attention control in fronto-parietal cortex operates supramodally. Supramodal control exerts top-down influences onto sensory-specific areas, enhancing the processing of stimuli at the attended location irrespective of modality. Unlike unimodal visual attention, but in line with traditional views of multisensory integration, multisensory attention can operate in a fully automatic manner regardless of relevance and task-set. I discuss these findings in relation to functional/anatomical pathways that may mediate multisensory attention control, highlighting possible links between spatial attention and multisensory integration of space.
Affiliation(s)
- Emiliano Macaluso
- Neuroimaging Laboratory, Santa Lucia Foundation, via Ardeatina 306, Rome, Italy
|
40
|
Bhattacharyya R, Musallam S, Andersen RA. Parietal reach region encodes reach depth using retinal disparity and vergence angle signals. J Neurophysiol 2009; 102:805-16. [PMID: 19439678 DOI: 10.1152/jn.90359.2008] [Citation(s) in RCA: 47] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Performing a visually guided reach requires the ability to perceive the egocentric distance of a target in three-dimensional space. Previous studies have shown that the parietal reach region (PRR) encodes the two-dimensional location of frontoparallel targets in an eye-centered reference frame. To investigate how a reach target is represented in three dimensions, we recorded the spiking activity of PRR neurons from two rhesus macaques trained to fixate and perform memory reaches to targets at different depths. Reach and fixation targets were configured to explore whether neural activity directly reflects egocentric distance as the amplitude of the required motor command, which is the absolute depth of the target, or rather the relative depth of the target with reference to fixation depth. We show that planning activity in PRR represents the depth of the reach target as a function of disparity and fixation depth, the spatial parameters important for encoding the depth of a reach goal in an eye-centered reference frame. The strength of modulation by disparity is maintained across fixation depth. Fixation depth gain modulates disparity tuning while preserving the location of peak tuning features in PRR neurons. The results show that individual PRR neurons code depth with respect to the fixation point, that is, in eye-centered coordinates. However, because the activity is gain modulated by vergence angle, the absolute depth can be decoded from the population activity.
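The gain-field scheme this abstract describes, in which vergence scales the amplitude of disparity tuning without moving its peak, can be sketched as follows (the Gaussian tuning curve, linear gain, and all parameter values are hypothetical, not the paper's fitted model):

```python
import numpy as np

# One PRR-like unit (illustrative): Gaussian disparity tuning whose
# amplitude is multiplicatively scaled by vergence angle (the gain field).
def unit_response(disparity, vergence, pref_disparity=0.5, sigma=0.4,
                  gain_slope=0.3, baseline=1.0):
    tuning = np.exp(-((disparity - pref_disparity) ** 2) / (2 * sigma ** 2))
    gain = baseline + gain_slope * vergence   # eye-position gain
    return gain * tuning

disparities = np.linspace(-2.0, 2.0, 201)
near_fix = unit_response(disparities, vergence=2.0)   # large vergence angle
far_fix = unit_response(disparities, vergence=0.5)    # small vergence angle

# The gain changes response amplitude but not the preferred disparity,
# mirroring the "preserved peak tuning features" described above.
peak_near = disparities[np.argmax(near_fix)]
peak_far = disparities[np.argmax(far_fix)]
```

Because each unit keeps its disparity preference while its amplitude carries vergence, a downstream readout of the population can in principle recover absolute depth, which is the decoding argument the abstract makes.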
Affiliation(s)
- Rajan Bhattacharyya
- Computation and Neural Systems, California Institute of Technology, Pasadena, California 91125, USA
|
41
|
Bédard P, Sanes JN. Gaze and hand position effects on finger-movement-related human brain activation. J Neurophysiol 2008; 101:834-42. [PMID: 19005002 DOI: 10.1152/jn.90683.2008] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Humans commonly use their hands to move and to interact with their environment by processing visual and proprioceptive information to determine the location of a goal-object and the initial hand position. It remains elusive, however, how the human brain fully uses this sensory information to generate accurate movements. In monkeys, it appears that frontal and parietal areas use and combine gaze and hand signals to generate movements, whereas in humans, prior work has separately assessed how the brain uses these two signals. Here we investigated whether and how the human brain integrates gaze orientation and hand position during simple visually triggered finger tapping. We hypothesized that parietal, frontal, and subcortical regions involved in movement production would also exhibit modulation of movement-related activation as a function of gaze and hand positions. We used functional MRI to measure brain activation while healthy young adults performed a visually cued finger movement and fixed gaze at each of three locations and held the arm in two different configurations. We found several areas that exhibited activation related to a mixture of these hand and gaze positions; these included the sensory-motor cortex, supramarginal gyrus, superior parietal lobule, superior frontal gyrus, anterior cingulate, and left cerebellum. We also found regions within the left insula, left cuneus, left midcingulate gyrus, left putamen, and right temporo-occipital junction with activation driven only by gaze orientation. Finally, clusters with hand position effects were found in the cerebellum bilaterally. Our results indicate that these areas integrate at least two signals to perform visual-motor actions and that these could be used to subserve sensory-motor transformations.
Affiliation(s)
- Patrick Bédard
- Department of Neuroscience, Alpert Medical School, Brown University, 185 Meeting St., Box GL-N, Providence, RI 02912, USA
|
42
|
Lehky SR, Peng X, McAdams CJ, Sereno AB. Spatial modulation of primate inferotemporal responses by eye position. PLoS One 2008; 3:e3492. [PMID: 18946508 PMCID: PMC2567040 DOI: 10.1371/journal.pone.0003492] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2008] [Accepted: 09/15/2008] [Indexed: 01/19/2023] Open
Abstract
Background A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information. Methodology/Principal Findings We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye position effect in over 40% of recorded neurons with small gaze angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity. Conclusions/Significance These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object shape processing with ventral structures such as AIT but places spatial processing in a separate anatomical stream projecting to dorsal structures.
Affiliation(s)
- Sidney R. Lehky
- Computational Neuroscience Laboratory, The Salk Institute, La Jolla, California, United States of America
- Department of Neurobiology and Anatomy, University of Texas Houston Health Science Center, Houston, Texas, United States of America
- Xinmiao Peng
- Department of Neurobiology and Anatomy, University of Texas Houston Health Science Center, Houston, Texas, United States of America
- Carrie J. McAdams
- Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, Texas, United States of America
- Anne B. Sereno
- Department of Neurobiology and Anatomy, University of Texas Houston Health Science Center, Houston, Texas, United States of America
|
43
|
Collignon O, Davare M, De Volder AG, Poirier C, Olivier E, Veraart C. Time-course of posterior parietal and occipital cortex contribution to sound localization. J Cogn Neurosci 2008; 20:1454-63. [PMID: 18303980 DOI: 10.1162/jocn.2008.20102] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
It has been suggested that both the posterior parietal cortex (PPC) and the extrastriate occipital cortex (OC) participate in the spatial processing of sounds. However, the precise time-course of their contribution remains unknown, which is of particular interest, considering that it could give new insights into the mechanisms underlying auditory space perception. To address this issue, we have used event-related transcranial magnetic stimulation (TMS) to induce virtual lesions of either the right PPC or right OC at different delays in subjects performing a sound lateralization task. Our results confirmed that these two areas participate in the spatial processing of sounds. More precisely, we found that TMS applied over the right OC 50 msec after the stimulus onset significantly impaired the localization of sounds presented either to the right or to the left side. Moreover, right PPC virtual lesions induced 100 and 150 msec after sound presentation led to a rightward bias for stimuli delivered on the center and on the left side, reproducing transiently the deficits commonly observed in hemineglect patients. The finding that the right OC is involved in sound processing before the right PPC suggests that the OC exerts a feedforward influence on the PPC during auditory spatial processing.
Affiliation(s)
- Olivier Collignon
- Neural Rehabilitation Engineering Laboratory, Université Catholique de Louvain, Brussels, Belgium
|
44
|
Van Pelt S, Medendorp WP. Updating Target Distance Across Eye Movements in Depth. J Neurophysiol 2008; 99:2281-90. [DOI: 10.1152/jn.01281.2007] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We tested between two coding mechanisms that the brain may use to retain distance information about a target for a reaching movement across vergence eye movements. If the brain were to encode a retinal disparity representation (retinal model), i.e., target depth relative to the plane of fixation, each vergence eye movement would require an active update of this representation to preserve depth constancy. Alternatively, if the brain were to store an egocentric distance representation of the target by integrating retinal disparity and vergence signals at the moment of target presentation, this representation should remain stable across subsequent vergence shifts (nonretinal model). We tested between these schemes by measuring errors of human reaching movements (n = 14 subjects) to remembered targets, briefly presented before a vergence eye movement. For comparison, we also tested their directional accuracy across version eye movements. With intervening vergence shifts, the memory-guided reaches showed an error pattern that was based on the new eye position and on the depth of the remembered target relative to that position. This suggests that target depth is recomputed after the gaze shift, supporting the retinal model. Our results also confirm earlier literature showing retinal updating of target direction. Furthermore, regression analyses revealed updating gains close to one for both target depth and direction, suggesting that the errors arise after the updating stage during the subsequent reference frame transformations that are involved in reaching.
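The retinal and nonretinal coding schemes contrasted above can be illustrated with a toy calculation (all distances are hypothetical values chosen for the example, not data from the study):

```python
# Toy contrast of the two schemes for retaining target distance across
# a vergence eye movement (all quantities hypothetical, in cm).
target_depth = 40.0   # egocentric target distance at encoding
fix_before = 50.0     # fixation distance when the target was seen
fix_after = 30.0      # fixation distance after the vergence shift

# Retinal model: depth stored RELATIVE to the fixation plane...
rel_at_encoding = target_depth - fix_before          # nearer than fixation
# ...so each vergence shift requires an active update of that representation.
updated_rel = rel_at_encoding + (fix_before - fix_after)
retinal_estimate = fix_after + updated_rel

# Nonretinal model: egocentric distance stored once at presentation,
# unaffected by later vergence shifts.
nonretinal_estimate = target_depth

# With perfect updating both schemes recover the same target distance;
# the models differ in the reach errors predicted when updating (or the
# later reference frame transformation) is imperfect, which is what the
# study's error patterns discriminate.
```

The finding that reach errors depended on the new eye position is what favors the retinal (actively updated) scheme here.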
|
45
|
Bédard P, Thangavel A, Sanes JN. Gaze influences finger movement-related and visual-related activation across the human brain. Exp Brain Res 2008; 188:63-75. [PMID: 18350284] [DOI: 10.1007/s00221-008-1339-3]
Abstract
The brain uses gaze orientation to organize myriad spatial tasks, including hand movements. However, the neural correlates of gaze signals and their interaction with brain systems for arm movement control remain unresolved. Many studies have shown that gaze orientation modifies neuronal spike discharge in monkeys and activation in humans related to reaching and finger movements in parietal and frontal areas. To continue earlier studies that addressed the interaction of horizontal gaze and hand movements in humans (Baker et al. 1999), we assessed how horizontal and vertical gaze deviations modified finger-related activation, hypothesizing that areas throughout the brain would exhibit movement-related activation that depended on gaze angle. The results revealed finger movement-related activation that depended on combinations of horizontal, vertical, and diagonal gaze deviations. We extended our prior findings to observation of these gaze-dependent effects in visual cortex, parietal cortex, motor cortex, the supplementary motor area, putamen, and cerebellum. Most significantly, we found a modulation bias for increased activation toward rightward, upper-right, and vertically upward gaze deviations. Our results indicate that gaze modulation of finger movement-related regions in the human brain is spatially organized and could subserve sensorimotor transformations.
Affiliation(s)
- Patrick Bédard
- Department of Neuroscience, Alpert Medical School of Brown University, Box GL-N, Providence, RI 02912, USA
|
46
|
The coding of perceived eye position. Exp Brain Res 2008; 187:429-37. [DOI: 10.1007/s00221-008-1313-0]
|
47
|
Quinlan DJ, Culham JC. fMRI reveals a preference for near viewing in the human parieto-occipital cortex. Neuroimage 2007; 36:167-87. [PMID: 17398117] [DOI: 10.1016/j.neuroimage.2007.02.029]
Abstract
Posterior parietal cortex in primates contains several functional areas associated with visual control of body effectors (e.g., arm, hand and head) which contain neurons tuned to specific depth ranges appropriate for the effector. For example, the macaque ventral intraparietal area (VIP) is involved in head movements and is selective for motion in near-space around the head. We used functional magnetic resonance imaging to examine activation in the putative human VIP homologue (pVIP), as well as parietal and occipital cortex, as a function of viewing distance when multiple cues to target depth were available (Experiment 1) and when only oculomotor cues were available (Experiment 2). In Experiment 1, subjects viewed stationary or moving disks presented at three distances (with equal retinal sizes). Although activation in pVIP showed no preference for any particular spatial range, the dorsal parieto-occipital sulcus (dPOS) demonstrated a near-space preference, with activation highest for near viewing, moderate for arm's length viewing, and lowest for far viewing. In Experiment 2, we investigated whether the near response alone (convergence of the eyes, accommodation of the lens and pupillary constriction) was sufficient to elicit this same activation pattern. Subjects fixated lights presented at three distances which were illuminated singly (with luminance and visual angle equated across distances). dPOS displayed the same gradient of activation (Near>Medium>Far) as that seen in Experiment 1, even with reduced cues to depth. dPOS seems to reflect the status of the near response (perhaps driven largely by vergence angle) and may provide areas in the dorsal visual stream with spatial information useful for guiding actions toward targets in depth.
Affiliation(s)
- D J Quinlan
- Neuroscience Program, Social Science Centre, The University of Western Ontario, London, Ontario, Canada N6A 5C2.
|
48
|
Noest AJ, van Ee R, van den Berg AV. Direct extraction of curvature-based metric shape from stereo by view-modulated receptive fields. Biol Cybern 2006; 95:455-86. [PMID: 16955316] [DOI: 10.1007/s00422-006-0101-9]
Abstract
Any computation of metric surface structure from horizontal disparities depends on the viewing geometry, and analysing this dependence allows us to narrow down the choice of viable schemes. For example, all depth-based or slant-based schemes (i.e. nearly all existing models) are found to be unrealistically sensitive to natural errors in vergence. Curvature-based schemes avoid these problems and require only moderate, more robust view-dependent corrections to yield local object shape, without any depth coding. This fits the fact that humans are strikingly insensitive to global depth but accurate in discriminating surface curvature. The latter also excludes coding only affine structure. In view of new adaptation results, our goal becomes to directly extract retinotopic fields of metric surface curvatures (i.e. avoiding intermediate disparity curvature). To find a robust neural realisation, we combine new exact analysis with basic neural and psychophysical constraints. Systematic, step-by-step 'design' leads to neural operators which employ a novel family of 'dynamic' receptive fields (RFs), tuned to specific (bi-)local disparity structure. The required RF family is dictated by the non-Euclidean geometry that we identify as inherent in cyclopean vision. The dynamic RF-subfield patterns are controlled via gain modulation by binocular vergence and version, and parameterised by a cell-specific tuning to slant. Our full characterisation of the neural operators invites a range of new neurophysiological tests. Regarding shape perception, the model inverts widely accepted interpretations: It predicts the various types of errors that have often been mistaken for evidence against metric shape extraction.
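The gain-modulation mechanism the model builds on can be sketched minimally. The following is a generic multiplicative gain field, not the paper's dynamic-RF operators, and every parameter is illustrative: a unit's response is a fixed retinotopic receptive field scaled by a gain that depends on eye position, so tuning shape is preserved while amplitude carries the eye-position signal.

```python
import math

# Minimal gain-field sketch (illustrative; not the paper's "dynamic RF"
# family). Response = retinotopic RF x eye-position-dependent gain.

def rf(x, center=0.0, sigma=1.0):
    """Gaussian retinotopic receptive field over retinal position x (deg)."""
    return math.exp(-0.5 * ((x - center) / sigma) ** 2)

def gain(eye_pos_deg, slope=0.02, offset=1.0):
    """Planar eye-position gain, clipped so it never goes negative."""
    return max(offset + slope * eye_pos_deg, 0.0)

def response(x, eye_pos_deg):
    # Multiplicative interaction: the retinal tuning profile is unchanged,
    # only its amplitude is modulated by where the eyes point.
    return rf(x) * gain(eye_pos_deg)

xs = [i - 5 for i in range(11)]               # retinal positions -5..+5 deg
r_left = [response(x, -20.0) for x in xs]     # gaze 20 deg left
r_right = [response(x, +20.0) for x in xs]    # gaze 20 deg right
```

Because the peak stays at the same retinal location while the amplitude shifts, a downstream population can read out eye position (or, in the paper's scheme, vergence and version) without losing retinotopy.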
Affiliation(s)
- A J Noest
- Functional Neurobiology Department, Utrecht University, NEST, Limalaan 30, 3584-CL, Utrecht, The Netherlands.
|
49
|
Isbell LA. Snakes as agents of evolutionary change in primate brains. J Hum Evol 2006; 51:1-35. [PMID: 16545427] [DOI: 10.1016/j.jhevol.2005.12.012]
Abstract
Current hypotheses that use visually guided reaching and grasping to explain orbital convergence, visual specialization, and brain expansion in primates are open to question now that neurological evidence reveals no correlation between orbital convergence and the visual pathway in the brain that is associated with reaching and grasping. An alternative hypothesis proposed here posits that snakes were ultimately responsible for these defining primate characteristics. Snakes have a long, shared evolutionary existence with crown-group placental mammals and were likely to have been their first predators. Mammals are conservative in the structures of the brain that are involved in vigilance, fear, and learning and memory associated with fearful stimuli, e.g., predators. Some of these areas have expanded in primates and are more strongly connected to visual systems. However, primates vary in the extent of brain expansion. This variation is coincident with variation in evolutionary co-existence with the more recently evolved venomous snakes. Malagasy prosimians have never co-existed with venomous snakes, New World monkeys (platyrrhines) have had interrupted co-existence with venomous snakes, and Old World monkeys and apes (catarrhines) have had continuous co-existence with venomous snakes. The koniocellular visual pathway, arising from the retina and connecting to the lateral geniculate nucleus, the superior colliculus, and the pulvinar, has expanded along with the parvocellular pathway, a visual pathway that is involved with color and object recognition. I suggest that expansion of these pathways co-occurred, with the koniocellular pathway being crucially involved (among other tasks) in pre-attentional visual detection of fearful stimuli, including snakes, and the parvocellular pathway being involved (among other tasks) in protecting the brain from increasingly greater metabolic demands to evolve the neural capacity to detect such stimuli quickly. A diet that included fruits or nectar (though not to the exclusion of arthropods), which provided sugars as a neuroprotectant, may have been a required preadaptation for the expansion of such metabolically active brains. Taxonomic differences in evolutionary exposure to venomous snakes are associated with similar taxonomic differences in rates of evolution in cytochrome oxidase genes and in the metabolic activity of cytochrome oxidase proteins in at least some visual areas in the brains of primates. Raptors that specialize in eating snakes have larger eyes and greater binocularity than more generalized raptors, and provide non-mammalian models for snakes as a selective pressure on primate visual systems. These models, along with evidence from paleobiogeography, neuroscience, ecology, behavior, and immunology, suggest that the evolutionary arms race begun by constrictors early in mammalian evolution continued with venomous snakes. Whereas other mammals responded by evolving physiological resistance to snake venoms, anthropoids responded by enhancing their ability to detect snakes visually before the strike.
Affiliation(s)
- Lynne A Isbell
- Department of Anthropology, University of California, Davis, 95616, USA.
|
50
|
Vaillancourt DE, Haibach PS, Newell KM. Visual angle is the critical variable mediating gain-related effects in manual control. Exp Brain Res 2006; 173:742-50. [PMID: 16604313] [PMCID: PMC2366211] [DOI: 10.1007/s00221-006-0454-2]
Abstract
Theoretically, visual gain has been identified as a control variable in models of isometric force. However, visual gain is typically confounded with visual angle and distance, and the relative contribution of visual gain, distance, and angle to the control of force remains unclear. This study manipulated visual gain, distance, and angle in three experiments to examine the visual information properties used to regulate the control of a constant level of isometric force. Young adults performed a flexion motion of the index finger of the dominant hand in 20-s trials under a range of parameter values of the three visual variables. The findings demonstrate that the amount and structure of the force fluctuations were organized around the variable of visual angle, rather than gain or distance. Furthermore, the amount and structure of the force fluctuations changed considerably up to 1°, with little change beyond a 1° visual angle. Visual angle is the critical informational variable for the visuomotor system during the control of isometric force.
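The confound the experiments untangle follows from standard viewing geometry: the visual angle subtended by a target depends only on the ratio of its physical size to the viewing distance, so doubling display gain (doubling on-screen excursion) or halving the viewing distance produces exactly the same angle. A brief sketch with illustrative values (the formula is the standard one; the numbers are not from the paper):

```python
import math

# Visual angle subtended by an object of a given size at a given distance.
def visual_angle_deg(size_m, distance_m):
    """Full visual angle (degrees) for an object of `size_m` at `distance_m`."""
    return math.degrees(2.0 * math.atan(size_m / (2.0 * distance_m)))

# Why gain and distance are confounded through angle:
a_base = visual_angle_deg(0.01, 0.50)     # 1 cm feedback excursion at 50 cm
a_gain_x2 = visual_angle_deg(0.02, 0.50)  # display gain doubled
a_closer = visual_angle_deg(0.01, 0.25)   # viewing distance halved instead

# Both manipulations yield the identical subtended angle,
# so only factorial designs like this study's can separate them.
print(f"{a_base:.3f} vs {a_gain_x2:.3f} vs {a_closer:.3f} deg")
```

This is why the study had to vary gain, distance, and angle independently before concluding that angle, not gain or distance per se, organizes the force fluctuations.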
Affiliation(s)
- David E Vaillancourt
- Department of Movement Sciences (M/C 994), University of Illinois at Chicago, 808 S. Wood St., 690 CME, Chicago, IL 60612, USA.
|