101
Raffi M, Persiani M, Piras A, Squatrito S. Optic flow neurons in area PEc integrate eye and head position signals. Neurosci Lett 2014; 568:23-8. PMID: 24690577. DOI: 10.1016/j.neulet.2014.03.042.
Abstract
Neurons in area PEc, a visual area located in the superior parietal lobule, are activated by optic flow stimuli. An important issue is whether PEc neurons are able to integrate multimodal signals, such as those related to optic flow selectivity, with signals about eye and head position. The aim of this study was to assess whether angle of gaze and/or head rotation modify the spatial representation of the focus of expansion (FOE), by varying the FOE, fixation point, and head position in space. We found that head rotation modulated the firing activity of PEc optic flow neurons. Head position also changed the effect of gaze angle on PEc neuronal activity. All recorded neurons showed a main interaction effect between head and eye position on selectivity for optic flow stimuli. These results suggest that PEc optic flow neurons use different reference frames depending on the position of the eye and/or the head in space, emphasizing a possible contribution of this area to guiding locomotion by integrating multiple extraretinal inputs.
Affiliation(s)
- Milena Raffi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy.
- Michela Persiani
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alessandro Piras
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Salvatore Squatrito
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
102
Brostek L, Büttner U, Mustari MJ, Glasauer S. Eye velocity gain fields in MSTd during optokinetic stimulation. Cereb Cortex 2014; 25:2181-90. PMID: 24557636. DOI: 10.1093/cercor/bhu024.
Abstract
Lesion studies argue for an involvement of the cortical dorsal medial superior temporal area (MSTd) in the control of optokinetic response (OKR) eye movements to planar visual stimulation. Neural recordings during OKR suggested that MSTd neurons directly encode stimulus velocity. On the other hand, studies using radial visual flow together with voluntary smooth pursuit eye movements showed that visual motion responses were modulated by eye movement-related signals. Here, we investigated neural responses in MSTd during continuous optokinetic stimulation using an information-theoretic approach for characterizing neural tuning with high resolution. We show that the majority of MSTd neurons exhibit gain-field-like tuning functions rather than directly encoding one variable. Neural responses showed a large diversity of tuning to combinations of retinal and extraretinal input. Eye velocity-related activity was observed prior to the actual eye movements, reflecting an efference copy. The observed tuning functions resembled those emerging in a network model trained to perform summation of two population-coded signals. Together, our findings support the hypothesis that MSTd implements the visuomotor transformation from retinal to head-centered stimulus velocity signals for the control of the OKR.
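The gain-field scheme this abstract describes can be illustrated with a toy population model. The Gaussian tuning curves, linear eye-velocity gains, and all parameters below are our assumptions for illustration, not the authors' fitted tuning functions: each unit multiplies retinal-velocity tuning by an eye-velocity gain, and a single linear readout of the population then recovers head-centered stimulus velocity (retinal slip plus eye velocity).

```python
import numpy as np

rng = np.random.default_rng(1)
n_units = 60
pref = rng.uniform(-30, 30, n_units)        # preferred retinal velocities (deg/s)
slope = rng.uniform(-0.02, 0.02, n_units)   # eye-velocity gain slopes (per deg/s)

def population(retinal, eye):
    """Firing of all units: Gaussian retinal tuning x linear eye-velocity gain."""
    tuning = np.exp(-0.5 * ((retinal[:, None] - pref) / 10.0) ** 2)
    gain = 1.0 + slope * eye[:, None]       # multiplicative gain field
    return tuning * gain

retinal = rng.uniform(-30, 30, 500)         # retinal slip velocities
eye = rng.uniform(-20, 20, 500)             # eye velocities
R = population(retinal, eye)
head = retinal + eye                        # head-centered stimulus velocity

# one fixed linear readout of the gain-field population recovers head velocity
X = np.c_[R, np.ones(len(R))]
w, *_ = np.linalg.lstsq(X, head, rcond=None)
pred = X @ w
```

This is the standard basis-function argument: because each unit mixes a retinal basis function with an eye-velocity gain, a single weight vector can simultaneously decode the sum of the two population-coded signals.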
Affiliation(s)
- Lukas Brostek
- Clinical Neurosciences; Bernstein Center for Computational Neuroscience, Munich 81377, Germany
- Ulrich Büttner
- Clinical Neurosciences; German Vertigo Center IFB, Ludwig-Maximilians-Universität, Munich 81377, Germany
- Michael J Mustari
- Department of Ophthalmology and Washington National Primate Research Center, University of Washington, Seattle, WA 98195, USA
- Stefan Glasauer
- Clinical Neurosciences; German Vertigo Center IFB, Ludwig-Maximilians-Universität, Munich 81377, Germany; Bernstein Center for Computational Neuroscience, Munich 81377, Germany
103
Sasaoka T, Mizuhara H, Inui T. Dynamic parieto-premotor network for mental image transformation revealed by simultaneous EEG and fMRI measurement. J Cogn Neurosci 2014; 26:232-46. PMID: 24116844. DOI: 10.1162/jocn_a_00493.
Abstract
Previous studies have suggested that the posterior parietal cortices and premotor areas are involved in mental image transformation. However, it remains unknown whether these regions really cooperate to realize mental image transformation. In this study, simultaneous EEG and fMRI were performed to clarify the spatio-temporal properties of neural networks engaged in mental image transformation. We adopted a modified version of the mental clock task used by Sack et al. [Sack, A. T., Camprodon, J. A., Pascual-Leone, A., & Goebel, R. The dynamics of interhemispheric compensatory processes in mental imagery. Science, 308, 702–704, 2005; Sack, A. T., Sperling, J. M., Prvulovic, D., Formisano, E., Goebel, R., Di Salle, F., et al. Tracking the mind's image in the brain II: Transcranial magnetic stimulation reveals parietal asymmetry in visuospatial imagery. Neuron, 35, 195–204, 2002]. In the modified mental clock task, participants mentally rotated clock hands from the position initially presented at a learned speed for various durations. Subsequently, they matched the position to the visually presented clock hands. During mental rotation of the clock hands, we observed significant beta EEG suppression with respect to the amount of mental rotation at the right parietal electrode. The beta EEG suppression accompanied activity in the bilateral parietal cortices and left premotor cortex, representing a dynamic cortical network for mental image transformation. These results suggest that motor signals from the premotor area were utilized for mental image transformation in the parietal areas and for updating the imagined clock hands represented in the right posterior parietal cortex.
104
Bourrelly A, Vercher JL, Bringoux L. To pass or not to pass: more a question of body orientation than visual cues. Q J Exp Psychol (Hove) 2014; 67:1668-81. PMID: 24224565. DOI: 10.1080/17470218.2013.864687.
Abstract
This study investigated the influence of pitch body tilt on judging the possibility of passing under high obstacles in the presence of an illusory horizontal self-motion. Seated subjects tilted at various body orientations were asked to estimate the possibility of passing under a projected bar (i.e., a parking barrier), while imagining a forward whole-body displacement normal to gravity. This task was performed under two visual conditions, providing either no visual surroundings or a translational horizontal optic flow that stopped just before the barrier appeared. The results showed an overall overestimation of the possibility of passing under the bar in both cases and, most importantly, revealed a strong influence of body orientation despite the visual specification of horizontal self-motion by optic flow (i.e., both visual conditions yielded a comparable body tilt effect). Specifically, the subjective passability was deviated towards the body tilt by 46% of its magnitude when facing a horizontal optic flow and by 43% without visual surroundings. This suggests that the egocentric attraction exerted by body tilt when referring the subjective passability to horizontal self-motion persists even when anchoring horizontally related visual cues are displayed. These findings are discussed in terms of interaction between spatial references. The link between the reliability of available sensory inputs and the weight attributed to each reference is also addressed.
Affiliation(s)
- A Bourrelly
- Aix-Marseille Université, CNRS, ISM UMR, Marseille, France
105
Szczepanski SM, Saalmann YB. Human fronto-parietal and parieto-hippocampal pathways represent behavioral priorities in multiple spatial reference frames. Bioarchitecture 2013; 3:147-52. PMID: 24322829. PMCID: PMC3907462. DOI: 10.4161/bioa.27462.
Abstract
We represent behaviorally relevant information in different spatial reference frames in order to interact effectively with our environment. For example, we need an egocentric (e.g., body-centered) reference frame to specify limb movements and an allocentric (e.g., world-centered) reference frame to navigate from one location to another. Posterior parietal cortex (PPC) is vital for performing transformations between these different coordinate systems. Here, we review evidence for multiple pathways in the human brain, from PPC to motor, premotor, and supplementary motor areas, as well as to structures in the medial temporal lobe. These connections are important for transformations between egocentric reference frames to facilitate sensory-guided action, or from egocentric to allocentric reference frames to facilitate spatial navigation.
Affiliation(s)
- Sara M Szczepanski
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Yuri B Saalmann
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
106
Chen X, DeAngelis GC, Angelaki DE. Diverse spatial reference frames of vestibular signals in parietal cortex. Neuron 2013; 80:1310-21. PMID: 24239126. DOI: 10.1016/j.neuron.2013.09.006.
Abstract
Reference frames are important for understanding how sensory cues from different modalities are coordinated to guide behavior, and the parietal cortex is critical to these functions. We compare reference frames of vestibular self-motion signals in the ventral intraparietal area (VIP), parietoinsular vestibular cortex (PIVC), and dorsal medial superior temporal area (MSTd). Vestibular heading tuning in VIP is invariant to changes in both eye and head positions, indicating a body (or world)-centered reference frame. Vestibular signals in PIVC have reference frames that are intermediate between head and body centered. In contrast, MSTd neurons show reference frames between head and eye centered but not body centered. Eye and head position gain fields were strongest in MSTd and weakest in PIVC. Our findings reveal distinct spatial reference frames for representing vestibular signals and pose new challenges for understanding the respective roles of these areas in potentially diverse vestibular functions.
Affiliation(s)
- Xiaodong Chen
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
107
Boerlin M, Machens CK, Denève S. Predictive coding of dynamical variables in balanced spiking networks. PLoS Comput Biol 2013; 9:e1003258. PMID: 24244113. PMCID: PMC3828152. DOI: 10.1371/journal.pcbi.1003258.
Abstract
Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
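The abstract's core spike rule, fire only if a spike improves the linear readout of the represented variable, can be sketched in a few lines. This greedy scalar toy model with made-up weights and constants is our simplification for illustration, not the authors' full derivation of a leaky integrate-and-fire network for arbitrary linear dynamical systems:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt, lam = 20, 2000, 1e-3, 10.0            # neurons, steps, step size, readout leak
signs = np.where(np.arange(N) % 2 == 0, 1.0, -1.0)
Gamma = rng.uniform(0.05, 0.15, N) * signs      # decoding weights (push/pull pairs)
x = np.sin(np.linspace(0, 2 * np.pi, T))        # target dynamical variable
xhat = 0.0                                      # linear readout of the spike train
trace = np.zeros(T)
spikes = np.zeros((T, N))
for t in range(T):
    err = x[t] - xhat
    V = Gamma * err                             # "voltage": error projected on weights
    i = int(np.argmax(V - 0.5 * Gamma ** 2))    # candidate giving best error reduction
    if V[i] > 0.5 * Gamma[i] ** 2:              # fire only if it improves the code
        xhat += Gamma[i]                        # each spike updates the readout
        spikes[t, i] = 1.0
    xhat -= dt * lam * xhat                     # leaky decay of the readout
    trace[t] = xhat
```

The "voltage" here is exactly a prediction error about the population-level signal, as in the abstract: a neuron's spike is emitted only when its error reduction (proportional to `V - 0.5 * Gamma**2`) is positive, which keeps the readout within roughly one decoding weight of the target.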
Affiliation(s)
- Martin Boerlin
- Group for Neural Theory, Département d'Études Cognitives, École Normale Supérieure, Paris, France
- Christian K. Machens
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Sophie Denève
- Group for Neural Theory, Département d'Études Cognitives, École Normale Supérieure, Paris, France
108
Abstract
We have argued that the neurocognitive representation of large-scale, navigable three-dimensional space is anisotropic, having different properties in vertical versus horizontal dimensions. Three broad categories organize the experimental and theoretical issues raised by the commentators: (1) frames of reference, (2) comparative cognition, and (3) the role of experience. These categories contain the core of a research program to show how three-dimensional space is represented and used by humans and other animals.
109
Guinet M, Michel C. Prism adaptation and neck muscle vibration in healthy individuals: are two methods better than one? Neuroscience 2013; 254:443-51. PMID: 24035829. DOI: 10.1016/j.neuroscience.2013.08.067.
Abstract
Studies involving therapeutic combinations reveal an important benefit in the rehabilitation of neglect patients when compared to single therapies. In light of these observations, our present work examines, in healthy individuals, the sensorimotor and cognitive after-effects of prism adaptation and neck muscle vibration applied individually or simultaneously. We explored sensorimotor after-effects on visuo-manual open-loop pointing and on visual and proprioceptive straight-ahead estimations. We assessed cognitive after-effects on the line bisection task. Fifty-four healthy participants were divided into six groups designated according to the exposure procedure used with each: 'Prism' (P) group; 'Vibration with a sensation of body rotation' (Vb) group; 'Vibration with a movement illusion of the LED' (Vl) group; 'Association with a sensation of body rotation' (Ab) group; 'Association with a movement illusion of the LED' (Al) group; and 'Control' (C) group. The main findings were that prism adaptation, applied alone or combined with vibration, produced significant adaptation in visuo-manual open-loop pointing, visual straight-ahead and proprioceptive straight-ahead. Vibration alone produced significant after-effects on proprioceptive straight-ahead estimation in the Vl group. Furthermore, all groups (except the C group) showed a rightward neglect-like bias in line bisection following the training procedure. This is the first demonstration of cognitive after-effects following neck muscle vibration in healthy individuals. The simultaneous application of both methods did not produce significantly greater after-effects than prism adaptation alone in either sensorimotor or cognitive tasks. These results are discussed in terms of transfer of sensorimotor plasticity to spatial cognition in healthy individuals.
Affiliation(s)
- M Guinet
- Université de Bourgogne, Campus Universitaire, UFR STAPS, BP 27877, Dijon F-21078, France; INSERM, U 1093, Cognition, Action et Plasticité sensorimotrice, Dijon F-21078, France
110
Functional and structural architecture of the human dorsal frontoparietal attention network. Proc Natl Acad Sci U S A 2013; 110:15806-11. PMID: 24019489. DOI: 10.1073/pnas.1313903110.
Abstract
The dorsal frontoparietal attention network has been subdivided into at least eight areas in humans. However, the circuitry linking these areas and the functions of different circuit paths remain unclear. Using a combination of neuroimaging techniques to map spatial representations in frontoparietal areas, their functional interactions, and structural connections, we demonstrate different pathways across human dorsal frontoparietal cortex for the control of spatial attention. Our results are consistent with these pathways computing object-centered and/or viewer-centered representations of attentional priorities depending on task requirements. Our findings provide an organizing principle for the frontoparietal attention network, where distinct pathways between frontal and parietal regions contribute to multiple spatial representations, enabling flexible selection of behaviorally relevant information.
111
Chang SWC. Coordinate transformation approach to social interactions. Front Neurosci 2013; 7:147. PMID: 23970850. PMCID: PMC3748418. DOI: 10.3389/fnins.2013.00147.
Abstract
A coordinate transformation framework for understanding how neurons compute sensorimotor behaviors has generated significant advances toward our understanding of basic brain function. This influential scaffold focuses on neuronal encoding of spatial information represented in different coordinate systems (e.g., eye-centered, hand-centered) and how multiple brain regions partake in transforming these signals in order to ultimately generate a motor output. A powerful analogy can be drawn from the coordinate transformation framework to better elucidate how the nervous system computes cognitive variables for social behavior. Of particular relevance is how the brain represents information with respect to oneself and other individuals, such as in reward outcome assignment during social exchanges, in order to influence social decisions. In this article, I outline how the coordinate transformation framework can help guide our understanding of neural computations resulting in social interactions. Implications for numerous psychiatric disorders with impaired representations of self and others are also discussed.
Affiliation(s)
- Steve W C Chang
- Center for Cognitive Neuroscience, Duke Institute for Brain Sciences, Duke University, Durham, NC, USA; Department of Psychology, Yale University, New Haven, CT, USA
112
Pickup LC, Fitzgibbon AW, Glennerster A. Modelling human visual navigation using multi-view scene reconstruction. Biol Cybern 2013; 107:449-64. PMID: 23778937. PMCID: PMC3755223. DOI: 10.1007/s00422-013-0558-2.
Abstract
It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer's prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
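The landmark-location stage described in this abstract can be caricatured with a two-view triangulation: each viewpoint contributes a Gaussian likelihood over bearing error, and the product across viewpoints yields a likelihood map whose peak is the predicted landmark location. The geometry, noise model, and all numbers below are our illustrative assumptions, not the authors' photogrammetric model:

```python
import numpy as np

p1 = np.array([0.0, 0.0])                  # first viewpoint
p2 = np.array([4.0, 0.0])                  # second viewpoint after (virtual) transport
landmark = np.array([2.0, 3.0])            # true landmark position
sigma = 0.05                               # assumed bearing noise (radians)

def bearing(frm, to):
    """Direction from one 2D point to another, in radians."""
    d = to - frm
    return np.arctan2(d[1], d[0])

obs = [bearing(p1, landmark), bearing(p2, landmark)]   # noise-free observations here

# log-likelihood of each candidate location given both observed bearings
xs = np.arange(0.0, 5.0001, 0.05)
gx, gy = np.meshgrid(xs, xs)
loglik = np.zeros_like(gx)
for p, th in zip([p1, p2], obs):
    pred = np.arctan2(gy - p[1], gx - p[0])            # bearing each cell would produce
    d = (pred - th + np.pi) % (2 * np.pi) - np.pi      # wrapped angular error
    loglik += -0.5 * (d / sigma) ** 2
row, col = np.unravel_index(np.argmax(loglik), loglik.shape)
est = np.array([gx[row, col], gy[row, col]])           # peak of the likelihood map
```

Adding noise to `obs`, or more viewpoints to the loop, spreads or sharpens the map; comparing such maps with observed navigation end-points is the kind of model-to-behaviour comparison the study performs.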
Affiliation(s)
- Lyndsey C. Pickup
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, RG6 6AL, UK
- Andrew Glennerster
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, RG6 6AL, UK
113
Short-lived effects of a visual inducer during egocentric space perception and manual behavior. Atten Percept Psychophys 2013; 75:1012-26. PMID: 23653410. DOI: 10.3758/s13414-013-0455-8.
Abstract
A pitched visual inducer has a strong effect on the visually perceived elevation of a target in extrapersonal space, and also on the elevation of the arm when a subject points with an unseen arm to the target's elevation. The manual effect is a systematic function of hand-to-body distance (Li and Matin, Vision Research 45:533-550, 2005): When the arm is fully extended, manual responses to perceptually mislocalized luminous targets are veridical; when the arm is close to the body, gross matching errors occur. In the present experiments, we measured this hand-to-body distance effect during the presence of a pitched visual inducer and after inducer offset, using three values of hand-to-body distance (0, 40, and 70 cm) and two open-loop tasks (pointing to the perceived elevation of a target at true eye level and setting the height of the arm to match the elevation). We also measured manual behavior when subjects were instructed to point horizontally under induction and after inducer offset (no visual target at any time). In all cases, the hand-to-body distance effect disappeared shortly after inducer offset. We suggest that the rapid disappearance of the distance effect is a manifestation of processes in the dorsal visual stream that are involved in updating short-lived representations of the arm in egocentric visual perception and manual behavior.
114
Ptak R, Fellrath J. Spatial neglect and the neural coding of attentional priority. Neurosci Biobehav Rev 2013; 37:705-22. DOI: 10.1016/j.neubiorev.2013.01.026.
115
Nardo D, Santangelo V, Macaluso E. Spatial orienting in complex audiovisual environments. Hum Brain Mapp 2014; 35:1597-614. PMID: 23616340. DOI: 10.1002/hbm.22276.
Abstract
Previous studies on crossmodal spatial orienting typically used simple and stereotyped stimuli in the absence of any meaningful context. This study combined computational models, behavioural measures and functional magnetic resonance imaging to investigate audiovisual spatial interactions in naturalistic settings. We created short videos portraying everyday life situations that included a lateralised visual event and a co-occurring sound, either on the same or on the opposite side of space. Subjects viewed the videos with or without eye-movements allowed (overt or covert orienting). For each video, visual and auditory saliency maps were used to index the strength of stimulus-driven signals, and eye-movements were used as a measure of the efficacy of the audiovisual events for spatial orienting. Results showed that visual salience modulated activity in higher-order visual areas, whereas auditory salience modulated activity in the superior temporal cortex. Auditory salience modulated activity also in the posterior parietal cortex, but only when audiovisual stimuli occurred on the same side of space (multisensory spatial congruence). Orienting efficacy affected activity in the visual cortex, within the same regions modulated by visual salience. These patterns of activation were comparable in overt and covert orienting conditions. Our results demonstrate that, during viewing of complex multisensory stimuli, activity in sensory areas reflects both stimulus-driven signals and their efficacy for spatial orienting; and that the posterior parietal cortex combines spatial information about the visual and the auditory modality.
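A stimulus-driven saliency map of the kind used here to index visual signals can be approximated by simple center-surround contrast. This difference-of-box-blurs toy is our stand-in for illustration, not the actual saliency model the authors applied to their videos:

```python
import numpy as np

def box_blur(a, k):
    """Mean over a (2k+1) x (2k+1) window, edge-padded (same output size)."""
    pad = np.pad(a.astype(float), k, mode='edge')
    h, w = a.shape
    out = np.zeros((h, w))
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (2 * k + 1) ** 2

def saliency(img, center=2, surround=8):
    """Center-surround contrast: |fine blur - coarse blur|, peak-normalized."""
    s = np.abs(box_blur(img, center) - box_blur(img, surround))
    return s / s.max()

img = np.zeros((32, 32))
img[10:13, 20:23] = 1.0                              # one small, bright lateralized event
sal = saliency(img)
peak = np.unravel_index(np.argmax(sal), sal.shape)   # most salient location
```

A small bright patch excites the fine blur much more than the coarse one, so the map peaks at the lateralized event; correlating such peaks with gaze landing positions parallels the study's use of saliency to predict orienting.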
Affiliation(s)
- Davide Nardo
- Neuroimaging Laboratory, Santa Lucia Foundation, Rome, Italy
116
Getting lost in Alzheimer's disease: a break in the mental frame syncing. Med Hypotheses 2013; 80:416-21. DOI: 10.1016/j.mehy.2012.12.031.
117
Rochefort C, Lefort JM, Rondi-Reig L. The cerebellum: a new key structure in the navigation system. Front Neural Circuits 2013; 7:35. PMID: 23493515. PMCID: PMC3595517. DOI: 10.3389/fncir.2013.00035.
Abstract
Early investigations of cerebellar function focused on motor learning, in particular on eyeblink conditioning and adaptation of the vestibulo-ocular reflex, and led to the general view that cerebellar long-term depression (LTD) at parallel fiber (PF)–Purkinje cell (PC) synapses is the neural correlate of cerebellar motor learning. Thereafter, while the full complexity of cerebellar plasticities was being unraveled, cerebellar involvement in more cognitive tasks—including spatial navigation—was further investigated. However, cerebellar implication in spatial navigation remains a matter of debate because motor deficits frequently associated with cerebellar damage often prevent the dissociation between its role in spatial cognition from its implication in motor function. Here, we review recent findings from behavioral and electrophysiological analyses of cerebellar mutant mouse models, which show that the cerebellum might participate in the construction of hippocampal spatial representation map (i.e., place cells) and thereby in goal-directed navigation. These recent advances in cerebellar research point toward a model in which computation from the cerebellum could be required for spatial representation and would involve the integration of multi-source self-motion information to: (1) transform the reference frame of vestibular signals and (2) distinguish between self- and externally-generated vestibular signals. We eventually present herein anatomical and functional connectivity data supporting a cerebello-hippocampal interaction. Whilst a direct cerebello-hippocampal projection has been suggested, recent investigations rather favor a multi-synaptic pathway involving posterior parietal and retrosplenial cortices, two regions critically involved in spatial navigation.
118
Xu BY, Karachi C, Goldberg ME. The postsaccadic unreliability of gain fields renders it unlikely that the motor system can use them to calculate target position in space. Neuron 2012; 76:1201-9. PMID: 23259954. DOI: 10.1016/j.neuron.2012.10.034.
Abstract
Gain fields, the eye-position modulation of visual responses, are thought to provide a mechanism by which the motor system can accurately calculate target position in space despite a constantly moving eye. Current gain-field models assume that the modulation of visual responses by eye position is accurate at all times, even around the time of a saccade. Here, we show that for at least 150 ms after a saccade, gain fields in the lateral intraparietal area (LIP) are unreliable. The majority of LIP cells with steady-state gain fields reflect the presaccadic eye position. The remainder of the cells have responses that cannot be predicted by their steady-state gain fields. Nonetheless, a monkey's oculomotor performance is accurate during this time. These results suggest that current models built upon a simple gain-field algorithm cannot be used to calculate the position of a target in space that flashes briefly after a saccade.
Affiliation(s)
- Benjamin Y Xu
- Mahoney-Keck Center for Brain and Behavior Research, Department of Neuroscience, Columbia University College of Physicians and Surgeons, New York, NY 10032, USA.
119
Bufalari I, Di Russo F, Aglioti SM. Illusory and veridical mapping of tactile objects in the primary somatosensory and posterior parietal cortex. Cereb Cortex 2013; 24:1867-78. [PMID: 23438449] [DOI: 10.1093/cercor/bht037]
Abstract
While several behavioral and neuroscience studies have explored visual, auditory, and cross-modal illusions, information about the phenomenology and neural correlates of somatosensory illusions is meager. By combining psychophysics and somatosensory evoked potentials, we explored in healthy humans the neural correlates of two compelling tactuo-proprioceptive illusions: the Aristotle illusion (one object touching the contact area between two crossed fingers is perceived as two lateral objects) and the Reverse illusion (two lateral objects are perceived as one object between the crossed fingers). These illusions likely occur because of the tactuo-proprioceptive conflict induced by crossing the fingers into a non-natural posture. We found that different regions in the somatosensory stream exhibit different proneness to the illusions. Early electroencephalographic somatosensory activity (at 20 ms) originating in the primary somatosensory cortex (S1) reflects the phenomenal rather than the physical properties of the stimuli. Notably, later activity (around 200 ms) originating in the posterior parietal cortex (PPC) is higher when subjects resist the illusions. Thus, while S1 activity is related to illusory perception, the PPC acts as a conflict resolver that recodes tactile events from somatotopic to spatiotopic frames of reference and ultimately enables veridical perception.
Affiliation(s)
- Ilaria Bufalari
- Dipartimento di Psicologia, Università degli Studi di Roma "La Sapienza", I-00185 Rome, Italy, Laboratorio di Neuroscienze Sociali
- Francesco Di Russo
- Centro Ricerche Neuropsicologia, IRCCS Fondazione Santa Lucia, I-00179 Rome, Italy and Dipartimento di Scienze Motorie, Umane e della Salute, Università degli Studi di Roma "Foro Italico", I-00135 Rome, Italy
- Salvatore Maria Aglioti
- Dipartimento di Psicologia, Università degli Studi di Roma "La Sapienza", I-00185 Rome, Italy, Laboratorio di Neuroscienze Sociali
120
Chafee MV, Crowe DA. Thinking in spatial terms: decoupling spatial representation from sensorimotor control in monkey posterior parietal areas 7a and LIP. Front Integr Neurosci 2013; 6:112. [PMID: 23355813] [PMCID: PMC3555036] [DOI: 10.3389/fnint.2012.00112]
Abstract
Perhaps the simplest and most complete description of the cerebral cortex is that it is a sensorimotor controller whose primary purpose is to represent stimuli and movements, and to adaptively control the mapping between them. However, in order to think, the cerebral cortex has to generate patterns of neuronal activity that encode abstract, generalized information independently of ongoing sensorimotor events. A critical question confronting cognitive systems neuroscience at present, therefore, is how neural signals encoding abstract information emerge within the sensorimotor control networks of the brain. In this review, we approach that question in the context of the neural representation of space in the posterior parietal cortex of non-human primates. We describe evidence indicating that parietal cortex generates a hierarchy of spatial representations with three basic levels: (1) sensorimotor signals that are tightly coupled to stimuli or movements, (2) sensorimotor signals modified in strength or timing to mediate cognition (examples include attention, working memory, and decision processing), and (3) signals that encode frankly abstract spatial information (such as spatial relationships or categories) generalizing across a wide diversity of specific stimulus conditions. Here we summarize the evidence for this hierarchy and consider data showing that signals at higher levels derive from signals at lower levels. That in turn could help characterize the neural mechanisms that derive a capacity for abstraction from sensorimotor experience.
Affiliation(s)
- Matthew V Chafee
- Department of Neuroscience, University of Minnesota Medical School, Minneapolis, MN, USA; Brain Sciences Center, VA Medical Center, Minneapolis, MN, USA; Center for Cognitive Sciences, University of Minnesota, Minneapolis, MN, USA
121
Golomb JD, Kanwisher N. Higher level visual cortex represents retinotopic, not spatiotopic, object location. Cereb Cortex 2012; 22:2794-810. [PMID: 22190434] [PMCID: PMC3491766] [DOI: 10.1093/cercor/bhr357]
Abstract
The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex, which is important for stable object recognition and action, contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a "searchlight" analysis across our entire scanned volume to explore additional cortex, but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex object location continues to be represented in retinotopic coordinates.
Affiliation(s)
- Julie D Golomb
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
122
Binetti N, Siegler IA, Bueti D, Doricchi F. Adaptive tuning of perceptual timing to whole body motion. Neuropsychologia 2012; 51:197-210. [PMID: 23142351] [DOI: 10.1016/j.neuropsychologia.2012.10.029]
Abstract
In a previous work we showed that sinusoidal whole-body rotations producing continuous vestibular stimulation affected the timing of motor responses, as assessed with a paced finger tapping (PFT) task (Binetti et al. (2010). Neuropsychologia, 48(6), 1842-1852). Here, in two new psychophysical experiments, one purely perceptual and one with both sensory and motor components, we explored the relationship between body motion/vestibular stimulation and the perceived timing of acoustic events. In experiment 1, participants were required to discriminate sequences of acoustic tones with different degrees of acceleration or deceleration. We found that a tone sequence presented during acceleratory whole-body rotations required a progressive increase in rate in order to be considered temporally regular, consistent with the idea of an increase in "clock" frequency and an overestimation of time. In experiment 2, participants produced self-paced taps, each of which triggered acoustic feedback. Tapping frequency in this task was affected by periodic motion through anticipatory, congruent (in-phase) fluctuations, irrespective of the self-generated sensory feedback. Synchronizing taps to an external rhythm, in contrast, produced the opposite modulation (delayed/counter-phase). Overall, this study shows that body displacements "remap" our metric of time, affecting not only motor output but also sensory input.
123
Brayanov JB, Press DZ, Smith MA. Motor memory is encoded as a gain-field combination of intrinsic and extrinsic action representations. J Neurosci 2012; 32:14951-65. [PMID: 23100418] [PMCID: PMC3999415] [DOI: 10.1523/jneurosci.1928-12.2012]
Abstract
Actions can be planned in either an intrinsic (body-based) reference frame or an extrinsic (world-based) frame, and understanding how the internal representations associated with these frames contribute to the learning of motor actions is a key issue in motor control. We studied the internal representation of this learning in human subjects by analyzing generalization patterns across an array of different movement directions and workspaces after training a visuomotor rotation in a single movement direction in one workspace. This provided a dense sampling of the generalization function across intrinsic and extrinsic reference frames, which allowed us to dissociate intrinsic and extrinsic representations and determine the manner in which they contributed to the motor memory for a trained action. A first experiment showed that the generalization pattern reflected a memory that was intermediate between intrinsic and extrinsic representations. A second experiment showed that this intermediate representation could not arise from separate intrinsic and extrinsic learning. Instead, we find that the representation of learning is based on a gain-field combination of local representations in intrinsic and extrinsic coordinates. This gain-field representation generalizes between actions by effectively computing similarity based on the (Mahalanobis) distance across intrinsic and extrinsic coordinates and is in line with neural recordings showing mixed intrinsic-extrinsic representations in motor and parietal cortices.
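The gain-field combination the authors describe can be caricatured as a product of local Gaussian generalization functions, one per coordinate frame. The widths below are invented and this is a sketch of the functional form only, not the fitted model:

```python
import numpy as np

# Hedged sketch (hypothetical widths, not the fitted model): the shape of
# a gain-field, i.e. multiplicative, combination of local Gaussian
# generalization in intrinsic and extrinsic coordinates. The product of
# two Gaussians is itself Gaussian in a weighted, Mahalanobis-like
# distance over the joint coordinate space.
def local_tuning(dist_deg, width_deg):
    """Local Gaussian generalization within one coordinate frame."""
    return np.exp(-0.5 * (dist_deg / width_deg) ** 2)

def generalization(d_intrinsic, d_extrinsic, w_int=20.0, w_ext=20.0):
    """Multiplicative (gain-field) combination of the two local factors."""
    return local_tuning(d_intrinsic, w_int) * local_tuning(d_extrinsic, w_ext)

g_trained = generalization(0.0, 0.0)    # trained action: full transfer
g_ext_only = generalization(45.0, 0.0)  # matched in extrinsic frame only
g_int_only = generalization(0.0, 45.0)  # matched in intrinsic frame only
# Transfer falls off with distance in *either* frame: neither a purely
# intrinsic nor a purely extrinsic match generalizes fully, consistent
# with the intermediate generalization pattern described above.
```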
Affiliation(s)
- Daniel Z. Press
- Department of Neurology, Beth Israel Deaconess Medical Center, Boston, Massachusetts 02215
- Maurice A. Smith
- School of Engineering and Applied Sciences and Center for Brain Science, Harvard University, Boston, Massachusetts 02138
124
Bird CM, Bisby JA, Burgess N. The hippocampus and spatial constraints on mental imagery. Front Hum Neurosci 2012; 6:142. [PMID: 22629242] [PMCID: PMC3354615] [DOI: 10.3389/fnhum.2012.00142]
Abstract
We review a model of imagery and memory retrieval based on allocentric spatial representation by place cells and boundary vector cells (BVCs) in the medial temporal lobe, and their translation into egocentric images in retrosplenial and parietal areas. In this model, the activity of place cells constrains the contents of imagery and retrieval to be coherent and consistent with the subject occupying a single location, while the activity of head-direction cells along Papez's circuit determines the viewpoint direction for which the egocentric image is generated. An extension of this model is discussed in which grid cells play a role in the dynamic updating of representations (mental navigation). We also discuss an extension of this model that implements a version of the dual representation theory of post-traumatic stress disorder (PTSD), in which PTSD arises from an imbalance between weak allocentric hippocampally mediated contextual representations and strong affective/sensory representations. The implications of these models for behavioral, neuropsychological, and neuroimaging data in humans are explored.
Affiliation(s)
- Chris M Bird
- School of Psychology, University of Sussex, Brighton, UK
125
Chapman BB, Pace MA, Cushing SL, Corneil BD. Recruitment of a contralateral head turning synergy by stimulation of monkey supplementary eye fields. J Neurophysiol 2012; 107:1694-710. [DOI: 10.1152/jn.00487.2011]
Abstract
The supplementary eye fields (SEF) are thought to enable higher-level aspects of oculomotor control. The goal of the present experiment was to learn more about the SEF's role in orienting, specifically by examining neck muscle recruitment evoked by stimulation of the SEF. Neck muscle activity was recorded from multiple muscles in two monkeys during SEF stimulation (100 μA, 150–300 ms, 300 Hz, with the head restrained or unrestrained) delivered 200 ms into a gap period, before a visually guided saccade initiated from a central position (doing so avoids confounds between initial position and prestimulation neck muscle activity). SEF stimulation occasionally evoked overt gaze shifts and/or head movements, but almost always evoked a contralateral head-turning synergy, increasing activity in contralateral turning muscles and decreasing activity in ipsilateral turning muscles (when background activity was present). Neck muscle responses began well in advance of evoked gaze shifts (∼30 ms after stimulation onset, leading gaze shifts by ∼40–70 ms on average), started earlier and attained a larger magnitude when accompanied by progressively larger gaze shifts, and persisted on trials without overt gaze shifts. The patterns of evoked neck muscle responses resembled those evoked by frontal eye field (FEF) stimulation, except that response latencies from the SEF were ∼10 ms longer. This basic description of the cephalomotor command evoked by SEF stimulation suggests that this structure, while further removed from the motor periphery than the FEF, accesses premotor orienting circuits in the brain stem and spinal cord in a similar manner.
Affiliation(s)
- Sharon L. Cushing
- Department of Otolaryngology-Head and Neck Surgery, Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada
- Brian D. Corneil
- Graduate Program in Neuroscience, Departments of Physiology and Pharmacology and Psychology, University of Western Ontario, London; Centre for Brain and Mind, Robarts Research Institute, London, Ontario, Canada
126
Intrinsic reference frames of superior colliculus visuomotor receptive fields during head-unrestrained gaze shifts. J Neurosci 2012; 31:18313-26. [PMID: 22171035] [DOI: 10.1523/jneurosci.0990-11.2011]
Abstract
A sensorimotor neuron's receptive field and its frame of reference are easily conflated within the natural variability of spatial behavior. Here, we capitalized on natural variations in 3-D eye and head positions during head-unrestrained gaze shifts to visual targets in two monkeys to determine whether intermediate/deep layer superior colliculus (SC) receptive fields code visual targets or gaze kinematics, within four different frames of reference. Visuomotor receptive fields were either characterized during gaze shifts to visual targets from a central fixation position (32 units) or partially characterized from each of three initial fixation points (31 units). Natural variations of initial 3-D gaze and head orientation (including torsion) provided spatial separation between four different coordinate frame models (space, head, eye, fixed-vector relative to fixation), whereas natural saccade errors provided spatial separation between target and gaze positions. Using a new statistical method based on the predictive sum-of-squares, we found in our population of 63 neurons that (1) receptive field fits to target positions were significantly better than fits to actual gaze shift locations and (2) eye-centered models gave significantly better fits than the head or space frames. An intermediate-frames analysis confirmed that individual neuron fits clustered around target-in-eye coordinates. Gaze position "gain" effects with the spatial tuning required for a 3-D reference frame transformation were significant in 23% (7/31) of the neurons tested. We conclude that the SC primarily represents gaze targets relative to the eye but also carries early signatures of the 3-D sensorimotor transformation.
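The frame-comparison logic behind this kind of analysis can be illustrated with simulated data. The toy below uses in-sample polynomial fits rather than the paper's cross-validated predictive sum-of-squares, and every parameter is invented:

```python
import numpy as np

# Toy illustration of frame-comparison logic, not the study's actual
# statistics: simulate a neuron tuned in eye-centered coordinates, then
# compare tuning-curve fits computed in the eye frame versus the space
# frame. (The paper scores models with a cross-validated predictive
# sum-of-squares; plain residual sum-of-squares is used here for brevity.)
rng = np.random.default_rng(0)
n = 300
eye = rng.uniform(-10.0, 10.0, n)           # initial eye position (deg)
target_space = rng.uniform(-20.0, 20.0, n)  # target position in space
target_eye = target_space - eye             # target in eye coordinates

# Simulated firing rate: Gaussian tuning in *eye-centered* coordinates
rate = np.exp(-0.5 * (target_eye / 10.0) ** 2) + 0.05 * rng.standard_normal(n)

def residual_ss(x, y, deg=6):
    """Residual sum of squares of a polynomial tuning fit in frame x."""
    coef = np.polyfit(x, y, deg)
    return float(np.sum((y - np.polyval(coef, x)) ** 2))

ss_eye = residual_ss(target_eye, rate)      # candidate frame 1
ss_space = residual_ss(target_space, rate)  # candidate frame 2
# ss_eye comes out much smaller: fits in the neuron's true (eye-centered)
# frame explain the responses far better than fits in the space frame.
```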
127
The role of visual experience for the neural basis of spatial cognition. Neurosci Biobehav Rev 2012; 36:1179-87. [PMID: 22330729] [DOI: 10.1016/j.neubiorev.2012.01.008]
Abstract
Blindness often results in adaptive neural reorganization of the remaining modalities, producing sharper auditory and haptic behavioral performance. Yet the non-visual modalities might not be able to fully compensate for the lack of visual experience, as in the case of congenital blindness. For example, developmental visual experience seems to be necessary for the maturation of multisensory neurons for spatial tasks. Additionally, vision's ability to convey information in parallel may be the main attribute that the spared modalities cannot fully compensate for. The lack of visual experience might therefore impair all spatial tasks that require the integration of inputs from different modalities, such as having to represent a set of objects on the basis of the spatial relationships among the objects, rather than the spatial relationship that each object has with oneself. Here we integrate behavioral and neural evidence to conclude that visual experience is necessary for the neural development of normal spatial cognition.
128
Schneegans S, Schöner G. A neural mechanism for coordinate transformation predicts pre-saccadic remapping. Biol Cybern 2012; 106:89-109. [PMID: 22481644] [DOI: 10.1007/s00422-012-0484-8]
Abstract
Whenever we shift our gaze, any location information encoded in the retinocentric reference frame that is predominant in the visual system is obliterated. How is spatial memory retained across gaze changes? Two different explanations have been proposed: Retinocentric information may be transformed into a gaze-invariant representation through a mechanism consistent with gain fields observed in parietal cortex, or retinocentric information may be updated in anticipation of the shift expected with every gaze change, a proposal consistent with neural observations in LIP. The explanations were considered incompatible with each other, because retinocentric update is observed before the gaze shift has terminated. Here, we show that a neural dynamic mechanism for coordinate transformation can also account for retinocentric updating. Our model postulates an extended mechanism of reference frame transformation that is based on bidirectional mapping between a retinocentric and a body-centered representation and that enables transforming multiple object locations in parallel. The dynamic coupling between the two reference frames generates a shift of the retinocentric representation for every gaze change. We account for the predictive nature of the observed remapping activity by using the same kind of neural mechanism to generate an internal representation of gaze direction that is predictively updated based on corollary discharge signals. We provide evidence for the model by accounting for a series of behavioral and neural experimental observations.
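The model's bidirectional transformation and its remapping consequence can be reduced to one-dimensional arithmetic. This is an illustrative reduction, not the authors' dynamic neural field implementation:

```python
# One-dimensional caricature of the model's core idea (an illustrative
# reduction, not the authors' neural-field implementation). The
# bidirectional transformation is just body = retinal + gaze; predictive
# remapping falls out when corollary discharge updates the internal gaze
# estimate *before* the eye actually moves.

def to_body(retinal, gaze):
    return retinal + gaze

def to_retinal(body, gaze):
    return body - gaze

stimulus_retinal = 5.0     # remembered stimulus, 5 deg right of fovea
gaze_before = 0.0
body = to_body(stimulus_retinal, gaze_before)  # gaze-invariant store

# Corollary discharge announces an upcoming 10-deg rightward saccade,
# so the internal gaze estimate is updated predictively:
gaze_predicted = 10.0
remapped = to_retinal(body, gaze_predicted)
# remapped == -5.0: the retinocentric trace shifts opposite to the gaze
# change before the saccade lands, i.e. pre-saccadic remapping.
```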
129
Decoding effector-dependent and effector-independent movement intentions from human parieto-frontal brain activity. J Neurosci 2012; 31:17149-68. [PMID: 22114283] [DOI: 10.1523/jneurosci.1058-11.2011]
Abstract
Our present understanding of the neural mechanisms and sensorimotor transformations that govern the planning of arm and eye movements predominantly comes from invasive parieto-frontal neural recordings in nonhuman primates. While functional MRI (fMRI) has motivated investigations of many of these same issues in humans, the highly distributed and multiplexed organization of parieto-frontal neurons necessarily constrains the types of intention-related signals that can be detected with traditional fMRI analysis techniques. Here we employed multivoxel pattern analysis (MVPA), a multivariate technique sensitive to spatially distributed fMRI patterns, to provide a more detailed understanding of how hand and eye movement plans are coded in human parieto-frontal cortex. Subjects performed an event-related delayed movement task requiring that a reach or saccade be planned and executed toward one of two spatial target positions. We show with MVPA that, even in the absence of signal amplitude differences, the fMRI spatial activity patterns preceding movement onset are predictive of upcoming reaches and saccades and their intended directions. Within certain parieto-frontal regions we show that these predictive activity patterns reflect a similar spatial target representation for the hand and eye. Within some of the same regions, we further demonstrate that these preparatory spatial signals can be discriminated from nonspatial, effector-specific signals. In contrast to the largely graded effector- and direction-related planning responses found with fMRI subtraction methods, these results reveal considerable consensus with the parieto-frontal network organization suggested by primate neurophysiology, and specifically show how predictive spatial and nonspatial movement information coexists within single human parieto-frontal areas.
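The amplitude-versus-pattern distinction at the heart of this approach can be sketched with a toy classifier. All data below are simulated, and the nearest-centroid correlation classifier is a generic stand-in for the study's actual MVPA pipeline:

```python
import numpy as np

# Hedged sketch of the MVPA logic on simulated data (all numbers invented,
# not the study's fMRI data): two planned movements evoke the *same* mean
# amplitude but distinct spatial voxel patterns, so subtraction finds
# nothing while a pattern classifier succeeds.
rng = np.random.default_rng(1)
n_vox, n_trials = 50, 40
pattern_reach = rng.standard_normal(n_vox)
pattern_saccade = rng.standard_normal(n_vox)
pattern_reach -= pattern_reach.mean()      # equalize mean amplitude:
pattern_saccade -= pattern_saccade.mean()  # both conditions average to 0

def make_trials(pattern, n):
    """Noisy single-trial delay-period patterns around a condition mean."""
    return pattern + 0.8 * rng.standard_normal((n, n_vox))

X = np.vstack([make_trials(pattern_reach, n_trials),
               make_trials(pattern_saccade, n_trials)])
y = np.array([0] * n_trials + [1] * n_trials)

# Split-half nearest-centroid (correlation) classifier
train, test = np.arange(0, len(y), 2), np.arange(1, len(y), 2)
centroids = [X[train][y[train] == c].mean(axis=0) for c in (0, 1)]
pred = [int(np.corrcoef(x, centroids[1])[0, 1] >
            np.corrcoef(x, centroids[0])[0, 1]) for x in X[test]]
accuracy = float(np.mean(np.array(pred) == y[test]))
# accuracy lands well above chance (0.5) despite identical mean amplitudes
```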
130
Yoder RM, Clark BJ, Taube JS. Origins of landmark encoding in the brain. Trends Neurosci 2011; 34:561-71. [PMID: 21982585] [PMCID: PMC3200508] [DOI: 10.1016/j.tins.2011.08.004]
Abstract
The ability to perceive one's position and directional heading relative to landmarks is necessary for successful navigation within an environment. Recent studies have shown that the visual system dominantly controls the neural representations of directional heading and location when familiar visual cues are available, and several neural circuits, or streams, have been proposed to be crucial for visual information processing. Here, we summarize the evidence that the dorsal presubiculum (also known as the postsubiculum) is critically important for the direct transfer of visual landmark information to spatial signals within the limbic system.
Affiliation(s)
- Jeffrey S. Taube
- Department of Psychological and Brain Sciences, Center for Cognitive Neuroscience, Dartmouth College
131
Li P, Abarbanell L, Gleitman L, Papafragou A. Spatial reasoning in Tenejapan Mayans. Cognition 2011; 120:33-53. [PMID: 21481854] [PMCID: PMC3095761] [DOI: 10.1016/j.cognition.2011.02.012]
Abstract
Language communities differ in their stock of reference frames (coordinate systems for specifying locations and directions). English typically uses egocentrically-defined axes (e.g., "left-right"), especially when describing small-scale relationships. Other languages such as Tseltal Mayan prefer to use geocentrically-defined axes (e.g., "north-south") and do not use any type of projective body-defined axes. It has been argued that the availability of specific frames of reference in language determines the availability or salience of the corresponding spatial concepts. In four experiments, we explored this hypothesis by testing Tseltal speakers' spatial reasoning skills. Whereas most prior tasks in this domain were open-ended (allowing several correct solutions), the present tasks required a unique solution that favored adopting a frame-of-reference that was either congruent or incongruent with what is habitually lexicalized in the participants' language. In these tasks, Tseltal speakers easily solved the language-incongruent problems, and performance was generally more robust for these than for the language-congruent problems that favored geocentrically-defined coordinates. We suggest that listeners' probabilistic inferences when instruction is open to more than one interpretation account for why there are greater cross-linguistic differences in the solutions to open-ended spatial problems than to less ambiguous ones.
Affiliation(s)
- Peggy Li
- Laboratory for Developmental Studies, Harvard University, 25 Francis Ave., Cambridge, MA 02138, USA.
132
St-Laurent M, Abdi H, Burianová H, Grady CL. Influence of aging on the neural correlates of autobiographical, episodic, and semantic memory retrieval. J Cogn Neurosci 2011; 23:4150-63. [PMID: 21671743] [DOI: 10.1162/jocn_a_00079]
Abstract
We used fMRI to assess the neural correlates of autobiographical, semantic, and episodic memory retrieval in healthy young and older adults. Participants were tested with an event-related paradigm in which retrieval demand was the only factor varying between trials. A spatio-temporal partial least squares analysis was conducted to identify the main patterns of activity characterizing the groups across conditions. We identified brain regions activated by all three memory conditions relative to a control condition. This pattern was expressed equally in both age groups and replicated previous findings obtained in a separate group of younger adults. We also identified regions whose activity differentiated among the memory conditions. These patterns of differentiation were expressed less strongly in the older adults than in the young adults, a finding further confirmed by a barycentric discriminant analysis. This analysis showed an age-related dedifferentiation in the autobiographical and episodic memory tasks but not in the semantic memory task or the control condition. These findings suggest that activation of a common memory retrieval network is maintained with age, whereas the specific aspects of brain activity that differ with memory content are more vulnerable and less selectively engaged in older adults. Our results provide a potential neural mechanism for the well-known age differences in episodic/autobiographical memory, and preserved semantic memory, observed when older adults are compared with younger adults.
Affiliation(s)
- Marie St-Laurent
- Department of Psychology, University of Toronto, 100 St. George Street, Toronto, ON, Canada.
133
Tziridis K, Dicke PW, Thier P. Pontine reference frames for the sensory guidance of movement. Cereb Cortex 2011; 22:345-62. [PMID: 21670098] [DOI: 10.1093/cercor/bhr109]
Abstract
The pontine nuclei (PN) are the major intermediary elements in the corticopontocerebellar pathway. Here we asked if the PN may help to adapt the spatial reference frames used by cerebrocortical neurons involved in the sensory guidance of movement to a format potentially more appropriate for the cerebellum. To this end, we studied movement-related neurons in the dorsal PN (DPN) of monkeys, most probably projecting to the cerebellum, executing fixed-vector saccades or, alternatively, fixed-vector hand reaches from different starting positions. The 83 task-related neurons considered fired movement-related bursts either before saccades (saccade-related, SR) or before hand movements (hand movement-related, HMR). About 40% of the SR neurons were "oculocentric," whereas the others were modulated by eye starting position. A third of the HMR neurons encoded hand reaches in hand-centered coordinates, whereas the remainder exhibited different types of dependencies on starting positions, reminiscent in general of cortical responses. All in all, pontine reference frames for the sensory guidance of movement seem to be very similar to those in cortex. Specifically, the frequency of orbital-position gain fields among SR neurons is identical in the DPN and in one of their major cortical inputs, the lateral intraparietal area (LIP).
Affiliation(s)
- Konstantin Tziridis
- Department for Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Otfried-Müller-Strasse 27, Tübingen 72076, Germany
134
Kaas JH, Gharbawie OA, Stepniewska I. The organization and evolution of dorsal stream multisensory motor pathways in primates. Front Neuroanat 2011; 5:34. [PMID: 21716641] [PMCID: PMC3116136] [DOI: 10.3389/fnana.2011.00034]
Abstract
In prosimian primates, New World monkeys, and Old World monkeys, microstimulation with half-second trains of electrical pulses identifies separate zones in posterior parietal cortex (PPC) where reaching, defensive, grasping, and other complex movements can be evoked. Each functional zone receives a different pattern of visual and somatosensory inputs, and projects preferentially to functionally matched parts of motor and premotor cortex. As PPC is a relatively small portion of cortex in most mammals, including the close relatives of primates, we suggest that a larger, more significant PPC emerged with the first primates as a region where several ethologically relevant behaviors could be initiated by sensory and intrinsic signals, and mediated via connections with premotor and motor cortex. While several classes of PPC modules appear to be retained by all primates, elaboration and differentiation of these modules likely occurred in some primates, especially humans.
Affiliation(s)
- Jon H Kaas
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
135
Ptak R. The frontoparietal attention network of the human brain: action, saliency, and a priority map of the environment. Neuroscientist 2011; 18:502-15. [PMID: 21636849] [DOI: 10.1177/1073858411409051]
Abstract
The dorsal convexity of the human frontal and parietal lobes forms a network that is crucially involved in the selection of sensory contents by attention. This network comprises cortex along the intraparietal sulcus, the inferior parietal lobe, and dorsal premotor cortex, including the frontal eye field. These regions are richly interconnected with recurrent fibers passing through the superior longitudinal fasciculus. The posterior parietal cortex has several functional characteristics, such as feature-independent coding, enhancement of activity by attention, representation of task-related signals, and access to multiple reference frames, that point to a central role of this region in the computation of a feature- and modality-independent priority map of the environment. The priority map integrates feature information elaborated in sensory cortex and top-down representations of behavioral goals and expectations originating in the dorsolateral prefrontal and premotor cortex. This review presents converging evidence from single-unit studies of the primate brain, functional neuroimaging, and investigations of neuropsychological disorders such as Bálint syndrome and spatial neglect for a decisive role of the frontoparietal attention network in the selection of relevant environmental information.
Affiliation(s)
- Radek Ptak
- Division of Neurorehabilitation, Geneva University Hospital, University of Geneva, Switzerland.
136
Abstract
The division of cortical visual processing into distinct dorsal and ventral streams is a key framework that has guided visual neuroscience. The characterization of the ventral stream as a 'What' pathway is relatively uncontroversial, but the nature of dorsal stream processing is less clear. Originally proposed as mediating spatial perception ('Where'), more recent accounts suggest it primarily serves non-conscious visually guided action ('How'). Here, we identify three pathways emerging from the dorsal stream that consist of projections to the prefrontal and premotor cortices, and a major projection to the medial temporal lobe that courses both directly and indirectly through the posterior cingulate and retrosplenial cortices. These three pathways support both conscious and non-conscious visuospatial processing, including spatial working memory, visually guided action and navigation, respectively.
137
Abstract
Planning spatial paths through our environment is an important part of everyday life and is supported by a neural system including the hippocampus and prefrontal cortex. Here we investigated the precise functional roles of the components of this system in humans by using fMRI as participants performed a simple goal-directed route-planning task. Participants had to choose the shorter of two routes to a goal in a visual scene that might contain a barrier blocking the most direct route, requiring a detour, or might be obscured by a curtain, requiring memory for the scene. The participant's start position was varied to parametrically manipulate their proximity to the goal and the difference in length of the two routes. Activity in medial prefrontal cortex, precuneus, and left posterior parietal cortex was associated with detour planning, regardless of difficulty, whereas activity in parahippocampal gyrus was associated with remembering the spatial layout of the visual scene. Activity in bilateral anterior hippocampal formation showed a strong increase the closer the start position was to the goal, together with medial prefrontal, medial and posterior parietal cortices. Our results are consistent with computational models in which goal proximity is used to guide subsequent navigation and with the association of anterior hippocampal areas with nonspatial functions such as arousal and reward expectancy. They illustrate how spatial and nonspatial functions combine within the anterior hippocampus, and how these functions interact with parahippocampal, parietal, and prefrontal areas in decision making and mnemonic function.
138
Medendorp WP. Spatial constancy mechanisms in motor control. Philos Trans R Soc Lond B Biol Sci 2011; 366:476-91. [PMID: 21242137] [DOI: 10.1098/rstb.2010.0089]
Abstract
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye-head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals.
Affiliation(s)
- W Pieter Medendorp
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, PO Box 9104, NL-6500 HE Nijmegen, The Netherlands.
139
Abstract
Our vision remains stable even though the movements of our eyes, head and bodies create a motion pattern on the retina. One of the most important, yet basic, feats of the visual system is to correctly determine whether this retinal motion is owing to real movement in the world or rather our own self-movement. This problem has occupied many great thinkers, such as Descartes and Helmholtz, at least since the time of Alhazen. This theme issue brings together leading researchers from animal neurophysiology, clinical neurology, psychophysics and cognitive neuroscience to summarize the state of the art in the study of visual stability. Recently, there has been significant progress in understanding the limits of visual stability in humans and in identifying many of the brain circuits involved in maintaining a stable percept of the world. Clinical studies and new experimental methods, such as transcranial magnetic stimulation, now make it possible to test the causal role of different brain regions in creating visual stability and also allow us to measure the consequences when the mechanisms of visual stability break down.
Affiliation(s)
- David Melcher
- Faculty of Cognitive Science, University of Trento, Italy.
140
Zimmermann E, Lappe M. Eye position effects in oculomotor plasticity and visual localization. J Neurosci 2011; 31:7341-8. [PMID: 21593318] [PMCID: PMC6622596] [DOI: 10.1523/jneurosci.6112-10.2011]
Abstract
For visual localization to remain accurate across changes of gaze, a signal representing the position of the eye in the orbit is needed to code spatial locations in a reference frame that is independent of retinal displacements. Here we report evidence that the localization of visual objects in space is coded in an extraretinal reference frame. In human subjects, we used outward saccadic adaptation, which can be induced artificially by a systematic displacement of the saccade target. This form of oculomotor plasticity is accompanied by changes in spatial perception, thus highlighting the relevance of saccade metrics for visual localization. We tested the reference frame of outward adaptation for reactive and scanning saccades and visual localization. For scanning saccades, adaptation magnitude was drastically reduced at positions distant from the adapted eye position. Changes in visual localization showed a very similar modulation by eye position. These results suggest that scanning saccade adaptation is encoded in a nonretinotopic reference frame. Eye position effects for reactive saccade adaptation were smaller, and the induced mislocalization did not vary significantly between eye positions. The different modulation of reactive and scanning saccade adaptation supports the idea that oculomotor plasticity can occur at multiple sites in the brain. The findings are also consistent with previous evidence for a stronger influence of scanning saccade adaptation on the visual localization of objects in space.
Affiliation(s)
- Eckart Zimmermann
- Department of Psychology, Università degli Studi di Firenze, 50135 Florence, Italy.
141
Neural correlates of binding features within- or cross-dimensions in visual conjunction search: an fMRI study. Neuroimage 2011; 57:235-241. [PMID: 21539923] [DOI: 10.1016/j.neuroimage.2011.04.024]
Abstract
The fMRI technique was used to investigate the functional neuroanatomy of binding features within- or cross-dimension during visual conjunction search. Participants were asked to perform feature search (FS; e.g., search for a vertical bar among tilted bars), within-dimension search (WS; e.g., search for an upright T among non-target oriented Ts and Ls), cross-dimension search (CS; e.g., search for an orange vertical bar among blue vertical bars and orange tilted bars), and complex search combining within- and cross-dimension features (WCS; e.g., search for an orange upright T among orange leftward Ts and blue Ls). Reaction times (RTs) taken to decide whether a target was present or absent were faster in the FS than in the WS, CS, and WCS conditions, but did not differ between the latter three conditions. Neuroimaging results revealed a set of fronto-parietal regions, including frontal eye field and intraparietal sulcus, to be consistently activated in conjunction search (WS, CS, and WCS) relative to feature search, suggesting that these regions play a more prominent role in matching visual input against the target template in conjunction search. Furthermore, left occipito-temporal cortex was more activated in within-dimension conjunction search, and bilateral intraparietal sulci were more activated in cross-dimension conjunction search. This suggests that features from the same dimension are 'bound' at a higher stage of the ventral pathway by conjoining the inputs from lower-level neurons, whereas neurons along the intraparietal sulcus appear to be necessary for discerning the presence of cross-dimensional conjunctions.
142
Xu Y, Wang X, Peck C, Goldberg ME. The time course of the tonic oculomotor proprioceptive signal in area 3a of somatosensory cortex. J Neurophysiol 2011; 106:71-7. [PMID: 21346201] [DOI: 10.1152/jn.00668.2010]
Abstract
A proprioceptive representation of eye position exists in area 3a of primate somatosensory cortex (Wang X, Zhang M, Cohen IS, Goldberg ME. Nat Neurosci 10: 640-646, 2007). This eye position signal is consistent with a fusimotor response (Taylor A, Durbaba R, Ellaway PH, Rawlinson S. J Physiol 571: 711-723, 2006) and has two components during a visually guided saccade task: a short-latency phasic response followed by a tonic response. While the early phasic response can be excitatory or inhibitory, it does not accurately reflect the eye's orbital position. The late tonic response appears to carry the proprioceptive eye position signal, but it is not clear when this component emerges and whether the onset of this signal is reliable. To test the temporal dynamics of the tonic proprioceptive signal, we used an oculomotor smooth pursuit task in which saccadic eye movements and phasic proprioceptive responses are suppressed. Our results show that the tonic proprioceptive eye position signal consistently lags the actual eye position in the orbit by ~60 ms under a variety of eye movement conditions. To confirm the proprioceptive nature of this signal, we also studied the responses of neurons in a vestibuloocular reflex (VOR) task in which the direction of gaze was held constant; response profiles and delay times were similar in this task, suggesting that this signal does not represent angle of gaze and does not receive visual or vestibular inputs. The length of the delay suggests that the proprioceptive eye position signal is unlikely to be used for online visual processing for action, although it could be used to calibrate an efference copy signal.
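The ~60 ms lag reported in this abstract is the kind of fixed delay that can be recovered by aligning a proprioceptive estimate against the actual eye-position trace. A toy sketch of that logic (illustrative only, not the paper's analysis; the signal, delay, and alignment method are all assumptions):

```python
import numpy as np

dt = 0.001                                  # 1 ms resolution
t = np.arange(0.0, 2.0, dt)
eye = 10.0 * np.sin(2 * np.pi * 0.5 * t)    # smooth-pursuit-like trace (deg)

lag_samples = 60                            # simulated 60 ms proprioceptive lag
proprio = np.roll(eye, lag_samples)
proprio[:lag_samples] = eye[0]              # pad the start (no wrap-around)

# Recover the lag: find the shift that best realigns the two signals.
shifts = np.arange(0, 200)
errors = [np.mean((eye[:-200] - proprio[s:s + len(eye) - 200]) ** 2)
          for s in shifts]
best = shifts[int(np.argmin(errors))]
recovered_ms = best * dt * 1000             # recovered lag in ms
```

With a delay this long, the signal arrives too late for online visual control, which is why the authors suggest a calibration role instead.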
Affiliation(s)
- Yixing Xu
- Mahoney-Keck Center for Brain and Behavior Research, Department of Neuroscience, Columbia University College of Physicians and Surgeons, New York, New York, USA
143
Abstract
We propose a concise novel conceptual and biological framework for the analysis of primary visual perception (PVP) that refers to the most basic levels of our awake subjective visual experiences. Neural representations for image content elaborated within V1/V2 and the early occipitotemporal (ventral) loop remain only latent with respect to PVP until spatially localized with respect to an attending observer. This process requires more than the downstream deployment of attentional resources onto targeted neurons. Additionally, the source neurons for such processes must be linked to a neural representation subserving a first-person perspective. We hypothesize that the simultaneous emergence of both the perceptual experience of image content and the personal inference of its ownership requires the resolution of any conflicting neuronal signaling between afferent and recurrent projections within and between both the ventral and dorsal streams. The V1/V2 complex and ventral cortical areas V3 and the V4 complex together with dorsal cortical areas LIP, VIP, and 7a with additional contributions from the motion areas V5/MT (middle temporal area), FST (fundus of superior temporal area), and MST (medial superior temporal area) together with their subcortical dependencies have the physiological properties required to constitute a "posterior perceptual core" that encodes the normal primary perceptual experience of image content, space, and sense of minimal self.
Affiliation(s)
- Daniel A Pollen
- Department of Neurology, University of Massachusetts Medical School, Worcester, MA 01655, USA.
144
Golomb JD, Albrecht AR, Park S, Chun MM. Eye movements help link different views in scene-selective cortex. Cereb Cortex 2011; 21:2094-102. [PMID: 21282320] [DOI: 10.1093/cercor/bhq292]
Abstract
To explore visual scenes in the everyday world, we constantly move our eyes, yet most neural studies of scene processing are conducted with the eyes held fixated. Such prior work in humans suggests that the parahippocampal place area (PPA) represents scenes in a highly specific manner that can differentiate between different but overlapping views of a panoramic scene. Using functional magnetic resonance imaging (fMRI) adaptation to measure sensitivity to change, we asked how this specificity is affected when active eye movements across a stable scene generate retinotopically different views. The PPA adapted to successive views when subjects made a series of saccades across a stationary spatiotopic scene but not when the eyes remained fixed and a scene translated in the background, suggesting that active vision may provide important cues for the PPA to integrate different views over time as the "same." Adaptation was also robust when retinotopic information was preserved across views when the scene moved in tandem with the eyes. These data suggest that retinotopic physical similarity is fundamental, but the visual system may also utilize oculomotor cues and/or global spatiotopic information to generate more ecologically relevant representations of scenes across different views.
Affiliation(s)
- Julie D Golomb
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
145
Prevosto V, Graf W, Ugolini G. Proprioceptive pathways to posterior parietal areas MIP and LIPv from the dorsal column nuclei and the postcentral somatosensory cortex. Eur J Neurosci 2011; 33:444-60. [PMID: 21226771] [DOI: 10.1111/j.1460-9568.2010.07541.x]
Abstract
The posterior parietal cortex (PPC) serves as an interface between sensory and motor cortices by integrating multisensory signals with motor-related information. Sensorimotor transformation of somatosensory signals is crucial for the generation and updating of body representations and movement plans. Using retrograde transneuronal transfer of rabies virus in combination with a conventional tracer, we identified direct and polysynaptic somatosensory pathways to two posterior parietal areas, the ventral lateral intraparietal area (LIPv) and the rostral part of the medial intraparietal area (MIP) in macaque monkeys. In addition to direct projections from somatosensory areas 2v and 3a, respectively, we found that LIPv and MIP receive disynaptic inputs from the dorsal column nuclei as directly as these somatosensory areas, via a parallel channel. LIPv is the target of minor neck muscle-related projections from the cuneate (Cu) and the external cuneate nuclei (ECu), and direct projections from area 2v, that likely carry kinesthetic/vestibular/optokinetic-related signals. In contrast, MIP receives major arm and shoulder proprioceptive inputs disynaptically from the rostral Cu and ECu, and trisynaptically (via area 3a) from caudal portions of these nuclei. These findings have important implications for the understanding of the influence of proprioceptive information on movement control operations of the PPC and the formation of body representations. They also help to explain the specific deficits of proprioceptive guidance of movement associated with optic ataxia.
Affiliation(s)
- Vincent Prevosto
- Laboratoire de Neurobiologie Cellulaire et Moléculaire (NBCM), FRE3295 CNRS, 91198 Gif sur Yvette, France
146
Marigold DS, Andujar JE, Lajoie K, Drew T. Chapter 6: Motor planning of locomotor adaptations on the basis of vision: the role of the posterior parietal cortex. Prog Brain Res 2011; 188:83-100. [PMID: 21333804] [DOI: 10.1016/b978-0-444-53825-3.00011-5]
Abstract
In this chapter, we consider the contribution of the posterior parietal cortex (PPC) to obstacle avoidance behavior and we define a model that identifies the major planning processes that are required for this task. A key aspect of this planning process is the need to integrate information concerning the obstacle, obtained from vision, together with an estimation of body and limb state. We suggest that the PPC makes a major contribution to this process during visually guided locomotion. We present evidence from lesion and single unit recording experiments in the cat that are compatible with this viewpoint.
Affiliation(s)
- Daniel S Marigold
- Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, British Columbia, Canada
147
Hoffmann M, Marques H, Arieta A, Sumioka H, Lungarella M, Pfeifer R. Body schema in robotics: a review. IEEE Trans Auton Ment Dev 2010. [DOI: 10.1109/tamd.2010.2086454]
148
Spatial and non-spatial functions of the parietal cortex. Curr Opin Neurobiol 2010; 20:731-40. [PMID: 21050743] [DOI: 10.1016/j.conb.2010.09.015]
Abstract
Although the parietal cortex is traditionally associated with spatial attention and sensorimotor integration, recent evidence also implicates it in higher order cognitive functions. We review relevant results from neuron recording studies showing that inferior parietal neurons integrate information regarding target location with a variety of non-spatial signals. Some of these signals are modulatory and alter a stimulus-evoked response according to the action, category, or reward associated with the stimulus. Other non-spatial inputs act independently, encoding the context or rules of a task even before the presentation of a specific target. Despite the ubiquity of non-spatial information in individual neurons, reversible inactivation of the parietal lobe affects only spatial orienting of attention and gaze, but not non-spatial aspects of performance. This suggests that non-spatial signals contribute to an underlying spatial computation, possibly allowing the brain to determine which targets are worthy of attention or action in a given task context.
149
Attentional facilitation throughout human visual cortex lingers in retinotopic coordinates after eye movements. J Neurosci 2010; 30:10493-506. [PMID: 20685992] [DOI: 10.1523/jneurosci.1546-10.2010]
Abstract
With each eye movement, the image of the world received by the visual system changes dramatically. To maintain stable spatiotopic (world-centered) visual representations, the retinotopic (eye-centered) coordinates of visual stimuli are continually remapped, even before the eye movement is completed. Recent psychophysical work has suggested that updating of attended locations occurs as well, although on a slower timescale, such that sustained attention lingers in retinotopic coordinates for several hundred milliseconds after each saccade. To explore where and when this "retinotopic attentional trace" resides in the cortical visual processing hierarchy, we conducted complementary functional magnetic resonance imaging and event-related potential (ERP) experiments using a novel gaze-contingent task. Human subjects executed visually guided saccades while covertly monitoring a fixed spatiotopic target location. Although subjects responded only to stimuli appearing at the attended spatiotopic location, blood oxygen level-dependent responses to stimuli appearing after the eye movement at the previously, but no longer, attended retinotopic location were enhanced in visual cortical area V4 and throughout visual cortex. This retinotopic attentional trace was also detectable with higher temporal resolution in the anterior N1 component of the ERP data, a well established signature of attentional modulation. Together, these results demonstrate that, when top-down spatiotopic signals act to redirect visuospatial attention to new retinotopic locations after eye movements, facilitation transiently persists in the cortical regions representing the previously relevant retinotopic location.
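The retinotopic/spatiotopic bookkeeping described in this abstract reduces to a simple coordinate identity: spatiotopic position = gaze direction + retinotopic position. A one-dimensional toy sketch (illustrative only, not the study's analysis) of why an attention pointer that is not remapped ends up at the wrong world location after a saccade:

```python
# Toy coordinate bookkeeping (1-D, degrees): a world-fixed location has
# spatiotopic coordinate s; its retinotopic coordinate is r = s - gaze.

def retinotopic(spatiotopic, gaze):
    return spatiotopic - gaze

gaze_before, gaze_after = 0.0, 8.0      # an 8-degree rightward saccade
target_spatiotopic = 5.0                # attended world-fixed location

r_before = retinotopic(target_spatiotopic, gaze_before)   # retinal +5 deg
r_after  = retinotopic(target_spatiotopic, gaze_after)    # retinal -3 deg

# Correct remapping shifts the attention pointer by the saccade vector
# (from r_before to r_after). A lingering "retinotopic trace" keeps
# pointing at the old retinal location, which after the saccade
# corresponds to a different place in the world:
stale_world_location = gaze_after + r_before              # 13, not 5
```

On this account, the facilitation the study observes at the previously attended retinotopic location is exactly the `stale_world_location` case: the pointer is correct in retinal coordinates but briefly wrong in world coordinates.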
150
Abstract
Repetitive experience with the same visual stimulus and task can remarkably improve behavioral performance on the task. This well-known perceptual-learning phenomenon is usually specific to the trained retinal- or visual-field location, which is taken as an indication of plastic changes in retinotopic visual areas. In previous studies of perceptual learning, however, a change in stimulus location on the retina is accompanied by positional changes of the stimulus in nonretinotopic frames of reference, such as relative to the head and other objects. It is unclear, therefore, whether the putative location specificity is exclusively retinotopic or if it could also depend on nonretinotopic representation of the stimulus, which is particularly important for multisensory and sensorimotor integration as well as for maintenance of stable visual percepts. Here, by manipulating subjects' gaze direction to control spatial and retinal locations of stimuli independently, we found that, when the stimulated retinal regions were held constant, the improvement with training in motion-direction discrimination of two successively displayed stimuli was restricted to the relative spatial position of the stimuli but independent of their absolute locations in head- and world-centered frames. These findings indicate location specificity of perceptual learning beyond the retinotopic frame of reference, suggesting a pliable spatiotopic mechanism that can be specifically shaped by experience for better spatiotemporal integration of the learned stimuli.