1
Baltaretu BR, Stevens WD, Freud E, Crawford JD. Occipital and parietal cortex participate in a cortical network for transsaccadic discrimination of object shape and orientation. Sci Rep 2023; 13:11628. PMID: 37468709. DOI: 10.1038/s41598-023-38554-3.
Abstract
Saccades change eye position and interrupt vision several times per second, necessitating neural mechanisms for continuous perception of object identity, orientation, and location. Neuroimaging studies suggest that occipital and parietal cortex play complementary roles in transsaccadic perception of intrinsic versus extrinsic spatial properties; e.g., dorsomedial occipital cortex (cuneus) is sensitive to changes in spatial frequency, whereas the supramarginal gyrus (SMG) is modulated by changes in object orientation. Based on this, we hypothesized that both structures would be recruited to simultaneously monitor object identity and orientation across saccades. To test this, we merged two previous neuroimaging protocols: 21 participants viewed a 2D object and then, after sustained fixation or a saccade, judged whether the shape or orientation of the re-presented object had changed. We then performed a bilateral region-of-interest analysis on the identified cuneus and SMG sites. As hypothesized, cuneus showed both saccade and feature (i.e., object orientation vs. shape change) modulations, and right SMG showed saccade-feature interactions. Further, the cuneus activity time course correlated with that of several other cortical saccade/visual areas, suggesting a 'functional network' for feature discrimination. These results confirm the involvement of occipital/parietal cortex in transsaccadic vision and support complementary roles in spatial versus identity updating.
Affiliation(s)
- B R Baltaretu
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada.
- Department of Biology, York University, Toronto, ON, M3J 1P3, Canada.
- Department of Psychology, Justus-Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394, Giessen, Hesse, Germany.
- W Dale Stevens
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada
- Department of Psychology and Neuroscience Graduate Diploma Program, York University, Toronto, ON, M3J 1P3, Canada
- E Freud
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada
- Department of Psychology and Neuroscience Graduate Diploma Program, York University, Toronto, ON, M3J 1P3, Canada
- J D Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada
- Department of Biology, York University, Toronto, ON, M3J 1P3, Canada
- Department of Psychology and Neuroscience Graduate Diploma Program, York University, Toronto, ON, M3J 1P3, Canada
- School of Kinesiology and Health Sciences, York University, Toronto, ON, M3J 1P3, Canada
2
Blouin J, Pialasse JP, Mouchnino L, Simoneau M. On the Dynamics of Spatial Updating. Front Neurosci 2022; 16:780027. PMID: 35250442. PMCID: PMC8893203. DOI: 10.3389/fnins.2022.780027.
Abstract
Most of our knowledge of the human neural bases of spatial updating comes from functional magnetic resonance imaging (fMRI) studies in which recumbent participants moved in virtual environments. As a result, little is known about the dynamics of spatial updating during real body motion. Here, we exploited the high temporal resolution of electroencephalography (EEG) to investigate the dynamics of cortical activation in a spatial updating task in which participants had to remember their initial orientation while they were passively rotated about their vertical axis in the dark. After the rotations, the participants pointed toward their initial orientation. We contrasted the EEG signals with those recorded in a control condition in which participants had no cognitive task to perform during body rotations. We found that the amplitude of the P1N1 complex of the rotation-evoked potentials (RotEPs, recorded over the vertex) was significantly greater in the updating task. Analyses of the cortical current in the source space revealed that the main significant task-related cortical activities started during the N1P2 interval (136–303 ms after rotation onset). They were essentially localized in the temporal and frontal (supplementary motor complex, dorsolateral prefrontal cortex, anterior prefrontal cortex) regions. During this time window, the right superior posterior parietal cortex (PPC) also showed significant task-related activity. The increased activation of the PPC became bilateral over the P2N2 component (303–470 ms after rotation onset). In this late interval, the cuneus and precuneus started to show significant task-related activities. Together, the present results are consistent with the general scheme that the first task-related cortical activities during spatial updating are related to the encoding of spatial goals and to the storing of spatial information in working memory. These activities would precede those involved in higher-order processes, linked to the egocentric and visual representations of the environment, that are also relevant for updating body orientation during rotations.
Affiliation(s)
- Jean Blouin
- Laboratoire de Neurosciences Cognitives, CNRS, Aix-Marseille Université, Marseille, France
- Laurence Mouchnino
- Laboratoire de Neurosciences Cognitives, CNRS, Aix-Marseille Université, Marseille, France
- Institut Universitaire de France, Paris, France
- Martin Simoneau
- Département de Kinésiologie, Faculté de Médecine, Université Laval, Québec, QC, Canada
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale du CIUSSS de la Capitale-Nationale, Québec, QC, Canada
3
Abstract
SIGNIFICANCE: After a 30-year gap, several studies on head and eye movements and gaze tracking in baseball batting have been performed in the last decade. These baseball studies may lead to training protocols for batting. Here we review these studies and compare the tracking behaviors with those in other sports. Baseball batters are often instructed to "keep your eye on the ball." Until recently, the evidence regarding whether batters follow this instruction, and whether there are benefits to doing so, was limited. Baseball batting studies demonstrate that batters tend to move the head more than the eyes in the direction of the ball, at least until a saccade occurs. Foveal gaze tracking is often maintained on the ball through the early portion of the pitch, so it can be said that baseball batters do keep their eyes on the ball. While batters place gaze at or near the point of bat-ball contact, the way this is accomplished varies: in some studies, foveal gaze tracking continues late in the pitch trajectory, whereas in others, anticipatory saccades occur. The relative advantages of these discrepant gaze strategies for perceptual processing and for the speed and accuracy of motor planning are discussed, and other variables that may influence anticipatory saccades, including the predictability of the pitch and the level of batter expertise, are described. Further studies involving larger groups with different levels of expertise under game conditions are required to determine which gaze tracking strategies are most beneficial for baseball batting.
4
Full gaze contingency provides better reading performance than head steering alone in a simulation of prosthetic vision. Sci Rep 2021; 11:11121. PMID: 34045485. PMCID: PMC8160142. DOI: 10.1038/s41598-021-86996-4.
Abstract
The visual pathway is retinotopically organized and sensitive to gaze position, leading us to hypothesize that subjects using visual prostheses incorporating eye position would perform better on perceptual tasks than with devices that are merely head-steered. We had sighted subjects read sentences from the MNREAD corpus through a simulation of artificial vision under conditions of full gaze compensation and of head-steered viewing. With 2000 simulated phosphenes, subjects (n = 23) were immediately able to read under full gaze compensation and were assessed at an equivalent visual acuity of 1.0 logMAR, but were nearly unable to perform the task under head-steered viewing. At the largest font size tested, 1.4 logMAR, subjects read at 59 WPM (50% of normal speed) with 100% accuracy under the full-gaze condition, but at 0.7 WPM (under 1% of normal) with less than 15% accuracy under head steering. We conclude that gaze-compensated prostheses are likely to produce considerably better patient outcomes than those not incorporating eye movements.
5
Occipital cortex is modulated by transsaccadic changes in spatial frequency: an fMRI study. Sci Rep 2021; 11:8611. PMID: 33883578. PMCID: PMC8060420. DOI: 10.1038/s41598-021-87506-2.
Abstract
Previous neuroimaging studies have shown that inferior parietal and ventral occipital cortex are involved in the transsaccadic processing of visual object orientation. Here, we investigated whether the same areas are also involved in transsaccadic processing of a different feature, namely, spatial frequency. We employed a functional magnetic resonance imaging paradigm where participants briefly viewed a grating stimulus with a specific spatial frequency that later reappeared with the same or different frequency, after a saccade or continuous fixation. First, using a whole-brain Saccade > Fixation contrast, we localized two frontal (left precentral sulcus and right medial superior frontal gyrus), four parietal (bilateral superior parietal lobule and precuneus), and four occipital (bilateral cuneus and lingual gyri) regions. Whereas the frontoparietal sites showed task specificity, the occipital sites were also modulated in a saccade control task. Only occipital cortex showed transsaccadic feature modulations, with significant repetition enhancement in right cuneus. These observations (parietal task specificity, occipital enhancement, right lateralization) are consistent with previous transsaccadic studies. However, the specific regions differed (ventrolateral for orientation, dorsomedial for spatial frequency). Overall, this study supports a general role for occipital and parietal cortex in transsaccadic vision, with a specific role for cuneus in spatial frequency processing.
6
Delaux A, de Saint Aubert JB, Ramanoël S, Bécu M, Gehrke L, Klug M, Chavarriaga R, Sahel JA, Gramann K, Arleo A. Mobile brain/body imaging of landmark-based navigation with high-density EEG. Eur J Neurosci 2021; 54:8256-8282. PMID: 33738880. PMCID: PMC9291975. DOI: 10.1111/ejn.15190.
Abstract
Coupling behavioral measures and brain imaging in naturalistic, ecological conditions is key to comprehend the neural bases of spatial navigation. This highly integrative function encompasses sensorimotor, cognitive, and executive processes that jointly mediate active exploration and spatial learning. However, most neuroimaging approaches in humans are based on static, motion-constrained paradigms and they do not account for all these processes, in particular multisensory integration. Following the Mobile Brain/Body Imaging approach, we aimed to explore the cortical correlates of landmark-based navigation in actively behaving young adults, solving a Y-maze task in immersive virtual reality. EEG analysis identified a set of brain areas matching state-of-the-art brain imaging literature of landmark-based navigation. Spatial behavior in mobile conditions additionally involved sensorimotor areas related to motor execution and proprioception usually overlooked in static fMRI paradigms. Expectedly, we located a cortical source in or near the posterior cingulate, in line with the engagement of the retrosplenial complex in spatial reorientation. Consistent with its role in visuo-spatial processing and coding, we observed an alpha-power desynchronization while participants gathered visual information. We also hypothesized behavior-dependent modulations of the cortical signal during navigation. Despite finding few differences between the encoding and retrieval phases of the task, we identified transient time-frequency patterns attributed, for instance, to attentional demand, as reflected in the alpha/gamma range, or memory workload in the delta/theta range. We confirmed that combining mobile high-density EEG and biometric measures can help unravel the brain structures and the neural modulations subtending ecological landmark-based navigation.
Affiliation(s)
- Alexandre Delaux
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Stephen Ramanoël
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Marcia Bécu
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Lukas Gehrke
- Institute of Psychology and Ergonomics, Technische Universität Berlin, Berlin, Germany
- Marius Klug
- Institute of Psychology and Ergonomics, Technische Universität Berlin, Berlin, Germany
- Ricardo Chavarriaga
- Center for Neuroprosthetics, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Zurich University of Applied Sciences, ZHAW Datalab, Winterthur, Switzerland
- José-Alain Sahel
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- CHNO des Quinze-Vingts, INSERM-DGOS CIC 1423, Paris, France
- Fondation Ophtalmologique Rothschild, Paris, France
- Department of Ophthalmology, The University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Klaus Gramann
- Institute of Psychology and Ergonomics, Technische Universität Berlin, Berlin, Germany
- Angelo Arleo
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
7
Abstract
A number of notions in the fields of motor control and kinesthetic perception have been used without clear definitions. In this review, we consider definitions for efference copy, percept, and sense of effort based on recent studies within the physical approach, which assumes that the neural control of movement is based on principles of parametric control and involves defining time-varying profiles of spatial referent coordinates for the effectors. The apparent redundancy in both motor and perceptual processes is reconsidered based on the principle of abundance. Abundance of efferent and afferent signals is viewed as the means of stabilizing both salient action characteristics and salient percepts formalized as stable manifolds in high-dimensional spaces of relevant elemental variables. This theoretical scheme has led recently to a number of novel predictions and findings. These include, in particular, lower accuracy in perception of variables produced by elements involved in a multielement task compared with the same elements in single-element tasks, dissociation between motor and perceptual effects of muscle coactivation, force illusions induced by muscle vibration, and errors in perception of unintentional drifts in performance. Taken together, these results suggest that participation of efferent signals in perception frequently involves distorted copies of actual neural commands, particularly those to antagonist muscles. Sense of effort is associated with such distorted efferent signals. Distortions in efference copy happen spontaneously and can also be caused by changes in sensory signals, e.g., those produced by muscle vibration.
Affiliation(s)
- Mark L Latash
- Department of Kinesiology, The Pennsylvania State University, University Park, Pennsylvania
8
Parietal Cortex Integrates Saccade and Object Orientation Signals to Update Grasp Plans. J Neurosci 2020; 40:4525-4535. PMID: 32354854. DOI: 10.1523/jneurosci.0300-20.2020.
Abstract
Coordinated reach-to-grasp movements are often accompanied by rapid eye movements (saccades) that displace the desired object image relative to the retina. Parietal cortex compensates for this by updating reach goals relative to current gaze direction, but its role in the integration of oculomotor and visual orientation signals for updating grasp plans is unknown. Based on a recent perceptual experiment, we hypothesized that inferior parietal cortex (specifically supramarginal gyrus [SMG]) integrates saccade and visual signals to update grasp plans in additional intraparietal/superior parietal regions. To test this hypothesis in humans (7 females, 6 males), we used a functional magnetic resonance imaging paradigm where saccades sometimes interrupted grasp preparation toward a briefly presented object that later reappeared (with the same or a different orientation) just before movement. Right SMG and several parietal grasp regions, namely, left anterior intraparietal sulcus and bilateral superior parietal lobule, met our criteria for transsaccadic orientation integration: they showed task-dependent saccade modulations and, during grasp execution, they were specifically sensitive to changes in object orientation that followed saccades. Finally, SMG showed enhanced functional connectivity with both prefrontal saccade regions (consistent with oculomotor input) and anterior intraparietal sulcus/superior parietal lobule (consistent with sensorimotor output). These results support the general role of parietal cortex in the integration of visuospatial perturbations, and provide specific cortical modules for the integration of oculomotor and visual signals for grasp updating.
SIGNIFICANCE STATEMENT: How does the brain simultaneously compensate for both external and internally driven changes in visual input? For example, how do we grasp an unstable object while eye movements are simultaneously changing its retinal location? Here, we used fMRI to identify a group of inferior parietal (supramarginal gyrus) and superior parietal (intraparietal and superior parietal) regions that show saccade-specific modulations during unexpected changes in object/grasp orientation, and functional connectivity with frontal cortex saccade centers. This provides a network, complementary to the reach goal updater, that integrates visuospatial updating into grasp plans, and may help to explain some of the more complex symptoms associated with parietal damage, such as constructional ataxia.
9
Blouin J, Saradjian AH, Pialasse JP, Manson GA, Mouchnino L, Simoneau M. Two Neural Circuits to Point Towards Home Position After Passive Body Displacements. Front Neural Circuits 2019; 13:70. PMID: 31736717. PMCID: PMC6831616. DOI: 10.3389/fncir.2019.00070.
Abstract
A challenge in motor control research is to understand the mechanisms underlying the transformation of sensory information into arm motor commands. Here, we investigated these transformation mechanisms for movements whose targets were defined by information issued from body rotations in the dark (i.e., idiothetic information). Immediately after being rotated, participants reproduced the amplitude of their perceived rotation using their arm (Experiment 1). The cortical activation during movement planning was analyzed using electroencephalography and source analyses. Task-related activities were found in regions of interest (ROIs) located in the prefrontal cortex (PFC), dorsal premotor cortex, dorsal region of the anterior cingulate cortex (ACC), and the sensorimotor cortex. Importantly, critical regions for the cognitive encoding of space did not show significant task-related activities. These results suggest that arm movements were planned using a sensorimotor type of spatial representation. However, when an 8 s delay was introduced between body rotation and the arm movement (Experiment 2), we found that areas involved in the cognitive encoding of space [e.g., ventral premotor cortex (vPM), rostral ACC, inferior and superior posterior parietal cortex (PPC)] showed task-related activities. Overall, our results suggest that the use of a cognitive type of representation for planning arm movement after body motion is necessary when relevant spatial information must be stored before triggering the movement.
Affiliation(s)
- Jean Blouin
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Anahid H Saradjian
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Gerome A Manson
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Centre for Motor Control, University of Toronto, Toronto, ON, Canada
- Laurence Mouchnino
- Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Martin Simoneau
- Faculté de Médecine, Département de Kinésiologie, Université Laval, Québec, QC, Canada
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (CIRRIS), Québec, QC, Canada
10
Mackrous I, Carriot J, Simoneau M. Learning to use vestibular sense for spatial updating is context dependent. Sci Rep 2019; 9:11154. PMID: 31371770. PMCID: PMC6671975. DOI: 10.1038/s41598-019-47675-7.
Abstract
As we move, perceptual stability is crucial to successfully interact with our environment. Notably, the brain must update the locations of objects in space using extra-retinal signals. The vestibular system is a strong candidate as a source of information for spatial updating as it senses head motion. The ability to use this cue is not innate but must be learned. To date, the mechanisms of vestibular spatial updating generalization are unknown or at least controversial. In this paper we examine generalization patterns within and between different conditions of vestibular spatial updating. Participants were asked to update the position of a remembered target following (offline) or during (online) passive body rotation. After being trained on a single spatial target position within a given task, we tested generalization of performance for different spatial targets and an unpracticed spatial updating task. The results demonstrated different patterns of generalization across the workspace depending on the task. Further, no transfer was observed from the practiced to the unpracticed task. We found that the type of mechanism involved during learning governs generalization. These findings provide new knowledge about how the brain uses vestibular information to preserve its spatial updating ability.
Affiliation(s)
- Jérôme Carriot
- Department of Physiology, McGill University, Montreal, QC, Canada
- Martin Simoneau
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (CIRRIS), Québec, QC, Canada
- Département de kinésiologie, Faculté de médecine, Université Laval, Québec, QC, Canada
11
Paraskevoudi N, Pezaris JS. Eye Movement Compensation and Spatial Updating in Visual Prosthetics: Mechanisms, Limitations and Future Directions. Front Syst Neurosci 2019; 12:73. PMID: 30774585. PMCID: PMC6368147. DOI: 10.3389/fnsys.2018.00073.
Abstract
Despite appearing automatic and effortless, perceiving the visual world is a highly complex process that depends on intact visual and oculomotor function. Understanding the mechanisms underlying spatial updating (i.e., gaze contingency) represents an important, yet unresolved, issue in the fields of visual perception and cognitive neuroscience. Many questions regarding the processes involved in updating visual information as a function of the movements of the eyes remain open for research. Beyond its importance for basic research, gaze contingency represents a challenge for visual prosthetics as well. While most artificial vision studies acknowledge its importance in providing accurate visual percepts to blind implanted patients, the majority of current devices do not compensate for gaze position. To date, artificial percepts have been provided to the blind population either by intraocular light-sensing circuitry or by using external cameras. While the former commonly accounts for gaze shifts, the latter requires the use of eye-tracking or similar technology in order to deliver percepts based on gaze position. Inspired by the need to overcome the hurdle of gaze contingency in artificial vision, we aim to provide a thorough overview of the research addressing the neural underpinnings of eye movement compensation, as well as its relevance to visual prosthetics. The present review outlines what is currently known about the mechanisms underlying spatial updating and reviews the attempts of current visual prosthetic devices to overcome the hurdle of gaze contingency. We discuss the limitations of the current devices and highlight the need for eye-tracking methodology in order to introduce gaze-contingent information into visual prosthetics.
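The camera-based gaze compensation this review advocates can be illustrated with a toy sampling step: instead of always sending the frame centre to the implant, the sampled window follows the tracked gaze point. This is a hypothetical sketch, not any device's actual firmware; the function name, window size, and clamping policy are assumptions.

```python
import numpy as np

def gaze_contingent_roi(frame: np.ndarray, gaze_xy: tuple[int, int],
                        roi: int = 64) -> np.ndarray:
    """Crop the camera frame around the current gaze point.

    A head-steered device always samples the frame centre; a
    gaze-compensated device moves the sampled window with the eyes.
    """
    h, w = frame.shape[:2]
    x, y = gaze_xy
    # Clamp so the window stays fully inside the frame
    x0 = min(max(x - roi // 2, 0), w - roi)
    y0 = min(max(y - roi // 2, 0), h - roi)
    return frame[y0:y0 + roi, x0:x0 + roi]

# Toy 256x256 "camera frame"; each pixel encodes its own index
frame = np.arange(256 * 256).reshape(256, 256)
patch = gaze_contingent_roi(frame, gaze_xy=(200, 40))
```

In a real pipeline `gaze_xy` would come from an eye tracker at each stimulation cycle, and `patch` would be downsampled to the electrode array before phosphene encoding.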
Affiliation(s)
- Nadia Paraskevoudi
- Brainlab – Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- John S. Pezaris
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, United States
- Department of Neurosurgery, Harvard Medical School, Boston, MA, United States
12
Pierce J, Saj A. A critical review of the role of impaired spatial remapping processes in spatial neglect. Clin Neuropsychol 2018; 33:948-970. DOI: 10.1080/13854046.2018.1503722.
Affiliation(s)
- Jordan Pierce
- Department of Neurosciences, University of Geneva, Geneva, Switzerland
- Arnaud Saj
- Department of Neurosciences, University of Geneva, Geneva, Switzerland
- Department of Neurology, University Hospital of Geneva, Geneva, Switzerland
13
Role of Rostral Fastigial Neurons in Encoding a Body-Centered Representation of Translation in Three Dimensions. J Neurosci 2018; 38:3584-3602. PMID: 29487123. DOI: 10.1523/jneurosci.2116-17.2018.
Abstract
Many daily behaviors rely critically on estimates of our body motion. Such estimates must be computed by combining neck proprioceptive signals with vestibular signals that have been transformed from a head- to a body-centered reference frame. Recent studies showed that deep cerebellar neurons in the rostral fastigial nucleus (rFN) reflect these computations, but whether they explicitly encode estimates of body motion remains unclear. A key limitation in addressing this question is that, to date, cell tuning properties have only been characterized for a restricted set of motions across head-re-body orientations in the horizontal plane. Here we examined, for the first time, how 3D spatiotemporal tuning for translational motion varies with head-re-body orientation in both horizontal and vertical planes in the rFN of male macaques. While vestibular coding was profoundly influenced by head-re-body position in both planes, neurons typically reflected at most a partial transformation. However, their tuning shifts were not random but followed the specific spatial trajectories predicted for a 3D transformation. We show that these properties facilitate the linear decoding of fully body-centered motion representations in 3D with a broad range of temporal characteristics from small groups of 5-7 cells. These results demonstrate that the vestibular reference frame transformation required to compute body motion is indeed encoded by cerebellar neurons. We propose that maintaining partially transformed rFN responses with different spatiotemporal properties facilitates the creation of downstream body motion representations with a range of dynamic characteristics, consistent with the functional requirements for tasks such as postural control and reaching.
SIGNIFICANCE STATEMENT: Estimates of body motion are essential for many daily activities. Vestibular signals are important contributors to such estimates but must be transformed from a head- to a body-centered reference frame. Here, we provide the first direct demonstration that the cerebellum computes this transformation fully in 3D. We show that the output of these computations is reflected in the tuning properties of deep cerebellar rostral fastigial nucleus neurons in a specific distributed fashion that facilitates the efficient creation of body-centered translation estimates with a broad range of temporal properties (i.e., from acceleration to position). These findings support an important role for the rostral fastigial nucleus as a source of body translation estimates functionally relevant for behaviors ranging from postural control to perception.
14
Nau M, Navarro Schröder T, Bellmund JLS, Doeller CF. Hexadirectional coding of visual space in human entorhinal cortex. Nat Neurosci 2018; 21:188-190. PMID: 29311746. DOI: 10.1038/s41593-017-0050-8.
Abstract
Entorhinal grid cells map the local environment, but their involvement beyond spatial navigation remains elusive. We examined human functional MRI responses during a highly controlled visual tracking task and show that entorhinal cortex exhibited a sixfold rotationally symmetric signal encoding gaze direction. Our results provide evidence for a grid-like entorhinal code for visual space and suggest a more general role of the entorhinal grid system in coding information along continuous dimensions.
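The sixfold rotationally symmetric signal reported here is conventionally tested with a GLM containing sin(6θ) and cos(6θ) regressors of gaze (or movement) direction. A minimal sketch on synthetic data (grid orientation, amplitude, and noise level are all invented; this is the generic analysis, not the authors' fMRI pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic gaze directions (radians) and a signal with true sixfold
# symmetry at grid orientation phi = 15 degrees, plus noise.
theta = rng.uniform(0, 2 * np.pi, 500)
phi = np.deg2rad(15.0)
signal = 2.0 * np.cos(6 * (theta - phi)) + rng.normal(0, 0.5, theta.size)

# GLM with sixfold regressors: signal ~ b1*sin(6θ) + b2*cos(6θ) + const
X = np.column_stack([np.sin(6 * theta), np.cos(6 * theta), np.ones_like(theta)])
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
b_sin, b_cos, _ = beta

# Recovered grid orientation (degrees) and sixfold modulation amplitude
orientation = np.rad2deg(np.arctan2(b_sin, b_cos) / 6)
amplitude = np.hypot(b_sin, b_cos)
```

In the full procedure the orientation is typically estimated on one half of the data and the sixfold modulation then tested on the other half; the sketch collapses both steps for brevity.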
Affiliation(s)
- Matthias Nau
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, NTNU, Norwegian University of Science and Technology, Trondheim, Norway; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Tobias Navarro Schröder
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, NTNU, Norwegian University of Science and Technology, Trondheim, Norway; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Jacob L S Bellmund
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, NTNU, Norwegian University of Science and Technology, Trondheim, Norway; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Christian F Doeller
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, NTNU, Norwegian University of Science and Technology, Trondheim, Norway; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
15
Titchener SA, Shivdasani MN, Fallon JB, Petoe MA. Gaze Compensation as a Technique for Improving Hand-Eye Coordination in Prosthetic Vision. Transl Vis Sci Technol 2018; 7:2. [PMID: 29321945] [PMCID: PMC5759363] [DOI: 10.1167/tvst.7.1.2]
Abstract
Purpose Shifting the region-of-interest within the input image to compensate for gaze shifts (“gaze compensation”) may improve hand–eye coordination in visual prostheses that incorporate an external camera. The present study investigated the effects of eye movement on hand-eye coordination under simulated prosthetic vision (SPV), and measured the coordination benefits of gaze compensation. Methods Seven healthy-sighted subjects performed a target localization-pointing task under SPV. Three conditions were tested, modeling: retinally stabilized phosphenes (uncompensated); gaze compensation; and no phosphene movement (center-fixed). The error in pointing was quantified for each condition. Results Gaze compensation yielded a significantly smaller pointing error than the uncompensated condition for six of seven subjects, and a similar or smaller pointing error than the center-fixed condition for all subjects (two-way ANOVA, P < 0.05). Pointing error eccentricity and gaze eccentricity were moderately correlated in the uncompensated condition (azimuth: R2 = 0.47; elevation: R2 = 0.51) but not in the gaze-compensated condition (azimuth: R2 = 0.01; elevation: R2 = 0.00). Increased variability in gaze at the time of pointing was correlated with greater reduction in pointing error in the center-fixed condition compared with the uncompensated condition (R2 = 0.64). Conclusions Eccentric eye position impedes hand–eye coordination in SPV. While limiting eye eccentricity in uncompensated viewing can reduce errors, gaze compensation is effective in improving coordination for subjects unable to maintain fixation. Translational Relevance The results highlight the present necessity for suppressing eye movement and support the use of gaze compensation to improve hand–eye coordination and localization performance in prosthetic vision.
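In a camera-based prosthesis, "gaze compensation" amounts to sampling the camera frame at a window displaced by the current gaze offset, so that the phosphene image follows the eye instead of staying retinally stabilized. A toy sketch (the `sample_roi` helper, the calibration constant, and the gaze values are hypothetical, not from the device described here):

```python
import numpy as np

def sample_roi(image, center, size):
    """Crop a square region-of-interest around `center` (row, col)."""
    r, c = center
    h = size // 2
    return image[r - h:r + h, c - h:c + h]

# Toy camera frame with a bright "target" patch off to the right
frame = np.zeros((240, 320))
frame[100:110, 200:210] = 1.0

deg_per_px = 0.2                 # assumed camera calibration
gaze = (-4.0, 9.0)               # gaze offset in degrees (elev, azim)

# Uncompensated: ROI fixed at image centre (retinally stabilised phosphenes)
fixed = sample_roi(frame, (120, 160), 64)

# Gaze-compensated: shift the ROI by the gaze offset converted to pixels
shift = (round(gaze[0] / deg_per_px), round(gaze[1] / deg_per_px))
compensated = sample_roi(frame, (120 + shift[0], 160 + shift[1]), 64)
```

With the eccentric gaze above, the fixed window misses the target entirely while the compensated window captures it, which is the coordination benefit the study quantifies behaviorally.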
Affiliation(s)
- Samuel A Titchener
- The Bionics Institute of Australia, East Melbourne, Australia; Department of Medical Bionics, University of Melbourne, Parkville, Australia
- Mohit N Shivdasani
- The Bionics Institute of Australia, East Melbourne, Australia; Department of Medical Bionics, University of Melbourne, Parkville, Australia
- James B Fallon
- The Bionics Institute of Australia, East Melbourne, Australia; Department of Medical Bionics, University of Melbourne, Parkville, Australia
- Matthew A Petoe
- The Bionics Institute of Australia, East Melbourne, Australia; Department of Medical Bionics, University of Melbourne, Parkville, Australia
16
Medendorp WP, de Brouwer AJ, Smeets JB. Dynamic representations of visual space for perception and action. Cortex 2018; 98:194-202. [DOI: 10.1016/j.cortex.2016.11.013]
17
Gutteling TP, Schutter DJLG, Medendorp WP. Alpha-band transcranial alternating current stimulation modulates precision, but not gain during whole-body spatial updating. Neuropsychologia 2017; 106:52-59. [PMID: 28888892] [DOI: 10.1016/j.neuropsychologia.2017.09.005]
Abstract
Spatial updating is essential to maintain an accurate representation of our visual environment when we move. A neural mechanism that contributes to this ability is called remapping: the transfer of visual information from neural populations that code a location before the motion to those that encode it after the motion. While there is ample evidence for neural remapping in conjunction with eye movements, only recent findings suggest a role of this mechanism for whole-body motion updating, based on the observation that alpha-band (10 Hz) activity is selectively reorganized during remapping. This study tested whether alpha oscillations directly contribute to whole-body motion updating using transcranial alternating current stimulation (tACS). In a double-blind, sham-controlled design, healthy volunteers received 10 Hz tACS at an intensity of 1 mA over either the left or right posterior hemisphere during a whole-body motion updating task. Updating performance was assessed psychometrically, and indices of gain and precision were obtained. No tACS-related effects on updating gain were found, irrespective of whether the remapping was across or within the hemispheres. In contrast, updating precision was enhanced when a target representation had to be internally remapped to the stimulated hemisphere, but not in other remapping conditions. Our observations suggest that alpha-band oscillations do not directly affect the transfer of target representations during remapping, but increase the fidelity of the updated representation by attenuating interference from afferent information.
Affiliation(s)
- T P Gutteling
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Montessorilaan 3, 6525 HR Nijmegen, The Netherlands
- D J L G Schutter
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Montessorilaan 3, 6525 HR Nijmegen, The Netherlands
- W P Medendorp
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Montessorilaan 3, 6525 HR Nijmegen, The Netherlands
18
Dugué GP, Tihy M, Gourévitch B, Léna C. Cerebellar re-encoding of self-generated head movements. eLife 2017; 6:e26179. [PMID: 28608779] [PMCID: PMC5489315] [DOI: 10.7554/elife.26179]
Abstract
Head movements are primarily sensed in a reference frame tied to the head, yet they are used to calculate self-orientation relative to the world. This requires re-encoding head kinematic signals into a reference frame anchored to earth-centered landmarks such as gravity, through computations whose neuronal substrate remains to be determined. Here, we studied the encoding of self-generated head movements in the rat caudal cerebellar vermis, an area essential for graviceptive functions. We found that, contrary to peripheral vestibular inputs, most Purkinje cells exhibited a mixed sensitivity to head rotational and gravitational information and were differentially modulated by active and passive movements. In a subpopulation of cells, this mixed sensitivity underlay a tuning to rotations about an axis defined relative to gravity. Therefore, we show that the caudal vermis hosts a re-encoded, gravitationally polarized representation of self-generated head kinematics in freely moving rats.
Affiliation(s)
- Guillaume P Dugué
- Neurophysiology of Brain Circuits Team, Institut de Biologie de l'École Normale Supérieure, Inserm U1024, CNRS UMR8197, École Normale Supérieure, PSL Research University, Paris, France
- Matthieu Tihy
- Neurophysiology of Brain Circuits Team, Institut de Biologie de l'École Normale Supérieure, Inserm U1024, CNRS UMR8197, École Normale Supérieure, PSL Research University, Paris, France
- Boris Gourévitch
- Genetics and Physiology of Hearing Laboratory, Inserm UMR1120, University Paris 6, Institut Pasteur, Paris, France
- Clément Léna
- Neurophysiology of Brain Circuits Team, Institut de Biologie de l'École Normale Supérieure, Inserm U1024, CNRS UMR8197, École Normale Supérieure, PSL Research University, Paris, France
19
Mikellidou K, Turi M, Burr DC. Spatiotopic coding during dynamic head tilt. J Neurophysiol 2016; 117:808-817. [PMID: 27903636] [DOI: 10.1152/jn.00508.2016]
Abstract
Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding.NEW & NOTEWORTHY Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation.
Affiliation(s)
- Kyriaki Mikellidou
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Marco Turi
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy; Fondazione Stella Maris Mediterraneo, Chiaromonte, Potenza, Italy
- David C Burr
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy; Neuroscience Institute, National Research Council (CNR), Pisa, Italy
20
Genzel D, Firzlaff U, Wiegrebe L, MacNeilage PR. Dependence of auditory spatial updating on vestibular, proprioceptive, and efference copy signals. J Neurophysiol 2016; 116:765-75. [PMID: 27169504] [DOI: 10.1152/jn.00052.2016]
Abstract
Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the active condition, subjects rotated their head by ∼35° to the left or right, following a pretrained trajectory. In the passive condition, subjects were rotated along the same trajectory in a rotating chair. In the cancellation condition, subjects rotated their head as in the active condition, but the chair was counter-rotated on the basis of head-tracking data such that the head effectively remained fixed in space while the body rotated beneath it. Subjects updated most accurately in the passive condition but erred in the active and cancellation conditions. Performance is interpreted as reflecting the accuracy of perceived head rotation across conditions, which is modeled as a linear combination of proprioceptive/efference copy signals and vestibular signals. Resulting weights suggest that auditory updating is dominated by vestibular signals but with significant contributions from proprioception/efference copy. Overall, results shed light on the interplay of sensory and motor signals that determine the accuracy of auditory spatial updating.
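The final modeling step described here, perceived head rotation as a linear combination of vestibular and proprioceptive/efference-copy signals, reduces to fitting two weights by least squares. An illustrative reconstruction on synthetic trials (the weights, noise levels, and trial structure are invented, not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic trials: rotation magnitude sensed by the vestibular system and
# by proprioception/efference copy (a noisy copy in these toy trials).
vest = rng.normal(35, 5, 200) * rng.choice([-1, 1], 200)
prop = vest + rng.normal(0, 3, 200)

# Generate "perceived" rotation from assumed true weights, plus noise
w_true = np.array([0.7, 0.3])        # vestibular-dominated mixture
perceived = w_true[0] * vest + w_true[1] * prop + rng.normal(0, 2, 200)

# Recover the weights by least squares
X = np.column_stack([vest, prop])
w_hat, *_ = np.linalg.lstsq(X, perceived, rcond=None)
```

The two regressors are highly correlated, so weight recovery depends on conditions (such as the cancellation condition above) that dissociate the two signals; here the dissociation is supplied by the independent proprioceptive noise.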
Affiliation(s)
- Daria Genzel
- Department Biology II, Ludwig-Maximilian University of Munich, Planegg-Martinsried, Germany; Bernstein Center for Computational Neuroscience Munich, Planegg-Martinsried, Germany
- Uwe Firzlaff
- Bernstein Center for Computational Neuroscience Munich, Planegg-Martinsried, Germany; Chair of Zoology, Technische Universität München, Freising-Weihenstephan, Germany
- Lutz Wiegrebe
- Department Biology II, Ludwig-Maximilian University of Munich, Planegg-Martinsried, Germany; Bernstein Center for Computational Neuroscience Munich, Planegg-Martinsried, Germany
- Paul R MacNeilage
- Bernstein Center for Computational Neuroscience Munich, Planegg-Martinsried, Germany; Deutsches Schwindel- und Gleichgewichtszentrum, University Hospital of Munich, Munich, Germany
21
Avoidance of a moving threat in the common chameleon (Chamaeleo chamaeleon): rapid tracking by body motion and eye use. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2016; 202:567-76. [PMID: 27343128] [DOI: 10.1007/s00359-016-1106-z]
Abstract
A chameleon (Chamaeleo chamaeleon) on a perch responds to a nearby threat by moving to the side of the perch opposite the threat, while bilaterally compressing its abdomen, thus minimizing its exposure to the threat. If the threat moves, the chameleon pivots around the perch to maintain its hidden position. How precise is the body rotation and what are the patterns of eye movement during avoidance? Just-hatched chameleons, placed on a vertical perch, on the side roughly opposite to a visual threat, adjusted their position to precisely opposite the threat. If the threat were moved on a horizontal arc at angular velocities of up to 85°/s, the chameleons co-rotated smoothly so that (1) the angle of the sagittal plane of the head relative to the threat and (2) the direction of monocular gaze, were positively and significantly correlated with threat angular position. Eye movements were role-dependent: the eye toward which the threat moved maintained a stable gaze on it, while the contralateral eye scanned the surroundings. This is the first description, to our knowledge, of such a response in a non-flying terrestrial vertebrate, and it is discussed in terms of possible underlying control systems.
22
Mohsenzadeh Y, Dash S, Crawford JD. A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements. Front Syst Neurosci 2016; 10:39. [PMID: 27242452] [PMCID: PMC4867689] [DOI: 10.3389/fnsys.2016.00039]
Abstract
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement; saccades and smooth pursuit. Our proposed model is a non-linear SSM and implemented through a recurrent radial-basis-function neural network in a dual Extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks.
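The core updating dynamics can be caricatured with a one-dimensional Kalman filter in which the remembered, gaze-centered target position is counter-rotated by eye velocity between visual measurements. This is a drastic linear simplification of the paper's nonlinear dual-EKF/RBF model, with every constant invented for illustration:

```python
import numpy as np

dt = 0.01                     # 10 ms steps
eye_vel = 20.0                # deg/s rightward smooth pursuit
q, r = 0.01, 0.5              # assumed process / measurement noise variances

x_hat, p = 10.0, 1.0          # remembered target: 10 deg right of gaze
true_x = 10.0

rng = np.random.default_rng(3)
for k in range(100):          # 1 s of pursuit
    # Prediction: gaze moves right, so the gaze-centred target drifts left
    true_x -= eye_vel * dt
    x_hat -= eye_vel * dt
    p += q
    if k == 49:               # one brief visual re-measurement mid-pursuit
        z = true_x + rng.normal(0, np.sqrt(r))
        kgain = p / (p + r)
        x_hat += kgain * (z - x_hat)
        p *= 1 - kgain
# x_hat ends near -10: the target is now about 10 deg left of gaze
```

In the full model the same predict/correct cycle runs through a recurrent radial-basis-function network, which is what lets it also reproduce the saccadic remapping results.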
Affiliation(s)
- Yalda Mohsenzadeh
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada
- Suryadeep Dash
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada; Department of Physiology and Pharmacology, Robarts Research Institute, Western University, London, ON, Canada
- J Douglas Crawford
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, ON, Canada
23
Gutteling TP, Medendorp WP. Role of Alpha-Band Oscillations in Spatial Updating across Whole Body Motion. Front Psychol 2016; 7:671. [PMID: 27199882] [PMCID: PMC4858599] [DOI: 10.3389/fpsyg.2016.00671]
Abstract
When moving around in the world, we have to keep track of important locations in our surroundings. In this process, called spatial updating, we must estimate our body motion and correct representations of memorized spatial locations in accordance with this motion. While the behavioral characteristics of spatial updating across whole-body motion have been studied in detail, its neural implementation has received far less attention. Here we use electroencephalography (EEG) to distinguish various spectral components of this process. Subjects gazed at a central body-fixed point in otherwise complete darkness, while a target was briefly flashed, either left or right from this point. Subjects had to remember the location of this target as either moving along with the body or remaining fixed in the world while being translated sideways on a passive motion platform. After the motion, subjects had to indicate the remembered target location in the instructed reference frame using a mouse response. While the body motion, as detected by the vestibular system, should not affect the representation of body-fixed targets, it should interact with the representation of a world-centered target to update its location relative to the body. We show that the initial presentation of the visual target induced a reduction of alpha-band power in contralateral parieto-occipital areas, which evolved to a sustained increase during the subsequent memory period. Motion of the body led to a reduction of alpha-band power in central parietal areas extending to lateral parieto-temporal areas, irrespective of whether the targets had to be memorized relative to world or body. When updating a world-fixed target, its internal representation shifts hemispheres only when subjects' behavioral responses suggested an update across the body midline.
Our results suggest that parietal cortex is involved in both self-motion estimation and the selective application of this motion information to maintaining target locations as fixed in the world or fixed to the body.
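The alpha-band power measures described here are typically obtained by band-limited spectral estimation. A self-contained sketch on simulated EEG (the sampling rate, amplitudes, and the timing of the power suppression are all invented):

```python
import numpy as np

fs = 250                                  # sampling rate (Hz), assumed
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(4)

# Toy EEG: 10 Hz alpha strong in the first 2 s, suppressed afterwards,
# mimicking an event-related power reduction at motion onset
alpha_amp = np.where(t < 2, 2.0, 0.5)
eeg = alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)

def alpha_power(segment, fs, band=(8, 12)):
    """Mean spectral power in the alpha band from an FFT periodogram."""
    freqs = np.fft.rfftfreq(segment.size, 1 / fs)
    psd = np.abs(np.fft.rfft(segment)) ** 2 / segment.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

early = alpha_power(eeg[t < 2], fs)
late = alpha_power(eeg[t >= 2], fs)       # power drops after "motion onset"
```

Time-resolved analyses as in the study would use a sliding window or wavelet transform rather than whole-segment periodograms, but the band-power quantity is the same.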
Affiliation(s)
- Tjerk P Gutteling
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Netherlands
- W P Medendorp
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Netherlands
24
Dash S, Nazari SA, Yan X, Wang H, Crawford JD. Superior Colliculus Responses to Attended, Unattended, and Remembered Saccade Targets during Smooth Pursuit Eye Movements. Front Syst Neurosci 2016; 10:34. [PMID: 27147987] [PMCID: PMC4828430] [DOI: 10.3389/fnsys.2016.00034]
Abstract
In realistic environments, keeping track of multiple visual targets during eye movements likely involves an interaction between vision, top-down spatial attention, memory, and self-motion information. Recently we found that the superior colliculus (SC) visual memory response is attention-sensitive and continuously updated relative to gaze direction. In that study, animals were trained to remember the location of a saccade target across an intervening smooth pursuit (SP) eye movement (Dash et al., 2015). Here, we modified this paradigm to directly compare the properties of visual and memory updating responses to attended and unattended targets. Our analysis shows that during SP, active SC visual vs. memory updating responses share similar gaze-centered spatio-temporal profiles (suggesting a common mechanism), but updating was weaker by ~25%, delayed by ~55 ms, and far more dependent on attention. Further, during SP the sum of passive visual responses (to distracter stimuli) and memory updating responses (to saccade targets) closely resembled the responses for active attentional tracking of visible saccade targets. These results suggest that SP updating signals provide a damped, delayed estimate of attended location that contributes to the gaze-centered tracking of both remembered and visible saccade targets.
Affiliation(s)
- Suryadeep Dash
- Center for Vision Research, York University, Toronto, ON, Canada; Department of Physiology and Pharmacology, Robarts Research Institute, Western University, London, ON, Canada
- Xiaogang Yan
- Center for Vision Research, York University, Toronto, ON, Canada
- Hongying Wang
- Center for Vision Research, York University, Toronto, ON, Canada
- J Douglas Crawford
- Center for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, Biology and Kinesiology and Health Sciences, York University, Toronto, ON, Canada
25
de Brouwer AJ, Smeets JB, Gutteling TP, Toni I, Medendorp WP. The Müller-Lyer illusion affects visuomotor updating in the dorsal visual stream. Neuropsychologia 2015; 77:119-27. [DOI: 10.1016/j.neuropsychologia.2015.08.012]
26
Mackrous I, Simoneau M. Improving spatial updating accuracy in absence of external feedback. Neuroscience 2015; 300:155-62. [PMID: 25987200] [DOI: 10.1016/j.neuroscience.2015.05.024]
Abstract
Updating the position of an earth-fixed target during whole-body rotation seems to rely on cognitive processes such as the utilization of external feedback. According to perceptual learning models, improvement in performance can also occur without external feedback. The aim of this study was to assess spatial updating improvement in the absence and in the presence of external feedback. While being rotated counterclockwise (CCW), participants had to predict when their body midline had crossed the position of a memorized target. Four experimental conditions were tested: (1) Pre-test: the target was presented 30° in the CCW direction from participant's midline. (2) Practice: the target was located 45° in the CCW direction from participant's midline. One group received external feedback about their spatial accuracy (Mackrous and Simoneau, 2014) while the other group did not. (3) Transfer T(30)CCW: the target was presented 30° in the CCW direction to evaluate whether improvement in performance, during practice, generalized to other target eccentricity. (4) Transfer T(30)CW: the target was presented 30° in the clockwise (CW) direction and participants were rotated CW. This transfer condition evaluated whether improvement in performance generalized to the untrained rotation direction. With practice, performance improved in the absence of external feedback (p=0.004). Nonetheless, larger improvement occurred when external feedback was provided (ps=0.002). During T(30)CCW, performance remained better for the feedback than the no-feedback group (p=0.005). However, no group difference was observed for the untrained direction (p=0.22). We demonstrated that spatial updating improved without external feedback but less than when external feedback was given. These observations are explained by a mixture of calibration processes and supervised vestibular learning.
Affiliation(s)
- I Mackrous
- Département de kinésiologie, Faculté de médecine, Université Laval, Québec, QC, Canada; Centre de recherche du CHU de Québec, Québec, QC, Canada
- M Simoneau
- Département de kinésiologie, Faculté de médecine, Université Laval, Québec, QC, Canada; Centre de recherche du CHU de Québec, Québec, QC, Canada
27
Transsaccadic processing: stability, integration, and the potential role of remapping. Atten Percept Psychophys 2015; 77:3-27. [PMID: 25380979] [DOI: 10.3758/s13414-014-0751-y]
Abstract
While our frequent saccades allow us to sample the complex visual environment in a highly efficient manner, they also raise certain challenges for interpreting and acting upon visual input. In the present, selective review, we discuss key findings from the domains of cognitive psychology, visual perception, and neuroscience concerning two such challenges: (1) maintaining the phenomenal experience of visual stability despite our rapidly shifting gaze, and (2) integrating visual information across discrete fixations. In the first two sections of the article, we focus primarily on behavioral findings. Next, we examine the possibility that a neural phenomenon known as predictive remapping may provide an explanation for aspects of transsaccadic processing. In this section of the article, we delineate and critically evaluate multiple proposals about the potential role of predictive remapping in light of both theoretical principles and empirical findings.
28
Stanford T. Vision: A Moving Hill for Spatial Updating on the Fly. Curr Biol 2015; 25:R115-R117. [DOI: 10.1016/j.cub.2014.12.025]
29
Abstract
Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an "earth-vertical" direction.
30
Continuous updating of visuospatial memory in superior colliculus during slow eye movements. Curr Biol 2015; 25:267-274. [PMID: 25601549 DOI: 10.1016/j.cub.2014.11.064] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2013] [Revised: 10/15/2014] [Accepted: 11/25/2014] [Indexed: 11/23/2022]
Abstract
BACKGROUND Primates can remember and spatially update the visual direction of previously viewed objects during various types of self-motion. It is known that the brain "remaps" visual memory traces relative to gaze just before and after, but not during, discrete gaze shifts called saccades. However, it is not known how visual memory is updated during slow, continuous motion of the eyes. RESULTS Here, we recorded the midbrain superior colliculus (SC) of two rhesus monkeys that were trained to spatially update the location of a saccade target across an intervening smooth pursuit (SP) eye movement. Saccade target location was varied across trials so that it passed through the neuron's receptive field at different points of the SP trajectory. Nearly all (99% of) visual responsive neurons, but no motor neurons, showed a transient memory response that continuously updated the saccade goal during SP. These responses were gaze centered (i.e., shifting across the SC's retinotopic map in opposition to gaze). Furthermore, this response was strongly enhanced by attention and/or saccade target selection. CONCLUSIONS This is the first demonstration of continuous updating of visual memory responses during eye motion. We expect that this would generalize to other visuomotor structures when gaze shifts in a continuous, unpredictable fashion.
31
The use of Argus® II retinal prosthesis by blind subjects to achieve localisation and prehension of objects in 3-dimensional space. Graefes Arch Clin Exp Ophthalmol 2014; 253:1907-14. [PMID: 25547618 DOI: 10.1007/s00417-014-2912-z] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2014] [Revised: 11/13/2014] [Accepted: 11/19/2014] [Indexed: 12/28/2022] Open
Abstract
BACKGROUND The Argus® II retinal prosthesis system has entered mainstream treatment for patients blind from Retinitis Pigmentosa (RP). We set out to evaluate the use of this system by blind subjects to achieve object localisation and prehension in 3-dimensional space. METHODS This is a single-centre, prospective, internally-controlled case series involving 5 blind RP subjects who received the Argus® II implant. The subjects were instructed to visually locate, reach and grasp (i.e. prehension) a small white cuboid object placed at random locations on a black worktop. A flashing LED beacon was attached to the reaching index finger (as a finger marker) to assess the effect of enhanced finger visualisation on performance. Tasks were performed with the prosthesis switched "on" or "off" and with the finger marker switched "on" or "off". Forty-eight trials were performed per subject. Trajectory of each subject's hand movement during the task was recorded by a 3D motion-capture unit (Qualysis®, see supplementary video) and analysed using a MATLAB script. RESULT Percentage of successful prehension±standard deviation was: 71.3 ± 27.1 % with prosthesis on and finger marker on; 77.5 ± 24.5 % with prosthesis on and finger marker off; 0.0 ± 0.0 % with prosthesis off and finger marker on, and 0.00 ± 0.00 % with prosthesis off and finger marker off. The finger marker did not have a significant effect on performance (P = 0.546 and 1, Wilcoxon Signed Rank test, with prosthesis on and off respectively). With prosthesis off, none of the subjects were able to visually locate the target object and no initiation of prehension was attempted. With prosthesis on, prehension was initiated on 82.5 % (range 59-100 %) of the trials with 89.0 % (range 66.7-100 %) achieving successful prehension. CONCLUSION Argus® II subjects were able to achieve object localisation and prehension better with their prosthesis switched on than off.
32
Gutteling TP, Selen LPJ, Medendorp WP. Parallax-sensitive remapping of visual space in occipito-parietal alpha-band activity during whole-body motion. J Neurophysiol 2014; 113:1574-84. [PMID: 25505108 DOI: 10.1152/jn.00477.2014] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Despite the constantly changing retinal image due to eye, head, and body movements, we are able to maintain a stable representation of the visual environment. Various studies on retinal image shifts caused by saccades have suggested that occipital and parietal areas correct for these perturbations by a gaze-centered remapping of the neural image. However, such a uniform, rotational, remapping mechanism cannot work during translations when objects shift on the retina in a more complex, depth-dependent fashion due to motion parallax. Here we tested whether the brain's activity patterns show parallax-sensitive remapping of remembered visual space during whole-body motion. Under continuous recording of electroencephalography (EEG), we passively translated human subjects while they had to remember the location of a world-fixed visual target, briefly presented in front of or behind the eyes' fixation point prior to the motion. Using a psychometric approach we assessed the quality of the memory update, which had to be made based on vestibular feedback and other extraretinal motion cues. All subjects showed a variable amount of parallax-sensitive updating errors, i.e., the direction of the errors depended on the depth of the target relative to fixation. The EEG recordings show a neural correlate of this parallax-sensitive remapping in the alpha-band power at occipito-parietal electrodes. At parietal electrodes, the strength of these alpha-band modulations correlated significantly with updating performance. These results suggest that alpha-band oscillatory activity reflects the time-varying updating of gaze-centered spatial information during parallax-sensitive remapping during whole-body motion.
Collapse
Affiliation(s)
- T P Gutteling
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- L P J Selen
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- W P Medendorp
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands

33
Tanaka LL, Dessing JC, Malik P, Prime SL, Crawford JD. The effects of TMS over dorsolateral prefrontal cortex on trans-saccadic memory of multiple objects. Neuropsychologia 2014; 63:185-93. [PMID: 25192630 DOI: 10.1016/j.neuropsychologia.2014.08.025] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2014] [Revised: 07/04/2014] [Accepted: 08/20/2014] [Indexed: 10/24/2022]
Abstract
Humans typically make several rapid eye movements (saccades) per second. It is thought that visual working memory can retain and spatially integrate three to four objects or features across each saccade but little is known about this neural mechanism. Previously we showed that transcranial magnetic stimulation (TMS) to the posterior parietal cortex and frontal eye fields degrade trans-saccadic memory of multiple object features (Prime, Vesia, & Crawford, 2008, Journal of Neuroscience, 28(27), 6938-6949; Prime, Vesia, & Crawford, 2010, Cerebral Cortex, 20(4), 759-772.). Here, we used a similar protocol to investigate whether dorsolateral prefrontal cortex (DLPFC), an area involved in spatial working memory, is also involved in trans-saccadic memory. Subjects were required to report changes in stimulus orientation with (saccade task) or without (fixation task) an eye movement in the intervening memory interval. We applied single-pulse TMS to left and right DLPFC during the memory delay, timed at three intervals to arrive approximately 100 ms before, 100 ms after, or at saccade onset. In the fixation task, left DLPFC TMS produced inconsistent results, whereas right DLPFC TMS disrupted performance at all three intervals (significantly for presaccadic TMS). In contrast, in the saccade task, TMS consistently facilitated performance (significantly for left DLPFC/perisaccadic TMS and right DLPFC/postsaccadic TMS) suggesting a dis-inhibition of trans-saccadic processing. These results are consistent with a neural circuit of trans-saccadic memory that overlaps and interacts with, but is partially separate from the circuit for visual working memory during sustained fixation.
Collapse
Affiliation(s)
- L L Tanaka
- Centre for Vision Research and Canadian Action and Perception Network, York University, Toronto, Canada; Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Canada
- J C Dessing
- Centre for Vision Research and Canadian Action and Perception Network, York University, Toronto, Canada; School of Psychology, Queen's University Belfast, Northern Ireland
- P Malik
- Centre for Vision Research and Canadian Action and Perception Network, York University, Toronto, Canada; Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Canada
- S L Prime
- Department of Psychology, University of Saskatchewan, Canada
- J D Crawford
- Centre for Vision Research and Canadian Action and Perception Network, York University, Toronto, Canada; Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Canada.

34
Moreau-Debord I, Martin CZ, Landry M, Green AM. Evidence for a reference frame transformation of vestibular signal contributions to voluntary reaching. J Neurophysiol 2014; 111:1903-19. [DOI: 10.1152/jn.00419.2013] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
To contribute appropriately to voluntary reaching during body motion, vestibular signals must be transformed from a head-centered to a body-centered reference frame. We quantitatively investigated the evidence for this transformation during online reach execution by using galvanic vestibular stimulation (GVS) to simulate rotation about a head-fixed, roughly naso-occipital axis as human subjects made planar reaching movements to a remembered location with their head in different orientations. If vestibular signals that contribute to reach execution have been transformed from a head-centered to a body-centered reference frame, the same stimulation should be interpreted as body tilt with the head upright but as vertical-axis rotation with the head inclined forward. Consequently, GVS should perturb reach trajectories in a head-orientation-dependent way. Consistent with this prediction, GVS applied during reach execution induced trajectory deviations that were significantly larger with the head forward compared with upright. Only with the head forward were trajectories consistently deviated in opposite directions for rightward versus leftward simulated rotation, as appropriate to compensate for body vertical-axis rotation. These results demonstrate that vestibular signals contributing to online reach execution have indeed been transformed from a head-centered to a body-centered reference frame. Reach deviation amplitudes were comparable to those predicted for ideal compensation for body rotation using a biomechanical limb model. Finally, by comparing the effects of application of GVS during reach execution versus prior to reach onset we also provide evidence that spatially transformed vestibular signals contribute to at least partially distinct compensation mechanisms for body motion during reach planning versus execution.
Collapse
Affiliation(s)
- Ian Moreau-Debord
- Département de Neurosciences, Université de Montréal, Montreal, Quebec, Canada
- Marianne Landry
- Département de Neurosciences, Université de Montréal, Montreal, Quebec, Canada
- Andrea M. Green
- Département de Neurosciences, Université de Montréal, Montreal, Quebec, Canada

35
Ziesche A, Hamker FH. Brain circuits underlying visual stability across eye movements-converging evidence for a neuro-computational model of area LIP. Front Comput Neurosci 2014; 8:25. [PMID: 24653691 PMCID: PMC3949326 DOI: 10.3389/fncom.2014.00025] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2013] [Accepted: 02/14/2014] [Indexed: 11/13/2022] Open
Abstract
The understanding of the subjective experience of a visually stable world despite the occurrence of an observer's eye movements has been the focus of extensive research for over 20 years. These studies have revealed fundamental mechanisms such as anticipatory receptive field (RF) shifts and the saccadic suppression of stimulus displacements, yet there currently exists no single explanatory framework for these observations. We show that a previously presented neuro-computational model of peri-saccadic mislocalization accounts for the phenomenon of predictive remapping and for the observation of saccadic suppression of displacement (SSD). This converging evidence allows us to identify the potential ingredients of perceptual stability that generalize beyond different data sets in a formal physiology-based model. In particular we propose that predictive remapping stabilizes the visual world across saccades by introducing a feedback loop and, as an emergent result, small displacements of stimuli are not noticed by the visual system. The model provides a link from neural dynamics, to neural mechanism and finally to behavior, and thus offers a testable comprehensive framework of visual stability.
Collapse
Affiliation(s)
- Arnold Ziesche
- Artificial Intelligence, Computer Science, Chemnitz University of Technology, Chemnitz, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Fred H Hamker
- Artificial Intelligence, Computer Science, Chemnitz University of Technology, Chemnitz, Germany

36
Can representational trajectory reveal the nature of an internal model of gravity? Atten Percept Psychophys 2014; 76:1106-20. [PMID: 24470258 DOI: 10.3758/s13414-014-0626-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The memory for the vanishing location of a horizontally moving target is usually displaced forward in the direction of motion (representational momentum) and downward in the direction of gravity (representational gravity). Moreover, this downward displacement has been shown to increase with time (representational trajectory). However, the degree to which different kinematic events change the temporal profile of these displacements remains to be determined. The present article attempts to fill this gap. In the first experiment, we replicate the finding that representational momentum for downward-moving targets is bigger than for upward motions, showing, moreover, that it increases rapidly during the first 300 ms, stabilizing afterward. This temporal profile, but not the increased error for descending targets, is shown to be disrupted when eye movements are not allowed. In the second experiment, we show that the downward drift with time emerges even for static targets. Finally, in the third experiment, we report an increased error for upward-moving targets, as compared with downward movements, when the display is compatible with a downward ego-motion by including vection cues. Thus, the errors in the direction of gravity are compatible with the perceived event and do not merely reflect a retinotopic bias. Overall, these results provide further evidence for an internal model of gravity in the visual representational system.
37
Abstract
Our phenomenal world remains stationary in spite of movements of the eyes, head and body. In addition, we can point or turn to objects in the surroundings whether or not they are in the field of view. In this review, I argue that these two features of experience and behaviour are related. The ability to interact with objects we cannot see implies an internal memory model of the surroundings, available to the motor system. And, because we maintain this ability when we move around, the model must be updated, so that the locations of object memories change continuously to provide accurate directional information. The model thus contains an internal representation of both the surroundings and the motions of the head and body: in other words, a stable representation of space. Recent functional MRI studies have provided strong evidence that this egocentric representation has a location in the precuneus, on the medial surface of the superior parietal cortex. This is a region previously identified with 'self-centred mental imagery', so it seems likely that the stable egocentric representation, required by the motor system, is also the source of our conscious percept of a stable world.
Affiliation(s)
- Michael F Land
- School of Life Sciences, University of Sussex, Brighton BN1 9QG, UK

38
Chen X, DeAngelis GC, Angelaki DE. Diverse spatial reference frames of vestibular signals in parietal cortex. Neuron 2013; 80:1310-21. [PMID: 24239126 DOI: 10.1016/j.neuron.2013.09.006] [Citation(s) in RCA: 54] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/25/2013] [Indexed: 10/26/2022]
Abstract
Reference frames are important for understanding how sensory cues from different modalities are coordinated to guide behavior, and the parietal cortex is critical to these functions. We compare reference frames of vestibular self-motion signals in the ventral intraparietal area (VIP), parietoinsular vestibular cortex (PIVC), and dorsal medial superior temporal area (MSTd). Vestibular heading tuning in VIP is invariant to changes in both eye and head positions, indicating a body (or world)-centered reference frame. Vestibular signals in PIVC have reference frames that are intermediate between head and body centered. In contrast, MSTd neurons show reference frames between head and eye centered but not body centered. Eye and head position gain fields were strongest in MSTd and weakest in PIVC. Our findings reveal distinct spatial reference frames for representing vestibular signals and pose new challenges for understanding the respective roles of these areas in potentially diverse vestibular functions.
Collapse
Affiliation(s)
- Xiaodong Chen
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
39
Proske U, Gandevia SC. The proprioceptive senses: their roles in signaling body shape, body position and movement, and muscle force. Physiol Rev 2013; 92:1651-97. [PMID: 23073629 DOI: 10.1152/physrev.00048.2011] [Citation(s) in RCA: 992] [Impact Index Per Article: 90.2] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022] Open
Abstract
This is a review of the proprioceptive senses generated as a result of our own actions. They include the senses of position and movement of our limbs and trunk, the sense of effort, the sense of force, and the sense of heaviness. Receptors involved in proprioception are located in skin, muscles, and joints. Information about limb position and movement is not generated by individual receptors, but by populations of afferents. Afferent signals generated during a movement are processed to code for endpoint position of a limb. The afferent input is referred to a central body map to determine the location of the limbs in space. Experimental phantom limbs, produced by blocking peripheral nerves, have shown that motor areas in the brain are able to generate conscious sensations of limb displacement and movement in the absence of any sensory input. In the normal limb tendon organs and possibly also muscle spindles contribute to the senses of force and heaviness. Exercise can disturb proprioception, and this has implications for musculoskeletal injuries. Proprioceptive senses, particularly of limb position and movement, deteriorate with age and are associated with an increased risk of falls in the elderly. The more recent information available on proprioception has given a better understanding of the mechanisms underlying these senses as well as providing new insight into a range of clinical conditions.
Collapse
Affiliation(s)
- Uwe Proske
- Department of Physiology, Monash University, Victoria, Australia.
40
Shin S, Sommer MA. Division of labor in frontal eye field neurons during presaccadic remapping of visual receptive fields. J Neurophysiol 2012; 108:2144-59. [PMID: 22815407 DOI: 10.1152/jn.00204.2012] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Our percept of visual stability across saccadic eye movements may be mediated by presaccadic remapping. Just before a saccade, neurons that remap become visually responsive at a future field (FF), which anticipates the saccade vector. Hence, the neurons use corollary discharge of saccades. Many of the neurons also decrease their response at the receptive field (RF). Presaccadic remapping occurs in several brain areas including the frontal eye field (FEF), which receives corollary discharge of saccades in its layer IV from a collicular-thalamic pathway. We studied, at two levels, the microcircuitry of remapping in the FEF. At the laminar level, we compared remapping between layers IV and V. At the cellular level, we compared remapping between different neuron types of layer IV. In the FEF in four monkeys (Macaca mulatta), we identified 27 layer IV neurons with orthodromic stimulation and 57 layer V neurons with antidromic stimulation from the superior colliculus. With the use of established criteria, we classified the layer IV neurons as putative excitatory (n = 11), putative inhibitory (n = 12), or ambiguous (n = 4). We found that just before a saccade, putative excitatory neurons increased their visual response at the RF, putative inhibitory neurons showed no change, and ambiguous neurons increased their visual response at the FF. None of the neurons showed presaccadic visual changes at both RF and FF. In contrast, neurons in layer V showed full remapping (at both the RF and FF). Our data suggest that elemental signals for remapping are distributed across neuron types in early cortical processing and combined in later stages of cortical microcircuitry.
Collapse
Affiliation(s)
- Sooyoon Shin
- Department of Neuroscience, Center for the Neural Basis of Cognition, and Center for Neuroscience at the University of Pittsburgh, University of Pittsburgh, Pittsburgh, PA, USA
41
Abstract
Direction of gaze (eye angle + head angle) has been shown to be important for representing space for action, implying a crucial role of vision for spatial updating. However, blind people have no access to vision yet are able to perform goal-directed actions successfully. Here, we investigated the role of visual experience for localizing and updating targets as a function of intervening gaze shifts in humans. People who differed in visual experience (late blind, congenitally blind, or sighted) were briefly presented with a proprioceptive reach target while facing it. Before they reached to the target's remembered location, they turned their head toward an eccentric direction that also induced corresponding eye movements in sighted and late blind individuals. We found that reaching errors varied systematically as a function of shift in gaze direction only in participants with early visual experience (sighted and late blind). In the late blind, this effect was solely present in people with moveable eyes but not in people with at least one glass eye. Our results suggest that the effect of gaze shifts on spatial updating develops on the basis of visual experience early in life and remains even after loss of vision as long as feedback from the eyes and head is available.
42
Tanaka M, Kunimatsu J. Contribution of the central thalamus to the generation of volitional saccades. Eur J Neurosci 2011; 33:2046-57. [PMID: 21645100 DOI: 10.1111/j.1460-9568.2011.07699.x] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
Lesions in the motor thalamus can cause deficits in somatic movements. However, the involvement of the thalamus in the generation of eye movements has only recently been elucidated. In this article, we review recent advances into the role of the thalamus in eye movements. Anatomically, the anterior group of the intralaminar nuclei and paralaminar portion of the ventrolateral, ventroanterior and mediodorsal nuclei of the thalamus send massive projections to the frontal eye field and supplementary eye field. In addition, these parts of the thalamus, collectively known as the 'oculomotor thalamus', receive inputs from the cerebellum, the basal ganglia and virtually all stages of the saccade-generating pathways in the brainstem. In their pioneering work in the 1980s, Schlag and Schlag-Rey found a variety of eye movement-related neurons in the oculomotor thalamus, and proposed that this region might constitute a 'central controller' playing a role in monitoring eye movements and generating self-paced saccades. This hypothesis has been evaluated by recent experiments in non-human primates and by clinical observations of subjects with thalamic lesions. In addition, several recent studies have also addressed the involvement of the oculomotor thalamus in the generation of anti-saccades and the selection of targets for saccades. These studies have revealed the impact of subcortical signals on the higher-order cortical processing underlying saccades, and suggest the possibility of future studies using the oculomotor system as a model to explore the neural mechanisms of global cortico-subcortical loops and the neural basis of a local network between the thalamus and cortex.
Collapse
Affiliation(s)
- Masaki Tanaka
- Department of Physiology, Hokkaido University School of Medicine, Sapporo 060-8638, Japan.
43
Medendorp WP. Spatial constancy mechanisms in motor control. Philos Trans R Soc Lond B Biol Sci 2011; 366:476-91. [PMID: 21242137 DOI: 10.1098/rstb.2010.0089] [Citation(s) in RCA: 55] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye-head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals.
Collapse
Affiliation(s)
- W Pieter Medendorp
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, PO Box 9104, NL-6500 HE Nijmegen, The Netherlands.
44
Zrenner E, Bartz-Schmidt KU, Benav H, Besch D, Bruckmann A, Gabel VP, Gekeler F, Greppmaier U, Harscher A, Kibbel S, Koch J, Kusnyerik A, Peters T, Stingl K, Sachs H, Stett A, Szurman P, Wilhelm B, Wilke R. Subretinal electronic chips allow blind patients to read letters and combine them to words. Proc Biol Sci 2011; 278:1489-97. [PMID: 21047851 PMCID: PMC3081743 DOI: 10.1098/rspb.2010.1747] [Citation(s) in RCA: 625] [Impact Index Per Article: 48.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2010] [Accepted: 10/13/2010] [Indexed: 11/12/2022] Open
Abstract
A light-sensitive, externally powered microchip was surgically implanted subretinally near the macular region of volunteers blind from hereditary retinal dystrophy. The implant contains an array of 1500 active microphotodiodes ('chip'), each with its own amplifier and local stimulation electrode. At the implant's tip, another array of 16 wire-connected electrodes allows light-independent direct stimulation and testing of the neuron-electrode interface. Visual scenes are projected naturally through the eye's lens onto the chip under the transparent retina. The chip generates a corresponding pattern of 38 × 40 pixels, each releasing light-intensity-dependent electric stimulation pulses. Subsequently, three previously blind persons could locate bright objects on a dark table, two of whom could discern grating patterns. One of these patients was able to correctly describe and name objects like a fork or knife on a table, geometric patterns, different kinds of fruit and discern shades of grey with only 15 per cent contrast. Without a training period, the regained visual functions enabled him to localize and approach persons in a room freely and to read large letters as complete words after several years of blindness. These results demonstrate for the first time that subretinal micro-electrode arrays with 1500 photodiodes can create detailed meaningful visual perception in previously blind individuals.
Collapse
Affiliation(s)
- Eberhart Zrenner
- Centre for Ophthalmology, University of Tübingen, Schleichstr. 12, 72076 Tübingen, Germany.
45
Thompson AA, Henriques DY. The coding and updating of visuospatial memory for goal-directed reaching and pointing. Vision Res 2011; 51:819-26. [DOI: 10.1016/j.visres.2011.01.006] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2010] [Revised: 12/15/2010] [Accepted: 01/10/2011] [Indexed: 10/18/2022]
|
46
|
|
47
|
Gaze-centered spatial updating of reach targets across different memory delays. Vision Res 2011; 51:890-7. [PMID: 21219923 DOI: 10.1016/j.visres.2010.12.015] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2010] [Revised: 11/26/2010] [Accepted: 12/22/2010] [Indexed: 11/22/2022]
Abstract
Previous research has demonstrated that remembered reach targets are coded and updated relative to gaze, at least when the reaching movement is made soon after the target has been extinguished. In this study, we tested whether reach targets are updated relative to gaze following different memory delays. Reaching endpoints varied systematically as a function of gaze position relative to the target, irrespective of whether the action was executed immediately or after a delay of 5, 8, or 12 s. These results suggest that memory traces for reach targets continue to be coded in a gaze-dependent reference frame when no external cues are present.
|
48
|
Quaia C, Joiner WM, Fitzgibbon EJ, Optican LM, Smith MA. Eye movement sequence generation in humans: Motor or goal updating? J Vis 2010; 10:28. [PMID: 21191134 PMCID: PMC3610575 DOI: 10.1167/10.14.28] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Saccadic eye movements are often grouped in pre-programmed sequences, but the mechanism that generates each saccade in a sequence is poorly understood. Broadly speaking, two alternative schemes are possible. First, after each saccade the retinotopic location of the next target could be estimated and an appropriate saccade generated; we call this the goal updating hypothesis. Alternatively, multiple motor plans could be pre-computed and then updated after each movement; we call this the motor updating hypothesis. We used McLaughlin's intra-saccadic step paradigm to artificially create a condition under which these two hypotheses make discriminable predictions. In human subjects planning two-saccade sequences, the motor updating hypothesis predicted the landing position of the second saccade much better than the goal updating hypothesis did. This finding suggests that the human saccadic system can execute sequences of saccades to multiple targets by planning multiple motor commands, which are then updated by serial subtraction of ongoing motor output.
Affiliation(s)
- Christian Quaia
- Laboratory of Sensorimotor Research, National Eye Institute, Bethesda, MD, USA.
|
49
|
The missing link for attention pointers: comment on Cavanagh et al. Trends Cogn Sci 2010; 14:473; author reply 474-5. [DOI: 10.1016/j.tics.2010.08.007] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2010] [Accepted: 08/30/2010] [Indexed: 11/17/2022]
|
50
|
Parks NA, Corballis PM. Human transsaccadic visual processing: Presaccadic remapping and postsaccadic updating. Neuropsychologia 2010; 48:3451-8. [DOI: 10.1016/j.neuropsychologia.2010.07.028] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2010] [Revised: 07/16/2010] [Accepted: 07/19/2010] [Indexed: 11/16/2022]
|