1
Tani K, Uehara S, Tanaka S. Psychophysical evidence for the involvement of head/body-centered reference frames in egocentric visuospatial memory: A whole-body roll tilt paradigm. J Vis 2023; 23:16. [PMID: 36689216] [PMCID: PMC9900457] [DOI: 10.1167/jov.23.1.16]
Abstract
Accurate memory of an object's location with respect to one's own body, termed egocentric visuospatial memory, is essential for action directed toward the object. Although researchers have suggested that the brain stores information related to egocentric visuospatial memory not only in the eye-centered reference frame but also in other egocentric (i.e., head- or body-centered, or both) reference frames, experimental evidence is scarce. Here, we tested this possibility by exploiting the perceptual distortion of head/body-centered coordinates induced by whole-body tilt relative to gravity. We hypothesized that if the head/body-centered reference frames are involved in storing the egocentric representation of a target in memory, then reproduction of that representation would be affected by this perceptual distortion. In two experiments, we asked participants to reproduce the remembered location of a visual target relative to their head/body. Using intervening whole-body roll rotations, we manipulated the initial (target presentation) and final (reproduction of the remembered location) body orientations in space and evaluated the effect on the reproduced location. Our results showed significant biases of both the reproduced target location and the perceived head/body longitudinal axis in the direction of the intervening body rotation; importantly, the magnitudes of these two errors were correlated across participants. These results provide experimental evidence for the neural encoding and storage of information related to egocentric visuospatial memory in head/body-centered reference frames.
Affiliation(s)
- Keisuke Tani
- Laboratory of Psychology, Hamamatsu University School of Medicine, Shizuoka, Japan
- Faculty of Psychology, Otemon Gakuin University, Osaka, Japan
- Shintaro Uehara
- Faculty of Rehabilitation, Fujita Health University School of Health Sciences, Aichi, Japan
- Satoshi Tanaka
- Laboratory of Psychology, Hamamatsu University School of Medicine, Shizuoka, Japan
2
Bernard-Espina J, Dal Canto D, Beraneck M, McIntyre J, Tagliabue M. How Tilting the Head Interferes With Eye-Hand Coordination: The Role of Gravity in Visuo-Proprioceptive, Cross-Modal Sensory Transformations. Front Integr Neurosci 2022; 16:788905. [PMID: 35359704] [PMCID: PMC8961421] [DOI: 10.3389/fnint.2022.788905]
Abstract
To correctly position the hand with respect to the spatial location and orientation of an object to be reached or grasped, visual information about the target must be compared with proprioceptive information from the hand. Since the visual and proprioceptive modalities are inherently encoded in retinal and musculo-skeletal reference frames, respectively, this comparison requires cross-modal sensory transformations. Previous studies have shown that lateral tilts of the head interfere with these visuo-proprioceptive transformations. It is unclear, however, whether this phenomenon is related to the neck flexion itself or to the head-gravity misalignment. To answer this question, we performed three virtual reality experiments in which we compared a grasping-like movement with lateral neck flexions executed in an upright seated position and while lying supine. In the main experiment, the task required cross-modal transformations, because the target was sensed visually while the hand was sensed through proprioception only. In the other two control experiments, the task was unimodal, because both target and hand were sensed through one and the same sensory channel (vision and proprioception, respectively), and hence cross-modal processing was unnecessary. The results show that lateral neck flexions have considerably different effects in the seated and supine postures, but only for the cross-modal task. More precisely, the subjects' response variability and the weight given to the visual encoding of the information significantly increased when supine. These findings are consistent with the idea that head-gravity misalignment interferes with visuo-proprioceptive cross-modal processing. Indeed, the principle of statistical optimality in multisensory integration predicts the observed results if the noise associated with the visuo-proprioceptive transformations is assumed to be affected by gravitational signals, and not by neck proprioceptive signals per se. This interpretation is also consistent with the observation of otolithic projections to the posterior parietal cortex, which is involved in visuo-proprioceptive processing. Altogether, these findings represent clear evidence for the theorized central role of gravity in spatial perception: otolithic signals would contribute to reciprocally aligning the reference frames in which the available sensory information can be encoded.
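A minimal numeric sketch of the optimality argument, with all variances hypothetical: if each way of comparing target and hand (in visual or in proprioceptive coordinates) carries its own cross-modal transformation noise, inverse-variance weighting predicts both a higher visual weight and a higher response variability once the transformation toward proprioceptive coordinates becomes noisier, as assumed for the supine posture.

```python
def optimal_combination(var_visual_route, var_proprio_route):
    """Inverse-variance (maximum-likelihood) weighting of two independent
    estimates of the same hand-target vector, one obtained by comparing the
    signals in visual coordinates and one in proprioceptive coordinates."""
    w_vis = (1 / var_visual_route) / (1 / var_visual_route + 1 / var_proprio_route)
    combined_var = 1 / (1 / var_visual_route + 1 / var_proprio_route)
    return w_vis, combined_var

# Hypothetical variances: sensory noise plus cross-modal transformation noise.
# Upright: both transformations are comparably precise.
w_up, var_up = optimal_combination(1.0 + 0.5, 1.0 + 0.5)
# Supine: head-gravity misalignment is assumed to inflate the noise of the
# transformation toward proprioceptive coordinates.
w_sup, var_sup = optimal_combination(1.0 + 0.8, 1.0 + 2.5)

print(f"upright: visual weight = {w_up:.2f}, response variance = {var_up:.2f}")
print(f"supine : visual weight = {w_sup:.2f}, response variance = {var_sup:.2f}")
# Both the visual weight and the overall variance increase when supine,
# matching the reported pattern for the cross-modal task.
```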
Affiliation(s)
- Jules Bernard-Espina
- Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France
- Daniele Dal Canto
- Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France
- Mathieu Beraneck
- Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France
- Joseph McIntyre
- Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France
- Ikerbasque Science Foundation, Bilbao, Spain
- TECNALIA, Basque Research and Technology Alliance (BRTA), San Sebastian, Spain
- Michele Tagliabue
- Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France
- Correspondence: Michele Tagliabue
3
La Scaleia B, Lacquaniti F, Zago M. Body orientation contributes to modelling the effects of gravity for target interception in humans. J Physiol 2019; 597:2021-2043. [PMID: 30644996] [DOI: 10.1113/jp277469]
Abstract
KEY POINTS: It is known that interception of targets accelerated by gravity involves internal models coupled with visual signals. Non-visual signals related to head and body orientation relative to gravity may also contribute, although their role is poorly understood. In a novel experiment, we asked pitched observers to hit a virtual target approaching with an acceleration that was either coherent or incoherent with their pitch tilt. Initially, the timing errors were large and independent of the coherence between target acceleration and the observer's pitch. With practice, however, the timing errors became substantially smaller in the coherent conditions. The results show that information about head and body orientation can contribute to modelling the effects of gravity on a moving target. Orientation cues from vestibular and somatosensory signals might be integrated with visual signals in the vestibular cortex, where the internal model of gravity is assumed to be encoded.

ABSTRACT: Interception of moving targets relies on visual signals and internal models. Less is known about the additional contribution of non-visual cues about head and body orientation relative to gravity. We took advantage of Galileo's law of motion along an incline to demonstrate the effects of vestibular and somatosensory cues about head and body orientation on interception timing. Participants were asked to hit a ball rolling in a gutter towards the eyes, resulting in image expansion. The scene was presented in a head-mounted display, without any visual information about gravity direction. In separate blocks of trials, participants were pitched backwards by 20° or 60°, whereas ball acceleration was randomized across trials so as to be compatible with rolling down a slope of 20° or 60°. Initially, the timing errors were large, independently of the coherence between ball acceleration and pitch angle, consistent with responses based exclusively on visual information, because the visual stimuli were identical at both tilts. By the end of the experiment, however, the timing errors were systematically smaller in the coherent conditions than in the incoherent ones. Moreover, the responses were significantly (P = 0.007) earlier when participants were pitched by 60° than when they were pitched by 20°. Therefore, practice with the task led to incorporation of information about head and body orientation relative to gravity into response timing. By contrast, posture did not affect response timing in a control experiment in which participants hit a static target in synchrony with the last of a predictable series of stationary audiovisual stimuli.
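Since the paradigm leans on Galileo's law for motion along an incline, a short sketch may help; the 6 m approach distance is hypothetical and matches no parameter of the study.

```python
import math

G = 9.81  # m/s^2

def incline_acceleration(pitch_deg):
    """Galileo's law: a ball rolling freely down a slope of angle theta
    accelerates at g*sin(theta) (rolling inertia and friction ignored)."""
    return G * math.sin(math.radians(pitch_deg))

def time_to_contact(distance_m, pitch_deg):
    """Time to cover distance_m from rest: d = a*t^2/2  =>  t = sqrt(2d/a)."""
    return math.sqrt(2 * distance_m / incline_acceleration(pitch_deg))

for pitch in (20, 60):
    print(f"slope {pitch:2d} deg: a = {incline_acceleration(pitch):.2f} m/s^2, "
          f"arrival after {time_to_contact(6.0, pitch):.2f} s")
# A ball coherent with a 60-deg tilt arrives sooner than one coherent with a
# 20-deg tilt, so an internal gravity model fed by body-orientation cues
# predicts the earlier responses observed at 60 deg.
```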
Affiliation(s)
- Barbara La Scaleia
- Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy
- Francesco Lacquaniti
- Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy
- Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
- Centre of Space Bio-medicine, University of Rome Tor Vergata, Rome, Italy
- Myrka Zago
- Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy
- Department of Civil Engineering and Computer Science Engineering, University of Rome Tor Vergata, Rome, Italy
4
Mikellidou K, Turi M, Burr DC. Spatiotopic coding during dynamic head tilt. J Neurophysiol 2016; 117:808-817. [PMID: 27903636] [DOI: 10.1152/jn.00508.2016]
Abstract
Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and a spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding.

NEW & NOTEWORTHY: Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation.
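A toy sketch of the frame dissociation exploited here, with hypothetical coordinates: after a head tilt, the retinotopic hypothesis predicts that the adapted location moves with the head, while the spatiotopic hypothesis predicts that it stays fixed in the world.

```python
import numpy as np

def rot(deg):
    """2-D rotation matrix for a rotation in the frontal plane."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

head_tilt = 42.0                          # tilt between adaptation and test (deg)
adapt_pos = np.array([0.0, 1.0])          # hypothetical adapted location, world frame

spatiotopic = adapt_pos                   # aftereffect anchored in the world
retinotopic = rot(head_tilt) @ adapt_pos  # adapted retinal patch moves with the
                                          # head (small ocular counter-roll neglected)

print("spatiotopic test location:", spatiotopic)
print("retinotopic test location:", np.round(retinotopic, 3))
# Measuring the PMAE at each predicted location separates the two components.
```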
Affiliation(s)
- Kyriaki Mikellidou
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Marco Turi
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Fondazione Stella Maris Mediterraneo, Chiaromonte, Potenza, Italy
- David C Burr
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Neuroscience Institute, National Research Council (CNR), Pisa, Italy
5
Gaveau J, Berret B, Angelaki DE, Papaxanthis C. Direction-dependent arm kinematics reveal optimal integration of gravity cues. eLife 2016; 5:e16394. [PMID: 27805566] [PMCID: PMC5117856] [DOI: 10.7554/elife.16394]
Abstract
The brain has evolved an internal model of gravity to cope with life in the Earth's gravitational environment. How this internal model benefits the implementation of skilled movement has remained unsolved. One prevailing theory assumes that the internal model is used to compensate for gravity's mechanical effects on the body, such as to maintain invariant motor trajectories. Alternatively, gravitational force could be used purposely and efficiently in the planning and execution of voluntary movements, resulting in direction-dependent kinematics. Here we experimentally interrogate these two hypotheses by measuring arm kinematics while varying movement direction under normal gravity and in zero-g conditions. By comparing experimental results with model predictions, we show that the brain uses the internal model to implement control policies that take advantage of gravity to minimize movement effort.
Affiliation(s)
- Jeremie Gaveau
- Université Bourgogne Franche-Comté, INSERM CAPS UMR 1093, Dijon, France
- Bastien Berret
- CIAMS, Université Paris-Sud, Université Paris Saclay, Orsay, France
- CIAMS, Université d'Orléans, Orléans, France
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
6
Space physiology II: adaptation of the central nervous system to space flight—past, current, and future studies. Eur J Appl Physiol 2012; 113:1655-72. [DOI: 10.1007/s00421-012-2509-3]
7
Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011; 34:309-31. [PMID: 21456958] [DOI: 10.1146/annurev-neuro-061010-113749]
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
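A minimal sketch, with hypothetical angles, of the kind of rotational reference-frame chain the review describes: a gaze-centered target vector must be rotated through eye-in-head and head-on-body orientations before it can serve as a body-centered effector command.

```python
import numpy as np

def rot_z(deg):
    """Rotation about the z-axis (a planar stand-in for full 3-D rotations)."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

R_eye_in_head  = rot_z(15.0)   # hypothetical eye orientation in the head
R_head_on_body = rot_z(20.0)   # hypothetical head orientation on the body

target_in_eye = np.array([0.3, 0.1, 1.0])   # gaze-centered target vector

# Chain the rotations to express the goal in body-centered coordinates.
target_in_body = R_head_on_body @ R_eye_in_head @ target_in_eye
print(np.round(target_in_body, 3))
# Dropping either rotation yields a different body-centered goal, which is why
# effector commands must incorporate eye and head orientation signals.
```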
Affiliation(s)
- J Douglas Crawford
- York Centre for Vision Research, Canadian Action and Perception Network, and Departments of Psychology, Toronto, Ontario, Canada, M3J 1P3
8
Medendorp WP. Spatial constancy mechanisms in motor control. Philos Trans R Soc Lond B Biol Sci 2011; 366:476-91. [PMID: 21242137] [DOI: 10.1098/rstb.2010.0089]
Abstract
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye-head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals.
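A two-line sketch of the gaze-centred feed-forward remapping idea (values hypothetical): subtracting an efference copy of the upcoming saccade keeps the stored goal accurate across the eye movement.

```python
import numpy as np

target_gaze_centered = np.array([8.0, 3.0])  # goal relative to current fixation (deg)
saccade_vector       = np.array([5.0, 0.0])  # efference copy of the next saccade

# Feed-forward remapping: shift the stored goal opposite to the eye movement.
updated_target = target_gaze_centered - saccade_vector
print(updated_target)  # -> [3. 3.]: still points at the goal after the saccade
```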
Affiliation(s)
- W Pieter Medendorp
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, PO Box 9104, NL-6500 HE Nijmegen, The Netherlands
9
Angelaki DE, Klier EM, Snyder LH. A vestibular sensation: probabilistic approaches to spatial perception. Neuron 2009; 64:448-61. [PMID: 19945388] [DOI: 10.1016/j.neuron.2009.11.010]
Abstract
The vestibular system helps maintain equilibrium and clear vision through reflexes, but it also contributes to spatial perception. In recent years, research in the vestibular field has expanded to higher-level processing involving the cortex. Vestibular contributions to spatial cognition have been difficult to study because the circuits involved are inherently multisensory. Computational methods and the application of Bayes' theorem are used to form hypotheses about how information from different sensory modalities is combined with expectations based on past experience to obtain optimal estimates of cognitive variables such as current spatial orientation. To test these hypotheses, neuronal populations are being recorded during active tasks in which subjects make decisions based on vestibular and visual or somatosensory information. This review highlights what is currently known about the role of vestibular information in these processes, the computations necessary to obtain the appropriate signals, and the benefits that have emerged thus far.
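A minimal Gaussian example, with hypothetical numbers, of the Bayesian reasoning described: combining a noisy tilt observation with a prior expectation of being upright yields an optimal but biased estimate of spatial orientation.

```python
prior_mean, prior_var = 0.0, 10.0 ** 2   # prior: head usually upright (deg, deg^2)
obs_tilt, obs_var     = 30.0, 5.0 ** 2   # hypothetical noisy sensory observation

# Posterior for Gaussian prior x Gaussian likelihood: precision-weighted mean.
post_precision = 1 / prior_var + 1 / obs_var
post_mean = (prior_mean / prior_var + obs_tilt / obs_var) / post_precision

print(f"perceived tilt ~ {post_mean:.1f} deg (sensed: {obs_tilt:.1f} deg)")
# The upright prior pulls the estimate toward 0 deg: optimal on average, yet
# systematically biased, the kind of prediction such models are tested on.
```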
Affiliation(s)
- Dora E Angelaki
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA
10
Klier EM, Angelaki DE. Spatial updating and the maintenance of visual constancy. Neuroscience 2008; 156:801-18. [PMID: 18786618] [DOI: 10.1016/j.neuroscience.2008.07.079]
Abstract
Spatial updating is the means by which we keep track of the locations of objects in space even as we move. Four decades of research have shown that humans and non-human primates can take the amplitude and direction of intervening movements into account, including saccades (both head-fixed and head-free), pursuit, whole-body rotations and translations. At the neuronal level, spatial updating is thought to be maintained by receptive field locations that shift with changes in gaze, and evidence for such shifts has been shown in several cortical areas. These regions receive information about the intervening movement from several sources, including motor efference copies when a voluntary movement is made and vestibular/somatosensory signals when the body is in motion. Many of these updating signals arise from brainstem regions that monitor our ongoing movements and subsequently transmit this information to the cortex via pathways that likely include the thalamus. Several issues of debate include (1) the relative contribution of extra-retinal sensory and efference copy signals to spatial updating, (2) the source of an updating signal for real-life, three-dimensional motion that cannot arise from brain areas encoding only two-dimensional commands, and (3) the reference frames used by the brain to integrate updating signals from various sources. This review highlights the relevant spatial updating studies and provides a summary of the field today. We find that spatial constancy is maintained by a highly evolved neural mechanism that keeps track of our movements, transmits this information to relevant brain regions, and then uses this information to change the way in which single neurons respond. In this way, we are able to keep track of relevant objects in the outside world and interact with them in meaningful ways.
Affiliation(s)
- E M Klier
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA
11
Medendorp WP, Beurze SM, Van Pelt S, Van Der Werf J. Behavioral and cortical mechanisms for spatial coding and action planning. Cortex 2008; 44:587-97. [DOI: 10.1016/j.cortex.2007.06.001]
12
Van Pelt S, Medendorp WP. Updating Target Distance Across Eye Movements in Depth. J Neurophysiol 2008; 99:2281-90. [DOI: 10.1152/jn.01281.2007]
Abstract
We tested between two coding mechanisms that the brain may use to retain distance information about a target for a reaching movement across vergence eye movements. If the brain were to encode a retinal disparity representation (retinal model), i.e., target depth relative to the plane of fixation, each vergence eye movement would require an active update of this representation to preserve depth constancy. Alternatively, if the brain were to store an egocentric distance representation of the target by integrating retinal disparity and vergence signals at the moment of target presentation, this representation should remain stable across subsequent vergence shifts (nonretinal model). We tested between these schemes by measuring the errors of human reaching movements (n = 14 subjects) to remembered targets briefly presented before a vergence eye movement. For comparison, we also tested their directional accuracy across version eye movements. With intervening vergence shifts, the memory-guided reaches showed an error pattern that was based on the new eye position and on the depth of the remembered target relative to that position. This suggests that target depth is recomputed after the gaze shift, supporting the retinal model. Our results also confirm earlier literature showing retinal updating of target direction. Furthermore, regression analyses revealed updating gains close to one for both target depth and direction, suggesting that the errors arise after the updating stage, during the subsequent reference-frame transformations involved in reaching.
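A schematic contrast of the two schemes, with hypothetical distances: a disparity-based (retinal) code must be recomputed when fixation depth changes, whereas an egocentric distance code would survive the vergence shift unchanged.

```python
# All distances hypothetical, in meters from the eyes.
target_distance  = 0.60
fixation_initial = 0.40
fixation_final   = 0.80   # after the intervening vergence eye movement

# Retinal (disparity) model: depth is stored relative to the fixation plane
# and must be actively updated when fixation shifts.
depth_rel_initial = target_distance - fixation_initial   # +0.20: beyond fixation
depth_rel_updated = target_distance - fixation_final     # -0.20: now in front

# Nonretinal model: the egocentric distance is stored once and needs no update.
egocentric_stored = target_distance

print(f"retinal code before/after: {depth_rel_initial:+.2f} / {depth_rel_updated:+.2f}")
print(f"egocentric code: {egocentric_stored:.2f} (unchanged)")
# Reach errors that scale with depth relative to the *new* fixation point, as
# reported, implicate the retinal scheme with active updating.
```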
13
Ruiz-Ruiz M, Martinez-Trujillo JC. Human updating of visual motion direction during head rotations. J Neurophysiol 2008; 99:2558-76. [PMID: 18337365] [DOI: 10.1152/jn.00931.2007]
Abstract
Previous studies have demonstrated that human subjects update the location of visual targets for saccades after head and body movements, in the absence of visual feedback. This phenomenon is known as spatial updating. Here we investigated whether a similar mechanism exists for the perception of motion direction. We recorded eye positions in three dimensions and behavioral responses in seven subjects during a motion task in two different conditions: when the subject's head remained stationary and when subjects rotated their heads around an anteroposterior axis (head tilt). We demonstrated that (1) after head tilt, subjects updated the direction of saccades made in the perceived stimulus direction (direction-of-motion updating); (2) the amount of updating varied across subjects and stimulus directions; (3) the amount of motion direction updating was highly correlated with the amount of spatial updating during a memory-guided saccade task; (4) subjects updated the stimulus direction during a two-alternative forced-choice direction discrimination task in the absence of saccadic eye movements (perceptual updating); (5) perceptual updating was more accurate than motion direction updating involving saccades; and (6) subjects updated motion direction similarly during active and passive head rotation. These results demonstrate the existence of an updating mechanism for the perception of motion direction in the human brain that operates during active and passive head rotations and that resembles that of spatial updating. Such a mechanism operates during different tasks involving different motor and perceptual skills (saccades and motion direction discrimination) with different degrees of accuracy.
Affiliation(s)
- Mario Ruiz-Ruiz
- Cognitive Neurophysiology Laboratory, Department of Physiology, McGill University, Montreal, Quebec, Canada
14
Yakusheva TA, Shaikh AG, Green AM, Blazquez PM, Dickman JD, Angelaki DE. Purkinje cells in posterior cerebellar vermis encode motion in an inertial reference frame. Neuron 2007; 54:973-85. [PMID: 17582336] [DOI: 10.1016/j.neuron.2007.06.003]
Abstract
The ability to orient and navigate through the terrestrial environment represents a computational challenge common to all vertebrates. It arises because motion sensors in the inner ear, the otolith organs, and the semicircular canals transduce self-motion in an egocentric reference frame. As a result, vestibular afferent information reaching the brain is inappropriate for coding our own motion and orientation relative to the outside world. Here we show that cerebellar cortical neuron activity in vermal lobules 9 and 10 reflects the critical computations of transforming head-centered vestibular afferent information into earth-referenced self-motion and spatial orientation signals. Unlike vestibular and deep cerebellar nuclei neurons, where a mixture of responses was observed, Purkinje cells represent a homogeneous population that encodes inertial motion. They carry the earth-horizontal component of a spatially transformed and temporally integrated rotation signal from the semicircular canals, which is critical for computing head attitude, thus isolating inertial linear accelerations during navigation.
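A toy sketch (hypothetical signals, simple Euler integration) of the computation this population is thought to support: integrating the canal rotation signal tracks gravity in head coordinates via dg/dt = -ω × g, and subtracting that estimate from the otolith gravito-inertial measurement isolates inertial linear acceleration.

```python
import numpy as np

dt = 0.001
g = np.array([0.0, 0.0, 9.81])                  # gravity in head coords, upright
omega = np.array([np.radians(10.0), 0.0, 0.0])  # steady 10 deg/s roll (head frame)

# Temporally integrate the canal signal to track gravity: dg/dt = -omega x g.
for _ in range(int(2.0 / dt)):                  # 2 s of rotation -> 20 deg of roll
    g = g - dt * np.cross(omega, g)

# Otoliths sense gravito-inertial force f = g + a; recover a by subtraction.
f_measured = g + np.array([0.5, 0.0, 0.0])      # hypothetical 0.5 m/s^2 translation
a_inertial = f_measured - g

print("estimated gravity:", np.round(g, 2))     # ~ [0, 3.36, 9.22] after 20 deg
print("inertial acceleration:", np.round(a_inertial, 2))
```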
Affiliation(s)
- Tatyana A Yakusheva
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA
15
Klier EM, Angelaki DE, Hess BJM. Human visuospatial updating after noncommutative rotations. J Neurophysiol 2007; 98:537-44. [PMID: 17442766] [DOI: 10.1152/jn.01229.2006]
Abstract
As we move our bodies in space, we often undergo head and body rotations about different axes: yaw, pitch, and roll. The order in which we rotate about these axes is an important factor in determining the final position of our bodies in space, because rotations, unlike translations, do not commute. Does our brain keep track of the noncommutativity of rotations when computing changes in head and body orientation, and then use this information when planning subsequent motor commands? We used a visuospatial updating task to investigate whether saccades to remembered visual targets are accurate after intervening whole-body rotational sequences. The sequences were reversed, either yaw then roll or roll then yaw, such that the final required eye movements to reach the same space-fixed target were different in each case. While each subject performed consistently irrespective of target location and rotational combination, we found great intersubject variability in their capacity to update. The distance between the noncommutative endpoints was, on average, half of that predicted by perfect noncommutativity. Nevertheless, most subjects did make eye movements to distinct final endpoint locations, and not to one unique location in space as predicted by a commutative model. In addition, their noncommutative performance significantly improved when their less-than-ideal updating performance was taken into account. Thus the brain can produce movements that are consistent with the processing of noncommutative rotations, although it is often poor at using internal estimates of rotation for updating.
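The noncommutativity at stake is easy to exhibit with rotation matrices; here is a sketch with 90° rotations for clarity (the study used different amplitudes), showing that reversing the order of a yaw and a roll leaves a space-fixed target at different head-relative locations, so the required saccades differ.

```python
import numpy as np

def rot_x(deg):   # roll about the naso-occipital axis
    t = np.radians(deg); c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(deg):   # yaw about the vertical axis
    t = np.radians(deg); c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

target = np.array([1.0, 0.0, 0.0])      # space-fixed target, initially straight ahead

# Sequential rotations about space-fixed axes compose by left-multiplication.
yaw_then_roll = rot_x(90) @ rot_z(90)   # first yaw, then roll
roll_then_yaw = rot_z(90) @ rot_x(90)   # first roll, then yaw

# Target in head coordinates after each sequence (R maps head to space).
print(np.round(yaw_then_roll.T @ target, 3))   # -> [ 0. -1.  0.]
print(np.round(roll_then_yaw.T @ target, 3))   # -> [ 0.  0.  1.]
# Different endpoints: accurate updating must respect the order of rotations.
```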
Affiliation(s)
- Eliana M Klier
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA
16
Glasauer S, Brandt T. Noncommutative updating of perceived self-orientation in three dimensions. J Neurophysiol 2007; 97:2958-64. [PMID: 17287442] [DOI: 10.1152/jn.00655.2006]
Abstract
After whole body rotations around an earth-vertical axis in darkness, subjects can indicate their orientation in space with respect to their initial orientation reasonably well. This is possible because the brain is able to mathematically integrate self-velocity information provided by the vestibular system to obtain self-orientation, a process called path integration. For rotations around multiple axes, however, computations are more demanding to accurately update self-orientation with respect to space. In such a case, simple integration is no longer sufficient because of the noncommutativity of rotations. We investigated whether such updating is possible after three-dimensional whole body rotations and whether the noncommutativity of three-dimensional rotations is taken into account. The ability of ten subjects to indicate their spatial orientation in the earth-horizontal plane was tested after different rotational paths from upright to supine positions. Initial and final orientations of the subjects were the same in all cases, but the paths taken were different, and so were the angular velocities sensed by the vestibular system. The results show that seven of the ten subjects could consistently indicate their final orientation within the earth-horizontal plane. Thus perceived final orientation was independent of the path taken, i.e., the noncommutativity of rotations was taken into account.
Affiliation(s)
- Stefan Glasauer
- Department of Neurology with Center for Sensorimotor Research, Klinikum Grosshadern, Ludwig-Maximilians University, Munich, Germany
17
Van Pelt S, Medendorp WP. Gaze-Centered Updating of Remembered Visual Space During Active Whole-Body Translations. J Neurophysiol 2007; 97:1209-20. [PMID: 17135474] [DOI: 10.1152/jn.00882.2006]
Abstract
Various cortical and sub-cortical brain structures update the gaze-centered coordinates of remembered stimuli to maintain an accurate representation of visual space across eye rotations and to produce suitable motor plans. A major challenge for the computations by these structures is updating across eye translations. When the eyes translate, objects in front of and behind the eyes' fixation point shift in opposite directions on the retina due to motion parallax. It is not known if the brain uses gaze coordinates to compute parallax in the translational updating of remembered space or if it uses gaze-independent coordinates to maintain spatial constancy across translational motion. We tested this by having subjects view targets, flashed in darkness in front of or behind fixation, then translate their body sideways, and subsequently reach to the memorized target. Reach responses showed parallax-sensitive updating errors: errors increased with depth from fixation and reversed in lateral direction for targets presented at opposite depths from fixation. In a series of control experiments, we ruled out possible biasing factors such as the presence of a fixation light during the translation, the eyes accompanying the hand to the target, and the presence of visual feedback about hand position. Quantitative geometrical analysis confirmed that updating errors were better described by using gaze-centered than gaze-independent coordinates. We conclude that spatial updating for translational motion operates in gaze-centered coordinates. Neural network simulations are presented suggesting that the brain relies on ego-velocity signals and stereoscopic depth and direction information in spatial updating during self-motion.
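A toy 2-D geometry, with hypothetical distances, of the parallax problem this updating must solve: after a sideways step with gaze held on the fixation point, targets nearer and farther than fixation end up on opposite sides of the gaze line.

```python
import numpy as np

fixation = np.array([0.0, 1.0])   # fixation point, 1 m straight ahead
near     = np.array([0.0, 0.5])   # target in front of the fixation plane
far      = np.array([0.0, 2.0])   # target behind the fixation plane

def gaze_centered_angle(point, eye):
    """Angle of `point` relative to the gaze line from `eye` to the fixation."""
    to_point, to_fix = point - eye, fixation - eye
    return np.degrees(np.arctan2(to_point[0], to_point[1])
                      - np.arctan2(to_fix[0], to_fix[1]))

for eye in (np.array([0.0, 0.0]), np.array([0.1, 0.0])):  # before/after 10 cm step
    print(f"eye at {eye}: near {gaze_centered_angle(near, eye):+5.1f} deg, "
          f"far {gaze_centered_angle(far, eye):+5.1f} deg")
# After the translation the near and far targets shift in opposite directions
# relative to gaze, the parallax signature found in the reach errors.
```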
Affiliation(s)
- Stan Van Pelt
- Nijmegen Institute for Cognition and Information, Radboud University Nijmegen, NL-6500 HE Nijmegen, The Netherlands
18
Ventre-Dominey J, Vallee B. Vestibular integration in human cerebral cortex contributes to spatial remapping. Neuropsychologia 2007; 45:435-9. [PMID: 16959278] [DOI: 10.1016/j.neuropsychologia.2006.06.031]
Abstract
The process of visuo-spatial updating is crucial in guiding human behaviour. While the parietal cortex has long been considered a principal candidate for performing spatial transformations, the exact underlying mechanisms are still unclear. In this study, we investigated the ability of a patient with a right occipito-parietal lesion to update visual space during vestibularly guided saccades. To quantify possible deficits in visual and vestibular memory processes, we studied the subject's performance in two separate memory tasks, visual (VIS) and vestibular (VEST). In the VIS task, a saccade was elicited from a central fixation point to the location of a memorized visual target; in the VEST task, the saccade was elicited after whole-body rotation, back to the starting position, thus compensating for the rotation. Finally, in an updating task (UPD), the subject had to memorize the position of a visual target and, after a whole-body rotation, produce a saccade to the remembered target location in space. Our main finding was a significant hypometria in the final eye position of both VEST and UPD saccades induced during rotation to the left (contralesional) hemispace, as compared to saccades induced after right (ipsilesional) rotation. Moreover, these deficits in vestibularly guided saccades correlated with deficits in the vestibulo-ocular time constant, reflecting disorders in the inertial vestibular integration path. We conclude that the occipito-parietal cortex in humans can provide a first stage of visuo-spatial remapping by encoding inertial head position signals during gaze orientation.
19
Wei M, Li N, Newlands SD, Dickman JD, Angelaki DE. Deficits and Recovery in Visuospatial Memory During Head Motion After Bilateral Labyrinthine Lesion. J Neurophysiol 2006; 96:1676-82. [PMID: 16760354] [DOI: 10.1152/jn.00012.2006]
Abstract
To keep a stable internal representation of the environment as we move, extraretinal sensory or motor cues are critical for updating neural maps of visual space. Using a memory-saccade task, we studied whether visuospatial updating uses vestibular information. Specifically, we tested whether trained rhesus monkeys maintain the ability to update the conjugate and vergence components of memory-guided eye movements in response to passive translational or rotational head and body movements after bilateral labyrinthine lesion. We found that lesioned animals were acutely compromised in generating the appropriate horizontal versional responses necessary to update the directional goal of memory-guided eye movements after leftward or rightward rotation/translation. This compromised function recovered in the long term, likely using extravestibular (e.g., somatosensory) signals, such that nearly normal performance was observed 4 mo after the lesion. Animals also lost their ability to adjust memory vergence to account for relative distance changes after motion in depth. Not only were these depth deficits larger than the respective effects on version, but they also showed little recovery. We conclude that intact labyrinthine signals are functionally useful for proper visuospatial memory updating during passive head and body movements.
Affiliation(s)
- Min Wei
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA
20
Klier EM, Hess BJM, Angelaki DE. Differences in the Accuracy of Human Visuospatial Memory After Yaw and Roll Rotations. J Neurophysiol 2006; 95:2692-7. [PMID: 16371458] [DOI: 10.1152/jn.01017.2005]
Abstract
Our ability to keep track of objects in the environment, even as we move, has been attributed to various cues, including efference copies, vestibular signals, proprioception, and gravitational cues. However, some cues, such as gravity, may not be used to the same extent for different axes of motion (e.g., yaw vs. roll). We tested whether changes in gravitational cues can be used to improve visuospatial updating performance for yaw rotations, as previously shown for roll. We found differences in updating for yaw and roll rotations: yaw updating is not only associated with larger systematic errors but is also not facilitated by gravity in the same way as roll updating.
Affiliation(s)
- Eliana M Klier
- Department of Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA
21
Kaptein RG, Van Gisbergen JAM. Canal and Otolith Contributions to Visual Orientation Constancy During Sinusoidal Roll Rotation. J Neurophysiol 2006; 95:1936-48. [PMID: 16319209] [DOI: 10.1152/jn.00856.2005]
Abstract
Using vestibular sensors to maintain visual stability during changes in head tilt, crucial when panoramic cues are not available, presents a computational challenge. Reliance on the otoliths requires a neural strategy for resolving their tilt/translation ambiguity, such as canal–otolith interaction or frequency segregation. The canal signal is subject to bandwidth limitations. In this study, we assessed the relative contribution of canal and otolith signals and investigated how they might be processed and combined. The experimental approach was to explore conditions with and without otolith contributions in a frequency range with various degrees of canal activation. We tested the perceptual stability of visual line orientation in six human subjects during passive sinusoidal roll tilt in the dark at frequencies from 0.05 to 0.4 Hz (30° peak to peak). Because subjects were constantly monitoring spatial motion of a visual line in the frontal plane, the paradigm required moment-to-moment updating for ongoing ego motion. Their task was to judge the total spatial sway of the line when it rotated sinusoidally at various amplitudes. From the responses we determined how the line had to be rotated to be perceived as stable in space. Tests were taken both with (subject upright) and without (subject supine) gravity cues. Analysis of these data showed that the compensation for body rotation in the computation of line orientation in space, although always incomplete, depended on vestibular rotation frequency and on the availability of gravity cues. In the supine condition, the compensation for ego motion showed a steep increase with frequency, compatible with an integrated canal signal. The improvement of performance in the upright condition, afforded by graviceptive cues from the otoliths, showed low-pass characteristics. Simulations showed that a linear combination of an integrated canal signal and a gravity-based signal can account for these results.
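A sketch of the linear combination the authors' simulations point to, with purely illustrative gains and cutoff frequencies: a high-pass (integrated canal) path plus a low-pass (graviceptive otolith) path, the latter available only when upright.

```python
import numpy as np

def canal_path_gain(f_hz, cutoff=0.1):
    """High-pass magnitude: integrated canal signals fade at low frequency."""
    w = f_hz / cutoff
    return w / np.sqrt(1 + w ** 2)

def otolith_path_gain(f_hz, cutoff=0.2):
    """Low-pass magnitude: graviceptive tilt cues fade at high frequency."""
    w = f_hz / cutoff
    return 1 / np.sqrt(1 + w ** 2)

for f in (0.05, 0.1, 0.2, 0.4):
    upright = 0.5 * canal_path_gain(f) + 0.4 * otolith_path_gain(f)
    supine  = 0.5 * canal_path_gain(f)      # no useful gravity cue when supine
    print(f"{f:4.2f} Hz: compensation upright ~{upright:.2f}, supine ~{supine:.2f}")
# Supine compensation rises steeply with frequency (canal path alone), while
# the otolith path adds a low-frequency boost when upright; compensation stays
# below 1 everywhere, i.e., always incomplete, as observed.
```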
Affiliation(s)
- Ronald G Kaptein
- Department of Biophysics, Radboud University Nijmegen, Geert Grooteplein 21, 6525 EZ Nijmegen, The Netherlands