1. Tani K, Uehara S, Tanaka S. Psychophysical evidence for the involvement of head/body-centered reference frames in egocentric visuospatial memory: A whole-body roll tilt paradigm. J Vis 2023; 23:16. [PMID: 36689216] [PMCID: PMC9900457] [DOI: 10.1167/jov.23.1.16]
Abstract
Accurate memory of an object's location with respect to one's own body, termed egocentric visuospatial memory, is essential for action directed toward the object. Although researchers have suggested that the brain stores information related to egocentric visuospatial memory not only in an eye-centered reference frame but also in other egocentric (i.e., head- and/or body-centered) reference frames, experimental evidence is scarce. Here, we tested this possibility by exploiting the perceptual distortion of head/body-centered coordinates induced by whole-body tilt relative to gravity. We hypothesized that if head/body-centered reference frames are involved in storing the egocentric representation of a target in memory, then reproduction of that target should be affected by this perceptual distortion. In two experiments, we asked participants to reproduce the remembered location of a visual target relative to their head/body. Using intervening whole-body roll rotations, we manipulated the initial (target presentation) and final (reproduction of the remembered location) body orientations in space and evaluated the effect on the reproduced location. The reproduced target location and the perceived head/body longitudinal axis were both significantly biased in the direction of the intervening body rotation, and, importantly, the magnitudes of these two errors were correlated across participants. These results provide experimental evidence that information related to egocentric visuospatial memory is neurally encoded and stored in head/body-centered reference frames.
Affiliation(s)
- Keisuke Tani: Laboratory of Psychology, Hamamatsu University School of Medicine, Shizuoka, Japan; Faculty of Psychology, Otemon Gakuin University, Osaka, Japan
- Shintaro Uehara: Faculty of Rehabilitation, Fujita Health University School of Health Sciences, Aichi, Japan
- Satoshi Tanaka: Laboratory of Psychology, Hamamatsu University School of Medicine, Shizuoka, Japan
2. Kreyenmeier P, Fooken J, Spering M. Context effects on smooth pursuit and manual interception of a disappearing target. J Neurophysiol 2017; 118:404-415. [PMID: 28515287] [DOI: 10.1152/jn.00217.2017]
Abstract
In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object ("ball") with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated "hit zone." In two experiments (n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points.

NEW & NOTEWORTHY In a novel track-intercept paradigm, human observers tracked a briefly shown object moving across a textured, dynamic context and intercepted it with their finger after it had disappeared. Context motion significantly affected eye and hand movement latency and speed, but not interception accuracy; eye and hand position at interception were correlated on a trial-by-trial basis. Visual context effects may be short-lasting, affecting movement trajectories more than movement end points.
Affiliation(s)
- Philipp Kreyenmeier: Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuro-Cognitive Psychology, Ludwig Maximilian University, Munich, Germany
- Jolande Fooken: Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada
- Miriam Spering: Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada; Center for Brain Health, University of British Columbia, Vancouver, Canada; Institute for Information, Computing and Cognitive Systems, University of British Columbia, Vancouver, Canada; International Collaboration on Repair Discoveries, Vancouver, Canada
3. Filimon F. Are All Spatial Reference Frames Egocentric? Reinterpreting Evidence for Allocentric, Object-Centered, or World-Centered Reference Frames. Front Hum Neurosci 2015; 9:648. [PMID: 26696861] [PMCID: PMC4673307] [DOI: 10.3389/fnhum.2015.00648]
Abstract
The use and neural representation of egocentric spatial reference frames is well-documented. In contrast, whether the brain represents spatial relationships between objects in allocentric, object-centered, or world-centered coordinates is debated. Here, I review behavioral, neuropsychological, neurophysiological (neuronal recording), and neuroimaging evidence for and against allocentric, object-centered, or world-centered spatial reference frames. Based on theoretical considerations, simulations, and empirical findings from spatial navigation, spatial judgments, and goal-directed movements, I suggest that all spatial representations may in fact be dependent on egocentric reference frames.
Affiliation(s)
- Flavia Filimon: Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
4. Thompson AA, Byrne PA, Henriques DYP. Visual targets aren't irreversibly converted to motor coordinates: eye-centered updating of visuospatial memory in online reach control. PLoS One 2014; 9:e92455. [PMID: 24643008] [PMCID: PMC3958509] [DOI: 10.1371/journal.pone.0092455]
Abstract
Counter to the widely accepted hypothesis that sensorimotor transformations convert target locations in spatial memory from an eye-fixed reference frame into a more stable motor-based reference frame, we show that this conversion is not irreversible. Eye-centered representations continue to dominate reach control even during movement execution: the eye-centered target representation persists after conversion to a motor-based frame, is continuously updated as the eyes move during the reach, and is used to modify the reach plan accordingly during online control. While reaches are known to be adjusted online when targets physically shift, our results are the first to show that similar adjustments occur in response to changes in representations of remembered target locations. Specifically, we find that shifts in gaze direction, which produce predictable changes in the internal (specifically, eye-centered) representation of remembered target locations, also produce mid-transport changes in reach kinematics. This indicates that representations of remembered reach targets (and visuospatial memory in general) continue to be updated relative to gaze even after reach onset. Thus, online motor control is influenced dynamically by both external and internal updating mechanisms.
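A minimal formal sketch of the gaze-centered updating described above (notation assumed here, not the authors'): if a remembered target is stored as an eye-centered vector, an intervening gaze shift must be subtracted from the stored representation for it to remain accurate,
\[ \hat{\mathbf{t}}_{\mathrm{eye}}^{\,\mathrm{new}} \approx \hat{\mathbf{t}}_{\mathrm{eye}}^{\,\mathrm{old}} - \Delta\mathbf{g}, \]
where \(\Delta\mathbf{g}\) is the change in gaze direction. This linear form is only an approximation; the full 3-D problem involves rotations (see entry 7).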
Affiliation(s)
- Aidan A Thompson: Centre for Vision Research, York University, Toronto, Ontario, Canada; School of Kinesiology & Health Science, York University, Toronto, Ontario, Canada
- Patrick A Byrne: Centre for Vision Research, York University, Toronto, Ontario, Canada
- Denise Y P Henriques: Centre for Vision Research, York University, Toronto, Ontario, Canada; School of Kinesiology & Health Science, York University, Toronto, Ontario, Canada
5. Schütz I, Henriques DYP, Fiehler K. Gaze-centered spatial updating in delayed reaching even in the presence of landmarks. Vision Res 2013; 87:46-52. [PMID: 23770521] [DOI: 10.1016/j.visres.2013.06.001]
Abstract
Previous results suggest that, when no other visual information is available, the brain predominantly relies on a constantly updated gaze-centered target representation to guide reach movements. In the present study, we investigated whether the addition of reliable visual landmarks influences the use of spatial reference frames for immediate and delayed reaching. Subjects reached immediately, or after a delay of 8 or 12 s, to remembered target locations, either with or without landmarks. After target presentation and before reaching, they shifted gaze to one of five different fixation points and held their gaze at this location until the end of the reach. With landmarks present, gaze-dependent reaching errors were smaller and reaches were more precise than without landmarks. Delay influenced neither reaching errors nor variability. These findings suggest that when landmarks are available, the brain still seems to use gaze-dependent representations but combines them with gaze-independent allocentric information to guide immediate or delayed reach movements to visual targets.
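One common way to formalize such a combination (a standard reliability-weighted cue-integration sketch, not necessarily the authors' model) is a weighted average of the egocentric and allocentric position estimates,
\[ \hat{x} = w\,\hat{x}_{\mathrm{ego}} + (1-w)\,\hat{x}_{\mathrm{allo}}, \qquad w = \frac{\sigma_{\mathrm{allo}}^{2}}{\sigma_{\mathrm{ego}}^{2}+\sigma_{\mathrm{allo}}^{2}}, \]
which is consistent with both the reduced bias and the reduced variability reported when landmarks are present, since the combined variance \(\sigma^{2} = \sigma_{\mathrm{ego}}^{2}\sigma_{\mathrm{allo}}^{2}/(\sigma_{\mathrm{ego}}^{2}+\sigma_{\mathrm{allo}}^{2})\) is smaller than either alone.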
Affiliation(s)
- I Schütz: Department of Psychology, Justus-Liebig-University Giessen, Giessen, Germany
6. Thompson AA, Glover CV, Henriques DY. Allocentrically implied target locations are updated in an eye-centred reference frame. Neurosci Lett 2012; 514:214-8. [PMID: 22425720] [DOI: 10.1016/j.neulet.2012.03.004]
7. Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011; 34:309-31. [PMID: 21456958] [DOI: 10.1146/annurev-neuro-061010-113749]
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
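To make the rotational point concrete, a minimal sketch (notation assumed here, not the review's): transforming an eye-centered target vector into body-centered coordinates requires composing the current eye-in-head and head-on-body orientations,
\[ \mathbf{t}_{\mathrm{body}} = R_{\mathrm{head/body}}\; R_{\mathrm{eye/head}}\; \mathbf{t}_{\mathrm{eye}}, \]
where each \(R\) is a 3-D rotation matrix. Because rotations do not commute, and because this sketch ignores the translations between the centers of rotation of eye, head, and shoulder, simple vector addition of gaze and target offsets cannot substitute for the full transformation.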
Affiliation(s)
- J Douglas Crawford: York Centre for Vision Research, Canadian Action and Perception Network, and Department of Psychology, York University, Toronto, Ontario, Canada, M3J 1P3
8. Medendorp WP. Spatial constancy mechanisms in motor control. Philos Trans R Soc Lond B Biol Sci 2011; 366:476-91. [PMID: 21242137] [DOI: 10.1098/rstb.2010.0089]
Abstract
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye-head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals.
Affiliation(s)
- W Pieter Medendorp: Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, PO Box 9104, NL-6500 HE Nijmegen, The Netherlands
9. Thompson AA, Henriques DY. The coding and updating of visuospatial memory for goal-directed reaching and pointing. Vision Res 2011; 51:819-26. [DOI: 10.1016/j.visres.2011.01.006]
10. Locations of serial reach targets are coded in multiple reference frames. Vision Res 2010; 50:2651-60. [DOI: 10.1016/j.visres.2010.09.013]
11. Jones SAH, Henriques DYP. Memory for proprioceptive and multisensory targets is partially coded relative to gaze. Neuropsychologia 2010; 48:3782-92. [PMID: 20934442] [DOI: 10.1016/j.neuropsychologia.2010.10.001]
Abstract
We examined the effect of gaze direction relative to target location on reach endpoint errors made to proprioceptive and multisensory targets. We also explored if and how visual and proprioceptive information about target location are integrated to guide reaches. Participants reached to their unseen left hand in one of three target locations (left of body midline, at body midline, or right of body midline) while it remained at the target site (online), after it was removed from this location (remembered), or after the target hand had been briefly lit before reaching (multisensory target). The target hand was guided to each target location along a robot-generated path. Reaches were made with the right hand in complete darkness while gaze was varied across four eccentric directions. Horizontal reach errors varied systematically with gaze for all target modalities: not only for visually remembered and online proprioceptive targets, as found in previous studies, but, for the first time, also for remembered proprioceptive targets and for proprioceptive targets that were briefly visible. These results suggest that the brain represents the locations of online and remembered proprioceptive reach targets, as well as visual-proprioceptive reach targets, relative to gaze, along with other motor-related representations. Our results, however, do not suggest that visual and proprioceptive information are optimally integrated when coding the location of multisensory reach targets in this paradigm.
12. Interaction between gaze and visual and proprioceptive position judgements. Exp Brain Res 2010; 203:485-98. [DOI: 10.1007/s00221-010-2251-1]