1. Bharmauria V, Seo S, Crawford JD. Neural integration of egocentric and allocentric visual cues in the gaze system. J Neurophysiol 2025; 133:109-120. PMID: 39584726. DOI: 10.1152/jn.00498.2024.
Abstract
A fundamental question in neuroscience is how the brain integrates egocentric (body-centered) and allocentric (landmark-centered) visual cues, but for many years this question was ignored in sensorimotor studies. This changed in recent behavioral experiments, but the underlying physiology of ego/allocentric integration remained largely unstudied. The specific goal of this review is to explain how prefrontal neurons integrate eye-centered and landmark-centered visual codes for optimal gaze behavior. First, we briefly review the whole-brain/behavioral mechanisms for ego/allocentric integration in humans and summarize egocentric coding mechanisms in the primate gaze system. We then focus in more depth on cellular mechanisms for ego/allocentric coding in the frontal and supplementary eye fields. We first explain how prefrontal visual responses integrate eye-centered target and landmark codes to produce a transformation toward landmark-centered coordinates. Next, we describe what happens when a landmark shifts during the delay between seeing and acquiring a remembered target, initially resulting in independently coexisting ego/allocentric memory codes. We then describe how these codes are reintegrated in the motor burst for the gaze shift. Deep network simulations suggest that these properties emerge spontaneously for optimal gaze behavior. Finally, we synthesize these observations and relate them to normal brain function through a simplified conceptual model. Together, these results show that integration of visuospatial features continues well beyond visual cortex and suggest a general cellular mechanism for goal-directed visual behavior.
Affiliation(s)
- Vishal Bharmauria
- The Tampa Human Neurophysiology Lab & Department of Neurosurgery and Brain Repair, Morsani College of Medicine, University of South Florida, Tampa, Florida, United States
- York Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
- Serah Seo
- York Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
- Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- J Douglas Crawford
- York Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada
2. Luabeya GN, Yan X, Freud E, Crawford JD. Influence of gaze, vision, and memory on hand kinematics in a placement task. J Neurophysiol 2024; 132:147-161. PMID: 38836297. DOI: 10.1152/jn.00362.2023.
Abstract
People usually reach for objects to place them in some position and orientation, but the placement component of this sequence is often ignored. For example, reaches are influenced by gaze position, visual feedback, and memory delays, but their influence on object placement is unclear. Here, we tested these factors in a task where participants placed and oriented a trapezoidal block against two-dimensional (2-D) visual templates displayed on a frontally located computer screen. In experiment 1, participants matched the block to three possible orientations: 0° (horizontal), +45°, and -45°, with gaze fixated 10° to the left/right. The hand and template either remained illuminated (closed-loop), or visual feedback was removed (open-loop). Here, hand location consistently overshot the template relative to gaze, especially in the open-loop task; likewise, orientation was influenced by gaze position (depending on template orientation and visual feedback). In experiment 2, a memory delay was added, and participants sometimes performed saccades (toward, away from, or across the template). In this task, the influence of gaze on orientation vanished, but location errors were influenced by both template orientation and final gaze position. Contrary to our expectations, the previous saccade metrics also impacted placement overshoot. Overall, hand orientation was influenced by template orientation in a nonlinear fashion. These results demonstrate interactions between gaze and orientation signals in the planning and execution of hand placement and suggest different neural mechanisms for closed-loop, open-loop, and memory delay placement.

NEW & NOTEWORTHY: Eye-hand coordination studies usually focus on object acquisition, but placement is equally important. We investigated how gaze position influences object placement toward a 2-D template with different levels of visual feedback. Like reach, placement overestimated goal location relative to gaze and was influenced by previous saccade metrics. Gaze also modulated hand orientation, depending on template orientation and level of visual feedback. Gaze influence was feedback-dependent, with location errors having no significant effect after a memory delay.
Affiliation(s)
- Gaelle N Luabeya
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Erez Freud
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- Department of Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
3. Maij F, Seegelke C, Medendorp WP, Heed T. External location of touch is constructed post-hoc based on limb choice. eLife 2020; 9:e57804. PMID: 32945257. PMCID: PMC7561349. DOI: 10.7554/eLife.57804.
Abstract
When humans indicate on which hand a tactile stimulus occurred, they often err when their hands are crossed. This finding seemingly supports the view that the automatically determined touch location in external space affects limb assignment: the crossed right hand is localized in left space, and this conflict presumably provokes hand assignment errors. Here, participants judged on which hand the first of two stimuli, presented during a bimanual movement, had occurred, and then indicated its external location by a reach-to-point movement. When participants incorrectly chose the hand stimulated second, they pointed to where that hand had been at the correct, first time point, though no stimulus had occurred at that location. This behavior suggests that stimulus localization depended on hand assignment, not vice versa. It is, thus, incompatible with the notion of automatic computation of external stimulus location upon occurrence. Instead, humans construct external touch location post-hoc and on demand.
Affiliation(s)
- Femke Maij
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Christian Seegelke
- Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany; Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
- W Pieter Medendorp
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Tobias Heed
- Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany; Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
4. Parietal Cortex Integrates Saccade and Object Orientation Signals to Update Grasp Plans. J Neurosci 2020; 40:4525-4535. PMID: 32354854. DOI: 10.1523/JNEUROSCI.0300-20.2020.
Abstract
Coordinated reach-to-grasp movements are often accompanied by rapid eye movements (saccades) that displace the desired object image relative to the retina. Parietal cortex compensates for this by updating reach goals relative to current gaze direction, but its role in the integration of oculomotor and visual orientation signals for updating grasp plans is unknown. Based on a recent perceptual experiment, we hypothesized that inferior parietal cortex (specifically supramarginal gyrus [SMG]) integrates saccade and visual signals to update grasp plans in additional intraparietal/superior parietal regions. To test this hypothesis in humans (7 females, 6 males), we used a functional magnetic resonance imaging (fMRI) paradigm, where saccades sometimes interrupted grasp preparation toward a briefly presented object that later reappeared (with the same/different orientation) just before movement. Right SMG and several parietal grasp regions, namely, left anterior intraparietal sulcus and bilateral superior parietal lobule, met our criteria for transsaccadic orientation integration: they showed task-dependent saccade modulations and, during grasp execution, they were specifically sensitive to changes in object orientation that followed saccades. Finally, SMG showed enhanced functional connectivity with both prefrontal saccade regions (consistent with oculomotor input) and anterior intraparietal sulcus/superior parietal lobule (consistent with sensorimotor output). These results support the general role of parietal cortex for the integration of visuospatial perturbations and provide specific cortical modules for the integration of oculomotor and visual signals for grasp updating.

SIGNIFICANCE STATEMENT: How does the brain simultaneously compensate for both external and internally driven changes in visual input? For example, how do we grasp an unstable object while eye movements are simultaneously changing its retinal location? Here, we used fMRI to identify a group of inferior parietal (supramarginal gyrus) and superior parietal (intraparietal and superior parietal) regions that show saccade-specific modulations during unexpected changes in object/grasp orientation, and functional connectivity with frontal cortex saccade centers. This provides a network, complementary to the reach goal updater, that integrates visuospatial updating into grasp plans, and may help to explain some of the more complex symptoms associated with parietal damage, such as constructional ataxia.
5. Two types of memory-based (pantomime) reaches distinguished by gaze anchoring in reach-to-grasp tasks. Behav Brain Res 2020; 381:112438. PMID: 31857149. DOI: 10.1016/j.bbr.2019.112438.
Abstract
Comparisons of target-based reaching vs. memory-based (pantomime) reaching have been used to obtain insight into the visuomotor control of reaching. The present study examined the contribution of gaze anchoring, reaching to a target that is under continuous gaze, to both target-based and memory-based reaching. Participants made target-based reaches for discs located on a table or food items located on a pedestal, or they replaced the objects. They then made memory-based reaches in which they pantomimed their target-based reaches. Participants were fitted with hand sensors for kinematic tracking and an eye tracker to monitor gaze. When making target-based reaches, participants directed gaze to the target location from reach onset to offset without interrupting saccades. Similar gaze anchoring was present for memory-based reaches when the surface upon which the target had been placed remained. When the target and its surface were both removed, there was no systematic relationship between gaze and the reach. Gaze anchoring was also present when participants replaced a target on a surface, a movement featuring a reach but little grasp. That memory-based reaches can be either gaze anchor-associated or gaze anchor-independent is discussed in relation to contemporary views of the neural control of reaching.
6. Lu Z, Fiehler K. Spatial updating of allocentric landmark information in real-time and memory-guided reaching. Cortex 2020; 125:203-214. PMID: 32006875. DOI: 10.1016/j.cortex.2019.12.010.
Abstract
The two-streams model of vision suggests that egocentric and allocentric reference frames are utilized by the dorsal and the ventral stream for real-time and memory-guided movements, respectively. Recent studies argue against such a strict functional distinction and suggest that real-time and memory-guided movements recruit the same spatial maps. In this study we focus on allocentric spatial coding and updating of targets by using landmark information in real-time and memory-guided reaching. We presented participants with a naturalistic scene which consisted of six objects on a table that served as potential reach targets. Participants were informed about the target object after scene encoding, and were prompted by a go cue to reach to its position. After target identification, a brief air-puff was applied to the participant's right eye, inducing an eye blink. During the blink the target object disappeared from the scene, and in half of the trials the remaining objects, which functioned as landmarks, were shifted horizontally in the same direction. We found that landmark shifts systematically influenced participants' reaching endpoints irrespective of whether the movements were controlled online based on available target information (real-time movements) or memory-guided based on remembered target information (memory-guided movements). Overall, the effect of landmark shifts was stronger for memory-guided than real-time reaching. Our findings suggest that humans can encode and update reach targets in an allocentric reference frame for both real-time and memory-guided movements and show stronger allocentric coding when the movement is based on memory.
Affiliation(s)
- Zijian Lu
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany.
- Katja Fiehler
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany; Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus-Liebig University, Giessen, Germany.
7. Schenk T, Hesse C. Do we have distinct systems for immediate and delayed actions? A selective review on the role of visual memory in action. Cortex 2018; 98:228-248. DOI: 10.1016/j.cortex.2017.05.014.
8. Comparing the effect of temporal delay on the availability of egocentric and allocentric information in visual search. Behav Brain Res 2017; 331:38-46. PMID: 28526516. DOI: 10.1016/j.bbr.2017.05.018.
Abstract
Frames of reference play a central role in perceiving an object's location and reaching to pick that object up. It is thought that the ventral stream, believed to subserve vision for perception, utilises allocentric coding, while the dorsal stream, argued to be responsible for vision for action, primarily uses an egocentric reference frame. We have previously shown that egocentric representations can survive a delay; however, it is possible that in comparison to allocentric information, egocentric information decays more rapidly. Here we directly compare the effect of delay on the availability of egocentric and allocentric representations. We used spatial priming in visual search and repeated the location of the target relative to either a landmark in the search array (allocentric condition) or the observer's body (egocentric condition). Three inter-trial intervals created minimum delays between two consecutive trials of 2, 4, or 8 seconds. In both conditions, search times to primed locations were faster than search times to un-primed locations. In the egocentric condition the effects were driven by a reduction in search times when egocentric information was repeated, an effect that was observed at all three delays. In the allocentric condition, while search times did not change when the allocentric information was repeated, search times to un-primed target locations became slower. We conclude that egocentric representations are not as transient as previously thought but instead this information is still available, and can influence behaviour, after lengthy periods of delay. We also discuss the possible origins of the differences between allocentric and egocentric priming effects.
9. Gaze-centered coding of proprioceptive reach targets after effector movement: Testing the impact of online information, time of movement, and target distance. PLoS One 2017; 12:e0180782. PMID: 28678886. PMCID: PMC5498052. DOI: 10.1371/journal.pone.0180782.
Abstract
In previous research, we demonstrated that spatial coding of proprioceptive reach targets depends on the presence of an effector movement (Mueller & Fiehler, Neuropsychologia, 2014, 2016). In these studies, participants were asked to reach in darkness with their right hand to a proprioceptive target (tactile stimulation on the fingertip) while their gaze was varied. They either moved their left, stimulated hand towards a target location or kept it stationary at this location, where they received a touch on the fingertip to which they reached with their right hand. When the stimulated hand was moved, reach errors varied as a function of gaze relative to target, whereas reach errors were independent of gaze when the hand was kept stationary. The present study further examines whether (a) the availability of proprioceptive online information, i.e., reaching to an online versus a remembered target, (b) the time of the effector movement, i.e., before or after target presentation, or (c) the target distance from the body influences gaze-centered coding of proprioceptive reach targets. We found gaze-dependent reach errors in the conditions which included a movement of the stimulated hand, irrespective of whether proprioceptive information was available online or remembered. This suggests that an effector movement leads to gaze-centered coding for both online and remembered proprioceptive reach targets. Moreover, moving the stimulated hand before or after target presentation did not affect gaze-dependent reach errors, thus indicating a continuous spatial update of positional signals of the stimulated hand rather than of the target location per se. However, reaching to a location close to the body rather than farther away (but still within reachable space) generally decreased the influence of a gaze-centered reference frame.
10. Bosco A, Piserchia V, Fattori P. Multiple Coordinate Systems and Motor Strategies for Reaching Movements When Eye and Hand Are Dissociated in Depth and Direction. Front Hum Neurosci 2017; 11:323. PMID: 28690504. PMCID: PMC5481402. DOI: 10.3389/fnhum.2017.00323.
Abstract
Reaching behavior represents one of the basic aspects of human cognitive abilities important for the interaction with the environment. Reaching movements towards visual objects are controlled by mechanisms based on coordinate systems that transform the spatial information of target location into an appropriate motor response. Although recent works have extensively studied the encoding of target position for reaching in three-dimensional space at the behavioral level, the combined analysis of reach errors and movement variability has so far been investigated by only a few studies. Here we did so by testing 12 healthy participants in an experiment where reaching targets were presented at different depths and directions in foveal and peripheral viewing conditions. Each participant executed a memory-guided task in which they had to reach to the memorized position of the target. A combination of vector and gradient analysis, novel for behavioral data, was applied to analyze patterns of reach errors for different combinations of eye/target positions. The results showed reach error patterns based on both eye- and space-centered coordinate systems: in depth, errors were more biased towards a space-centered representation, whereas in direction they reflected a mixture of space- and eye-centered representations. We calculated movement variability to describe the different trajectory strategies adopted by participants while reaching to the different eye/target configurations tested. In direction, the distribution of variability differed between configurations that shared the same eye/target relative configuration, whereas it was similar in configurations that shared the same spatial position of targets. In depth, the variability showed more similar distributions in both pairs of eye/target configurations tested. These results suggest that reaching movements executed in geometries that require hand and eye dissociations in direction and depth rely on multiple coordinate systems and different trajectory strategies according to eye/target configurations and the two dimensions of space.
Affiliation(s)
- Annalisa Bosco
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Valentina Piserchia
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Patrizia Fattori
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
11. Klinghammer M, Blohm G, Fiehler K. Scene Configuration and Object Reliability Affect the Use of Allocentric Information for Memory-Guided Reaching. Front Neurosci 2017; 11:204. PMID: 28450826. PMCID: PMC5390010. DOI: 10.3389/fnins.2017.00204.
Abstract
Previous research has shown that egocentric and allocentric information is used for coding target locations for memory-guided reaching movements. In particular, task-relevance determines the use of objects as allocentric cues. Here, we investigated the influence of scene configuration and object reliability as a function of task-relevance on allocentric coding for memory-guided reaching. For that purpose, we presented participants with images of a naturalistic breakfast scene with five objects on a table and six objects in the background. Six of these objects served as potential reach targets (= task-relevant objects). Participants explored the scene, and after a short delay a test scene appeared with one of the task-relevant objects missing, indicating the location of the reach target. After the test scene vanished, participants performed a memory-guided reaching movement toward the target location. Besides removing one object from the test scene, we also shifted the remaining task-relevant and/or task-irrelevant objects leftward or rightward, either coherently in the same direction or incoherently in opposite directions. By varying object coherence, we manipulated the reliability of task-relevant and task-irrelevant objects in the scene. In order to examine the influence of scene configuration (distributed vs. grouped arrangement of task-relevant objects) on allocentric coding, we compared the present data with our previously published data set (Klinghammer et al., 2015). We found that reaching errors systematically deviated in the direction of object shifts, but only when the objects were task-relevant and their reliability was high. However, this effect was substantially reduced when task-relevant objects were distributed across the scene, leading to a larger target-cue distance compared to a grouped configuration. No deviations of reach endpoints were observed in conditions with shifts of only task-irrelevant objects or with low object reliability, irrespective of task-relevance. Moreover, when solely task-relevant objects were shifted incoherently, the variability of reaching endpoints increased compared to coherent shifts of task-relevant objects. Our results suggest that the use of allocentric information for coding targets for memory-guided reaching depends on the scene configuration, in particular the average distance of the reach target to task-relevant objects, and the reliability of task-relevant allocentric information.
Collapse
Affiliation(s)
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Katja Fiehler
- Experimental Psychology, Justus-Liebig-University Giessen, Germany
12. Klinghammer M, Schütz I, Blohm G, Fiehler K. Allocentric information is used for memory-guided reaching in depth: A virtual reality study. Vision Res 2016; 129:13-24. PMID: 27789230. DOI: 10.1016/j.visres.2016.10.004.
Abstract
Previous research has demonstrated that humans use allocentric information when reaching to remembered visual targets, but most of these studies are limited to 2D space. Here, we study allocentric coding of memorized reach targets in 3D virtual reality. In particular, we investigated the use of allocentric information for memory-guided reaching in depth and the role of binocular and monocular (object size) depth cues for coding object locations in 3D space. To this end, we presented a scene with objects on a table which were located at different distances from the observer and served as reach targets or allocentric cues. After free visual exploration of this scene and a short delay, the scene reappeared, but with one object missing (= reach target). In addition, the remaining objects were shifted horizontally or in depth. When objects were shifted in depth, we also independently manipulated object size by either magnifying or reducing their size. After the scene vanished, participants reached to the remembered target location on the blank table. Reaching endpoints deviated systematically in the direction of object shifts, similar to our previous results from 2D presentations. This deviation was stronger for object shifts in depth than in the horizontal plane and independent of observer-target distance. Reaching endpoints also varied systematically with changes in object size. Our results suggest that allocentric information is used for coding targets for memory-guided reaching in depth, with retinal disparity and vergence as well as object size providing important binocular and monocular depth cues.
Affiliation(s)
- Mathias Klinghammer
- Justus-Liebig-University, Experimental Psychology, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
- Immo Schütz
- TU Chemnitz, Institut für Physik, Reichenhainer Str. 70, 09126 Chemnitz, Germany.
- Gunnar Blohm
- Queen's University, Centre for Neuroscience Studies, 18, Stuart Street, Kingston, Ontario K7L 3N6, Canada.
- Katja Fiehler
- Justus-Liebig-University, Experimental Psychology, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany.
13. Mixed body- and gaze-centered coding of proprioceptive reach targets after effector movement. Neuropsychologia 2016; 87:63-73. PMID: 27157885. DOI: 10.1016/j.neuropsychologia.2016.04.033.
Abstract
Previous studies demonstrated that an effector movement intervening between encoding and reaching to a proprioceptive target determines the underlying reference frame: proprioceptive reach targets are represented in a gaze-independent reference frame if no movement occurs, but are represented with respect to gaze after an effector movement (Mueller and Fiehler, 2014a). The present experiment explores whether an effector movement leads to a switch from a gaze-independent, body-centered reference frame to a gaze-dependent reference frame, or whether a gaze-dependent reference frame is employed in addition to a gaze-independent, body-centered reference frame. Human participants were asked to reach in complete darkness to an unseen finger (proprioceptive target) of their left target hand indicated by a touch. They completed two conditions in which the target hand either remained stationary at the target location (stationary condition) or was actively moved to the target location, received a touch, and was moved back before reaching to the target (moved condition). We dissociated the location of the movement vector relative to the body midline and to the gaze direction. Using correlation and regression analyses, we estimated the contribution of each reference frame based on horizontal reach errors in the stationary and moved conditions. Gaze-centered coding was only found in the moved condition, replicating our previous results. Body-centered coding dominated in the stationary condition, while body- and gaze-centered coding contributed about equally in the moved condition. Our results indicate a shift from body-centered to combined body- and gaze-centered coding due to an effector movement before reaching towards proprioceptive targets.
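The regression logic described in this abstract can be illustrated with a short sketch. This is not the authors' analysis code; it is a minimal, hypothetical reconstruction in which horizontal reach errors are modeled as a weighted sum of the errors predicted by a pure gaze-centered and a pure body-centered frame, with all variable names, units, and the generative model assumed for illustration.

```python
# Minimal sketch (not the authors' code): estimating the relative
# contribution of gaze- and body-centered reference frames to horizontal
# reach errors via linear regression, as described in the abstract above.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical per-trial predictors: the horizontal error each pure
# reference frame would predict (in degrees) for that trial's geometry.
gaze_pred = rng.uniform(-10, 10, n_trials)   # gaze-centered prediction
body_pred = rng.uniform(-10, 10, n_trials)   # body-centered prediction

# Simulate a "moved" condition in which both frames contribute equally,
# consistent with the abstract's report of combined coding, plus noise.
errors = 0.5 * gaze_pred + 0.5 * body_pred + rng.normal(0, 1, n_trials)

# Recover the contribution of each frame by ordinary least squares.
X = np.column_stack([gaze_pred, body_pred])
w, *_ = np.linalg.lstsq(X, errors, rcond=None)
print(f"estimated weights - gaze: {w[0]:.2f}, body: {w[1]:.2f}")
```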
14. Filimon F. Are All Spatial Reference Frames Egocentric? Reinterpreting Evidence for Allocentric, Object-Centered, or World-Centered Reference Frames. Front Hum Neurosci 2015; 9:648. PMID: 26696861. PMCID: PMC4673307. DOI: 10.3389/fnhum.2015.00648.
Abstract
The use and neural representation of egocentric spatial reference frames are well documented. In contrast, whether the brain represents spatial relationships between objects in allocentric, object-centered, or world-centered coordinates is debated. Here, I review behavioral, neuropsychological, neurophysiological (neuronal recording), and neuroimaging evidence for and against allocentric, object-centered, or world-centered spatial reference frames. Based on theoretical considerations, simulations, and empirical findings from spatial navigation, spatial judgments, and goal-directed movements, I suggest that all spatial representations may in fact be dependent on egocentric reference frames.
Affiliation(s)
- Flavia Filimon
- Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
15. No effect of delay on the spatial representation of serial reach targets. Exp Brain Res 2015; 233:1225-1235. PMID: 25600817. PMCID: PMC4355444. DOI: 10.1007/s00221-015-4197-9.
Abstract
When reaching for remembered target locations, it has been argued that the brain primarily relies on egocentric metrics, especially target position relative to gaze, when reaches are immediate, but that the visuomotor system relies more strongly on allocentric (i.e., object-centered) metrics when a reach is delayed. However, previous reports from our group have shown that reaches to single remembered targets are represented relative to gaze, even when static visual landmarks are available and reaches are delayed by up to 12 s. Based on previous findings which showed a stronger contribution of allocentric coding in serial reach planning, the present study aimed to determine whether delay influences the use of a gaze-dependent reference frame when reaching to two remembered targets in a sequence after a delay of 0, 5, or 12 s. Gaze was varied relative to the first and second target and shifted away from the target before each reach. We found that participants used egocentric and allocentric reference frames in combination, with a stronger reliance on allocentric information, regardless of whether reaches were executed immediately or after a delay. Our results suggest that the relative contributions of egocentric and allocentric reference frames for spatial coding and updating of sequential reach targets do not change with a memory delay between target presentation and reaching.
16. Fiehler K, Wolf C, Klinghammer M, Blohm G. Integration of egocentric and allocentric information during memory-guided reaching to images of a natural environment. Front Hum Neurosci 2014; 8:636. PMID: 25202252. PMCID: PMC4141549. DOI: 10.3389/fnhum.2014.00636.
Abstract
When interacting with our environment we generally make use of egocentric and allocentric object information by coding object positions relative to the observer or relative to the environment, respectively. Bayesian theories suggest that the brain integrates both sources of information optimally for perception and action. However, experimental evidence for egocentric and allocentric integration is sparse and has only been studied using abstract stimuli lacking ecological relevance. Here, we investigated the use of egocentric and allocentric information during memory-guided reaching to images of naturalistic scenes. Participants encoded a breakfast scene containing six objects on a table (local objects) and three objects in the environment (global objects). After a 2 s delay, a visual test scene reappeared for 1 s in which one local object was missing (= target) and 1, 3, or 5 of the remaining local objects, or one of the global objects, were shifted to the left or to the right. The offset of the test scene prompted participants to reach to the target as precisely as possible. Only local objects served as potential reach targets and thus were task-relevant. When shifting objects, we predicted accurate reaching if participants only used egocentric coding of object position, and systematic shifts of reach endpoints if allocentric information were used for movement planning. We found that reaching movements were largely affected by allocentric shifts, showing an increase in endpoint errors in the direction of object shifts with the number of local objects shifted. No effect occurred when only one local or one global object was shifted. Our findings suggest that allocentric cues are indeed used by the brain for memory-guided reaching towards targets in naturalistic visual scenes. Moreover, the integration of egocentric and allocentric object information seems to depend on the extent of changes in the scene.
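Since this abstract frames the question in terms of Bayesian integration, a small worked sketch may help. It is a hypothetical illustration, not the paper's model: the allocentric weight is predicted from assumed cue reliabilities and then recovered empirically as the fraction of the landmark shift that simulated reach endpoints follow. All numbers, function names, and noise values are invented for illustration.

```python
# Hypothetical sketch of reliability-weighted (Bayesian) integration of an
# egocentric and an allocentric target estimate; not the authors' model.
import numpy as np

def bayes_weight(sigma_ego, sigma_allo):
    """Optimal weight on the allocentric estimate for two Gaussian cues."""
    return (1 / sigma_allo**2) / (1 / sigma_ego**2 + 1 / sigma_allo**2)

def empirical_weight(endpoints, target, landmark_shift):
    """Fraction of the landmark shift that reach endpoints follow."""
    return np.mean(endpoints - target) / landmark_shift

rng = np.random.default_rng(2)
target, shift = 0.0, 3.0            # cm; landmarks shifted 3 cm rightward
sigma_ego, sigma_allo = 1.0, 1.0    # assumed cue noise (equally reliable)

w = bayes_weight(sigma_ego, sigma_allo)
# Simulated endpoints: weighted average of the egocentric (unshifted) and
# allocentric (shifted) target estimates, plus motor noise.
endpoints = (1 - w) * target + w * (target + shift) + rng.normal(0, 0.5, 100)

print(f"predicted allocentric weight: {w:.2f}")
print(f"recovered from endpoints:     {empirical_weight(endpoints, target, shift):.2f}")
```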
Affiliation(s)
- Katja Fiehler
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Christian Wolf
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Mathias Klinghammer
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Gunnar Blohm
- Canadian Action and Perception Network (CAPnet), Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
17. Hesse C, Ball K, Schenk T. Pointing in visual periphery: is DF's dorsal stream intact? PLoS One 2014; 9:e91420. PMID: 24626162. PMCID: PMC3953402. DOI: 10.1371/journal.pone.0091420.
Abstract
Observations of the visual form agnosic patient DF have been highly influential in establishing the hypothesis that separate processing streams deal with vision for perception (ventral stream) and vision for action (dorsal stream). In this context, DF's preserved ability to perform visually-guided actions has been contrasted with the selective impairment of visuomotor performance in optic ataxia patients suffering from damage to dorsal stream areas. However, the recent finding that DF shows a thinning of the grey matter in the dorsal stream regions of both hemispheres in combination with the observation that her right-handed movements are impaired when they are performed in visual periphery has opened up the possibility that patient DF may potentially also be suffering from optic ataxia. If lesions to the posterior parietal cortex (dorsal stream) are bilateral, pointing and reaching deficits should be observed in both visual hemifields and for both hands when targets are viewed in visual periphery. Here, we tested DF's visuomotor performance when pointing with her left and her right hand toward targets presented in the left and the right visual field at three different visual eccentricities. Our results indicate that DF shows large and consistent impairments in all conditions. These findings imply that DF's dorsal stream atrophies are functionally relevant and hence challenge the idea that patient DF's seemingly normal visuomotor behaviour can be attributed to her intact dorsal stream. Instead, DF seems to be a patient who suffers from combined ventral and dorsal stream damage meaning that a new account is needed to explain why she shows such remarkably normal visuomotor behaviour in a number of tasks and conditions.
Affiliation(s)
- Constanze Hesse
- School of Psychology, University of Aberdeen, Aberdeen, United Kingdom
- Keira Ball
- Department of Psychology, Durham University, Stockton-on-Tees, United Kingdom
- Thomas Schenk
- Neurology, University of Erlangen-Nürnberg, Erlangen, Germany
18. Schütz I, Henriques DYP, Fiehler K. Gaze-centered spatial updating in delayed reaching even in the presence of landmarks. Vision Res 2013; 87:46-52. PMID: 23770521. DOI: 10.1016/j.visres.2013.06.001.
Abstract
Previous results suggest that the brain predominantly relies on a constantly updated gaze-centered target representation to guide reach movements when no other visual information is available. In the present study, we investigated whether the addition of reliable visual landmarks influences the use of spatial reference frames for immediate and delayed reaching. Subjects reached immediately or after a delay of 8 or 12 s to remembered target locations, either with or without landmarks. After target presentation and before reaching, they shifted gaze to one of five different fixation points and held their gaze at this location until the end of the reach. With landmarks present, gaze-dependent reaching errors were smaller and reaches were more precise than when reaching without landmarks. Delay influenced neither reaching errors nor variability. These findings suggest that when landmarks are available, the brain seems to still use gaze-dependent representations but combines them with gaze-independent allocentric information to guide immediate or delayed reach movements to visual targets.
Affiliation(s)
- I Schütz
- Department of Psychology, Justus-Liebig-University Giessen, Giessen, Germany.
19. Thompson AA, Glover CV, Henriques DY. Allocentrically implied target locations are updated in an eye-centred reference frame. Neurosci Lett 2012; 514:214-218. PMID: 22425720. DOI: 10.1016/j.neulet.2012.03.004.
20. Thompson AA, Henriques DY. The coding and updating of visuospatial memory for goal-directed reaching and pointing. Vision Res 2011; 51:819-826. DOI: 10.1016/j.visres.2011.01.006.