1. Baltaretu BR, Schuetz I, Võ MLH, Fiehler K. Scene semantics affects allocentric spatial coding for action in naturalistic (virtual) environments. Sci Rep 2024; 14:15549. PMID: 38969745; PMCID: PMC11226608; DOI: 10.1038/s41598-024-66428-9.
Abstract
Interacting with objects in our environment requires determining their locations, often with respect to surrounding objects (i.e., allocentrically). According to the scene grammar framework, these usually small, local objects are movable within a scene and represent the lowest level of a scene's hierarchy. How do higher hierarchical levels of scene grammar influence allocentric coding for memory-guided actions? Here, we focused on the effect of large, immovable objects (anchors) on the encoding of local object positions. In a virtual reality study, participants (n = 30) viewed one of four possible scenes (two kitchens or two bathrooms), each with two anchors connected by a shelf onto which three local objects (congruent with one anchor) were presented (Encoding). The scene was re-presented (Test) with (1) the local objects missing and (2) one of the anchors shifted (Shift) or not (No shift). Participants then saw a floating local object (target), which they grabbed and placed back on the shelf in its remembered position (Response). Eye-tracking data revealed that both local objects and anchors were fixated, with a preference for local objects. Additionally, anchors guided allocentric coding of local objects, despite being task-irrelevant. Overall, anchors implicitly influence spatial coding of local object locations for memory-guided actions within naturalistic (virtual) environments.
Affiliation(s)
- Bianca R Baltaretu: Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394 Giessen, Hesse, Germany
- Immo Schuetz: Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394 Giessen, Hesse, Germany
- Melissa L-H Võ: Department of Psychology, Goethe University Frankfurt, 60323 Frankfurt am Main, Hesse, Germany
- Katja Fiehler: Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394 Giessen, Hesse, Germany

2. Musa L, Yan X, Crawford JD. Instruction alters the influence of allocentric landmarks in a reach task. J Vis 2024; 24(7):17. PMID: 39073800; PMCID: PMC11290568; DOI: 10.1167/jov.24.7.17.
Abstract
Allocentric landmarks have an implicit influence on aiming movements, but it is not clear how an explicit instruction (to aim relative to a landmark) influences reach accuracy and precision. Here, 12 participants performed a task with two instruction conditions (egocentric vs. allocentric) but with similar sensory and motor conditions. Participants fixated gaze near the center of a display aligned with their right shoulder while a target stimulus briefly appeared alongside a visual landmark in one visual field. After a brief mask/memory delay the landmark then reappeared at a different location (same or opposite visual field), creating an ego/allocentric conflict. In the egocentric condition, participants were instructed to ignore the landmark and point toward the remembered location of the target. In the allocentric condition, participants were instructed to remember the initial target location relative to the landmark and then reach relative to the shifted landmark (same or opposite visual field). To equalize motor execution between tasks, participants were instructed to anti-point (point to the visual field opposite to the remembered target) on 50% of the egocentric trials. Participants were more accurate and precise and quicker to react in the allocentric condition, especially when pointing to the opposite field. We also observed a visual field effect, where performance was worse overall in the right visual field. These results suggest that, when egocentric and allocentric cues conflict, explicit use of the visual landmark provides better reach performance than reliance on noisy egocentric signals. Such instructions might aid rehabilitation when the egocentric system is compromised by disease or injury.
Affiliation(s)
- Lina Musa: Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada
- Xiaogang Yan: Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- J Douglas Crawford: Centre for Vision Research and Vision: Science to Applications (VISTA) Program, Department of Psychology, and Departments of Biology and Kinesiology & Health Sciences, York University, Toronto, ON, Canada

3. Luabeya GN, Yan X, Freud E, Crawford JD. Influence of gaze, vision, and memory on hand kinematics in a placement task. J Neurophysiol 2024; 132:147-161. PMID: 38836297; DOI: 10.1152/jn.00362.2023.
Abstract
People usually reach for objects to place them in some position and orientation, but the placement component of this sequence is often ignored. For example, reaches are influenced by gaze position, visual feedback, and memory delays, but their influence on object placement is unclear. Here, we tested these factors in a task where participants placed and oriented a trapezoidal block against two-dimensional (2-D) visual templates displayed on a frontally located computer screen. In experiment 1, participants matched the block to three possible orientations: 0° (horizontal), +45° and -45°, with gaze fixated 10° to the left/right. The hand and template either remained illuminated (closed-loop), or visual feedback was removed (open-loop). Here, hand location consistently overshot the template relative to gaze, especially in the open-loop task; likewise, orientation was influenced by gaze position (depending on template orientation and visual feedback). In experiment 2, a memory delay was added, and participants sometimes performed saccades (toward, away from, or across the template). In this task, the influence of gaze on orientation vanished, but location errors were influenced by both template orientation and final gaze position. Contrary to our expectations, the previous saccade metrics also impacted placement overshoot. Overall, hand orientation was influenced by template orientation in a nonlinear fashion. These results demonstrate interactions between gaze and orientation signals in the planning and execution of hand placement and suggest different neural mechanisms for closed-loop, open-loop, and memory delay placement.

NEW & NOTEWORTHY Eye-hand coordination studies usually focus on object acquisition, but placement is equally important. We investigated how gaze position influences object placement toward a 2-D template with different levels of visual feedback. Like reach, placement overestimated goal location relative to gaze and was influenced by previous saccade metrics. Gaze also modulated hand orientation, depending on template orientation and level of visual feedback. Gaze influence was feedback-dependent, with location errors having no significant effect after a memory delay.
Affiliation(s)
- Gaelle N Luabeya: Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada
- Xiaogang Yan: Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Erez Freud: Centre for Vision Research and Vision: Science to Applications Program, Department of Biology, and Department of Psychology, York University, Toronto, Ontario, Canada
- J Douglas Crawford: Centre for Vision Research and Vision: Science to Applications Program, Departments of Biology, Psychology, and Kinesiology & Health Sciences, and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada

4. Taghizadeh B, Fortmann O, Gail A. Position- and scale-invariant object-centered spatial localization in monkey frontoparietal cortex dynamically adapts to cognitive demand. Nat Commun 2024; 15:3357. PMID: 38637493; PMCID: PMC11026390; DOI: 10.1038/s41467-024-47554-4.
Abstract
Egocentric encoding is a well-known property of brain areas along the dorsal pathway. Unlike previous experiments, which typically demanded only egocentric spatial processing during movement preparation, we designed a task in which two male rhesus monkeys memorized an on-the-object target position and then planned a reach to this position after the object reappeared at a variable location and with a potentially different size. We found allocentric (in addition to egocentric) encoding in the dorsal-stream reach planning areas, the parietal reach region and dorsal premotor cortex, that is invariant with respect to the position and, remarkably, also the size of the object. The dynamic adjustment from predominantly allocentric encoding during visual memory to predominantly egocentric encoding during reach planning, in the same brain areas and often the same neurons, suggests that the prevailing frame of reference is less a question of brain area or processing stream and more one of cognitive demand.
Affiliation(s)
- Bahareh Taghizadeh: Sensorimotor Group, German Primate Center, Göttingen, Germany; School of Cognitive Science, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5746, Tehran, Iran
- Ole Fortmann: Sensorimotor Group, German Primate Center, Göttingen, Germany; Faculty of Biology and Psychology, University of Göttingen, Göttingen, Germany
- Alexander Gail: Sensorimotor Group, German Primate Center, Göttingen, Germany; Faculty of Biology and Psychology, University of Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany; Leibniz ScienceCampus Primate Cognition, Göttingen, Germany

5. Schütz A, Bharmauria V, Yan X, Wang H, Bremmer F, Crawford JD. Integration of landmark and saccade target signals in macaque frontal cortex visual responses. Commun Biol 2023; 6:938. PMID: 37704829; PMCID: PMC10499799; DOI: 10.1038/s42003-023-05291-2.
Abstract
Visual landmarks influence spatial cognition and behavior, but their influence on visual codes for action is poorly understood. Here, we test the landmark influence on visual responses to saccade targets recorded from 312 frontal and 256 supplementary eye field neurons in rhesus macaques. Visual response fields were characterized by recording neural responses to various target-landmark combinations and then testing them against several candidate spatial models. Overall, frontal/supplementary eye field response fields preferentially code either saccade targets (40%/40%) or landmarks (30%/4.5%) in gaze fixation-centered coordinates, but most cells show multiplexed target-landmark coding within intermediate reference frames (between fixation-centered and landmark-centered). Further, these coding schemes interact: neurons with near-equal target and landmark coding show the biggest shift from fixation-centered toward landmark-centered target coding. These data show that landmark information is preserved and influences target coding in prefrontal visual responses, likely to stabilize movement goals in the presence of noisy egocentric signals.
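
The intermediate reference frames reported here can be made concrete with a small model-fitting sketch in Python (a hypothetical illustration, not the authors' analysis code; the function names and simulated data are invented): a candidate frame is parameterized by a weight alpha between fixation-centered (alpha = 0) and landmark-centered (alpha = 1) coordinates, and the best-fitting alpha locates a neuron on that continuum.

```python
import numpy as np

def target_in_frame(target_re_gaze, landmark_re_gaze, alpha):
    """Target coordinate in a frame interpolated between gaze-centered
    (alpha = 0) and landmark-centered (alpha = 1) coordinates."""
    return target_re_gaze - alpha * landmark_re_gaze

def fit_alpha(targets, landmarks, field_peaks):
    """Return the alpha whose frame best predicts response-field peaks."""
    alphas = np.linspace(0.0, 1.0, 101)
    errors = [np.mean((target_in_frame(targets, landmarks, a) - field_peaks) ** 2)
              for a in alphas]
    return alphas[int(np.argmin(errors))]

# Simulated cell whose target coding is shifted 30% toward the landmark.
rng = np.random.default_rng(1)
targets = rng.uniform(-10, 10, 60)    # target positions re: gaze (deg)
landmarks = rng.uniform(-10, 10, 60)  # landmark positions re: gaze (deg)
peaks = targets - 0.3 * landmarks + rng.normal(0.0, 0.5, 60)
print(f"fitted alpha = {fit_alpha(targets, landmarks, peaks):.2f}")  # ~0.30
```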
Affiliation(s)
- Adrian Schütz: Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany; Center for Mind, Brain, and Behavior (CMBB), Philipps-Universität Marburg, Marburg, Germany, and Justus-Liebig-Universität Giessen, Giessen, Germany
- Vishal Bharmauria: York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Xiaogang Yan: York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Hongying Wang: York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Frank Bremmer: Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany; Center for Mind, Brain, and Behavior (CMBB), Philipps-Universität Marburg, Marburg, Germany, and Justus-Liebig-Universität Giessen, Giessen, Germany
- J Douglas Crawford: York Centre for Vision Research and Vision: Science to Applications Program, and Departments of Psychology, Biology, and Kinesiology & Health Sciences, York University, Toronto, Canada

6. Lu Z, Fiehler K. Spatial updating of allocentric landmark information in real-time and memory-guided reaching. Cortex 2020; 125:203-214. PMID: 32006875; DOI: 10.1016/j.cortex.2019.12.010.
Abstract
The 2-streams model of vision suggests that egocentric and allocentric reference frames are utilized by the dorsal and the ventral stream for real-time and memory-guided movements, respectively. Recent studies argue against such a strict functional distinction and suggest that real-time and memory-guided movements recruit the same spatial maps. In this study we focus on allocentric spatial coding and updating of targets by using landmark information in real-time and memory-guided reaching. We presented participants with a naturalistic scene which consisted of six objects on a table that served as potential reach targets. Participants were informed about the target object after scene encoding, and were prompted by a go cue to reach to its position. After target identification a brief air-puff was applied to the participant's right eye inducing an eye blink. During the blink the target object disappeared from the scene, and in half of the trials the remaining objects, that functioned as landmarks, were shifted horizontally in the same direction. We found that landmark shifts systematically influenced participants' reaching endpoints irrespective of whether the movements were controlled online based on available target information (real-time movement) or memory-guided based on remembered target information (memory-guided movement). Overall, the effect of landmark shift was stronger for memory-guided than real-time reaching. Our findings suggest that humans can encode and update reach targets in an allocentric reference frame for both real-time and memory-guided movements and show stronger allocentric coding when the movement is based on memory.
Affiliation(s)
- Zijian Lu: Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany
- Katja Fiehler: Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany; Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus-Liebig University, Giessen, Germany

7. Chen Y, Crawford JD. Allocentric representations for target memory and reaching in human cortex. Ann N Y Acad Sci 2019; 1464:142-155. PMID: 31621922; DOI: 10.1111/nyas.14261.
Abstract
The use of allocentric cues for movement guidance is complex because it involves the integration of visual targets and independent landmarks and the conversion of this information into egocentric commands for action. Here, we focus on the mechanisms for encoding reach targets relative to visual landmarks in humans. First, we consider the behavioral results suggesting that both of these cues influence target memory, but are then transformed, at the first opportunity, into egocentric commands for action. We then consider the cortical mechanisms for these behaviors. We discuss different allocentric versus egocentric mechanisms for coding of target directional selectivity in memory (inferior temporal gyrus versus superior occipital gyrus) and distinguish these mechanisms from parieto-frontal activation for planning egocentric direction of actual reach movements. Then, we consider where and how the former allocentric representations of remembered reach targets are converted into the latter egocentric plans. In particular, our recent neuroimaging study suggests that four areas in the parietal and frontal cortex (right precuneus, bilateral dorsal premotor cortex, and right presupplementary area) participate in this allo-to-ego conversion. Finally, we provide a functional overview describing how and why egocentric and landmark-centered representations are segregated early in the visual system, but then reintegrated in the parieto-frontal cortex for action.
Affiliation(s)
- Ying Chen: Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- J Douglas Crawford: Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Center for Vision Research, Vision: Science to Applications (VISTA) Program, and Departments of Psychology, Biology, and Kinesiology & Health Science, York University, Toronto, Ontario, Canada

8. Comparing the effect of temporal delay on the availability of egocentric and allocentric information in visual search. Behav Brain Res 2017; 331:38-46. PMID: 28526516; DOI: 10.1016/j.bbr.2017.05.018.
Abstract
Frames of reference play a central role in perceiving an object's location and reaching to pick that object up. It is thought that the ventral stream, believed to subserve vision for perception, utilises allocentric coding, while the dorsal stream, argued to be responsible for vision for action, primarily uses an egocentric reference frame. We have previously shown that egocentric representations can survive a delay; however, it is possible that in comparison to allocentric information, egocentric information decays more rapidly. Here we directly compare the effect of delay on the availability of egocentric and allocentric representations. We used spatial priming in visual search and repeated the location of the target relative to either a landmark in the search array (allocentric condition) or the observer's body (egocentric condition). Three inter-trial intervals created minimum delays between two consecutive trials of 2, 4, or 8 seconds. In both conditions, search times to primed locations were faster than search times to un-primed locations. In the egocentric condition the effects were driven by a reduction in search times when egocentric information was repeated, an effect that was observed at all three delays. In the allocentric condition, while search times did not change when the allocentric information was repeated, search times to un-primed target locations became slower. We conclude that egocentric representations are not as transient as previously thought but instead this information is still available, and can influence behaviour, after lengthy periods of delay. We also discuss the possible origins of the differences between allocentric and egocentric priming effects.

9. Chen Y, Crawford JD. Cortical activation during landmark-centered vs. gaze-centered memory of saccade targets in the human: an fMRI study. Front Syst Neurosci 2017; 11:44. PMID: 28690501; PMCID: PMC5481872; DOI: 10.3389/fnsys.2017.00044.
Abstract
A remembered saccade target could be encoded in egocentric coordinates such as gaze-centered, or relative to some external allocentric landmark that is independent of the target or gaze (landmark-centered). In comparison to egocentric mechanisms, very little is known about such a landmark-centered representation. Here, we used an event-related fMRI design to identify brain areas supporting these two types of spatial coding (i.e., landmark-centered vs. gaze-centered) for target memory during the Delay phase where only target location, not saccade direction, was specified. The paradigm included three tasks with identical display of visual stimuli but different auditory instructions: Landmark Saccade (remember target location relative to a visual landmark, independent of gaze), Control Saccade (remember original target location relative to gaze fixation, independent of the landmark), and a non-spatial control, Color Report (report target color). During the Delay phase, the Control and Landmark Saccade tasks activated overlapping areas in posterior parietal cortex (PPC) and frontal cortex as compared to the color control, but with higher activation in PPC for target coding in the Control Saccade task and higher activation in temporal and occipital cortex for target coding in Landmark Saccade task. Gaze-centered directional selectivity was observed in superior occipital gyrus and inferior occipital gyrus, whereas landmark-centered directional selectivity was observed in precuneus and midposterior intraparietal sulcus. During the Response phase after saccade direction was specified, the parietofrontal network in the left hemisphere showed higher activation for rightward than leftward saccades. Our results suggest that cortical activation for coding saccade target direction relative to a visual landmark differs from gaze-centered directional selectivity for target memory, from the mechanisms for other types of allocentric tasks, and from the directionally selective mechanisms for saccade planning and execution.
Affiliation(s)
- Ying Chen: Center for Vision Research, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada
- J D Crawford: Center for Vision Research, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Vision: Science to Applications Program, York University, Toronto, ON, Canada
|
10
|
Mohsenzadeh Y, Dash S, Crawford JD. A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements. Front Syst Neurosci 2016; 10:39. [PMID: 27242452 PMCID: PMC4867689 DOI: 10.3389/fnsys.2016.00039] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2015] [Accepted: 04/19/2016] [Indexed: 12/02/2022] Open
Abstract
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. The proposed model is a non-linear SSM implemented as a recurrent radial-basis-function neural network within a dual extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks.
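
The full model is nonlinear and wraps a recurrent radial-basis-function network in a dual EKF; the core updating step it shares with simpler accounts can be sketched as a toy linear Kalman filter (an assumed simplification for illustration, not the paper's implementation). Note how the memory's variance grows with each movement, echoing the model's prediction that population activity broadens around saccades.

```python
def update_across_eye_movement(x_est, p_est, eye_shift, q=0.05,
                               measurement=None, r=0.5):
    """One predict(+update) cycle for a 1-D remembered target.

    Predict: the gaze-centered memory of the target shifts opposite to
    the eye movement (efference copy) and process noise q accumulates.
    Update: if the target is visible again, fuse prediction and
    measurement with the standard Kalman gain.
    """
    x_pred = x_est - eye_shift   # remap the memory across the movement
    p_pred = p_est + q           # uncertainty grows with every update
    if measurement is None:
        return x_pred, p_pred
    k = p_pred / (p_pred + r)    # Kalman gain
    return x_pred + k * (measurement - x_pred), (1.0 - k) * p_pred

# Target flashed 10 deg right of fixation, then two saccades in darkness.
x, p = 10.0, 0.1
for saccade in (4.0, -2.0):
    x, p = update_across_eye_movement(x, p, saccade)
    print(f"after {saccade:+.0f} deg saccade: target at {x:.1f} deg, var {p:.2f}")
```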
Affiliation(s)
- Yalda Mohsenzadeh: York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada
- Suryadeep Dash: York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada; Department of Physiology and Pharmacology, Robarts Research Institute, Western University, London, ON, Canada
- J Douglas Crawford: York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, ON, Canada

11. Modulation of prism adaptation by a shift of background in the monkey. Behav Brain Res 2016; 297:59-66. DOI: 10.1016/j.bbr.2015.09.032.

12. Filimon F. Are all spatial reference frames egocentric? Reinterpreting evidence for allocentric, object-centered, or world-centered reference frames. Front Hum Neurosci 2015; 9:648. PMID: 26696861; PMCID: PMC4673307; DOI: 10.3389/fnhum.2015.00648.
Abstract
The use and neural representation of egocentric spatial reference frames is well-documented. In contrast, whether the brain represents spatial relationships between objects in allocentric, object-centered, or world-centered coordinates is debated. Here, I review behavioral, neuropsychological, neurophysiological (neuronal recording), and neuroimaging evidence for and against allocentric, object-centered, or world-centered spatial reference frames. Based on theoretical considerations, simulations, and empirical findings from spatial navigation, spatial judgments, and goal-directed movements, I suggest that all spatial representations may in fact be dependent on egocentric reference frames.
Affiliation(s)
- Flavia Filimon: Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
|
13
|
Uchimura M, Nakano T, Morito Y, Ando H, Kitazawa S. Automatic representation of a visual stimulus relative to a background in the right precuneus. Eur J Neurosci 2015; 42:1651-9. [PMID: 25925368 PMCID: PMC5032987 DOI: 10.1111/ejn.12935] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2015] [Revised: 04/23/2015] [Accepted: 04/27/2015] [Indexed: 11/29/2022]
Abstract
Our brains represent the position of a visual stimulus egocentrically, in either retinal or craniotopic coordinates. In addition, recent behavioral studies have shown that the stimulus position is automatically represented allocentrically relative to a large frame in the background. Here, we investigated neural correlates of the ‘background coordinate’ using an fMRI adaptation technique. A red dot was presented at different locations on a screen, in combination with a rectangular frame that was also presented at different locations, while the participants looked at a fixation cross. When the red dot was presented repeatedly at the same location relative to the rectangular frame, the fMRI signals significantly decreased in the right precuneus. No adaptation was observed after repeated presentations relative to a small, but salient, landmark. These results suggest that the background coordinate is implemented in the right precuneus.
Affiliation(s)
- Motoaki Uchimura: Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka 565-0871, Japan; Department of Brain Physiology, Graduate School of Medicine, Osaka University, 1-3 Yamadaoka, Suita, Osaka 565-0871, Japan; Japan Society for the Promotion of Science, 5-3-1 Kojimachi, Chiyoda, Tokyo 102-0083, Japan
- Tamami Nakano: Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka 565-0871, Japan; Department of Brain Physiology, Graduate School of Medicine, Osaka University, 1-3 Yamadaoka, Suita, Osaka 565-0871, Japan; Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka 565-0871, Japan
- Yusuke Morito: Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka 565-0871, Japan
- Hiroshi Ando: Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka 565-0871, Japan; Multisensory Cognition and Computation Laboratory, National Institute of Information and Communications Technology, 3-5 Hikaridai, Seika, Kyoto 619-0289, Japan
- Shigeru Kitazawa: Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka 565-0871, Japan; Department of Brain Physiology, Graduate School of Medicine, Osaka University, 1-3 Yamadaoka, Suita, Osaka 565-0871, Japan; Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka 565-0871, Japan
|
14
|
Camors D, Jouffrais C, Cottereau BR, Durand JB. Allocentric coding: spatial range and combination rules. Vision Res 2015; 109:87-98. [PMID: 25749676 DOI: 10.1016/j.visres.2015.02.018] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2014] [Revised: 02/23/2015] [Accepted: 02/24/2015] [Indexed: 11/18/2022]
Abstract
When a visual target is presented with neighboring landmarks, its location can be determined both relative to the self (egocentric coding) and relative to these landmarks (allocentric coding). In the present study, we investigated (1) how allocentric coding depends on the distance between the targets and their surrounding landmarks (i.e., the spatial range) and (2) how allocentric and egocentric coding interact with each other across target-landmark distances (i.e., the combination rules). Subjects performed a memory-based pointing task toward previously gazed-at targets briefly superimposed (200 ms) on background images of cluttered city landscapes. A variable portion of the images was occluded in order to control the distance between the targets and the closest potential landmarks within those images. The pointing responses were performed after large saccades and the reappearance of the images at their initial location. However, in some trials, the images' elements were slightly shifted (±3°) in order to introduce a subliminal conflict between the allocentric and egocentric reference frames. The influence of allocentric coding on the pointing responses was found to decrease with increasing target-landmark distances, although it remained significant even at the largest distances (≥10°). Interestingly, both the decreasing influence of allocentric coding and the concomitant increase in pointing response variability were well captured by a Bayesian model in which the weighted combination of allocentric and egocentric cues is governed by a coupling prior.
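
One simple reading of such a Bayesian model (a sketch with invented numbers, not the authors' formulation) treats the coupling prior as extra variance added to the allocentric cue: the looser the assumed coupling between target and landmarks, the less a landmark shift drags the pointing response.

```python
def pointing_shift(allo_shift, ego_var, allo_var, coupling_var):
    """Predicted pointing shift after a landmark displacement.

    The coupling prior is folded into the allocentric cue as added
    variance: the looser the assumed target-landmark coupling, the
    lower the cue's effective reliability and weight.
    """
    allo_eff_var = allo_var + coupling_var
    w_allo = (1.0 / allo_eff_var) / (1.0 / allo_eff_var + 1.0 / ego_var)
    return w_allo * allo_shift

# Illustration only: coupling is assumed to loosen with target-landmark
# distance, reproducing the reported decline in allocentric influence.
for distance_deg, coupling_var in ((2, 0.5), (5, 2.0), (10, 4.0)):
    shift = pointing_shift(allo_shift=3.0, ego_var=1.0, allo_var=1.0,
                           coupling_var=coupling_var)
    print(f"landmarks ~{distance_deg} deg away -> shift {shift:.2f} of 3 deg")
```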
Affiliation(s)
- D Camors: Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; CNRS, CerCo, Toulouse, France; Université de Toulouse, IRIT, Toulouse, France; CNRS, IRIT, Toulouse, France
- C Jouffrais: Université de Toulouse, IRIT, Toulouse, France; CNRS, IRIT, Toulouse, France
- B R Cottereau: Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; CNRS, CerCo, Toulouse, France
- J B Durand: Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; CNRS, CerCo, Toulouse, France
|
15
|
No effect of delay on the spatial representation of serial reach targets. Exp Brain Res 2015; 233:1225-35. [PMID: 25600817 PMCID: PMC4355444 DOI: 10.1007/s00221-015-4197-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2014] [Accepted: 01/05/2015] [Indexed: 11/19/2022]
Abstract
When reaching for remembered target locations, it has been argued that the brain primarily relies on egocentric metrics, especially target position relative to gaze, when reaches are immediate, but that the visuo-motor system relies more strongly on allocentric (i.e., object-centered) metrics when a reach is delayed. However, previous reports from our group have shown that reaches to single remembered targets are represented relative to gaze, even when static visual landmarks are available and reaches are delayed by up to 12 s. Based on previous findings which showed a stronger contribution of allocentric coding in serial reach planning, the present study aimed to determine whether delay influences the use of a gaze-dependent reference frame when reaching to two remembered targets in a sequence after a delay of 0, 5 or 12 s. Gaze was varied relative to the first and second target and shifted away from the target before each reach. We found that participants used egocentric and allocentric reference frames in combination, with a stronger reliance on allocentric information, regardless of whether reaches were executed immediately or after a delay. Our results suggest that the relative contributions of egocentric and allocentric reference frames for spatial coding and updating of sequential reach targets do not change with a memory delay between target presentation and reaching.

16.
Abstract
The location of a remembered reach target can be encoded in egocentric and/or allocentric reference frames. Cortical mechanisms for egocentric reach are relatively well described, but the corresponding allocentric representations are essentially unknown. Here, we used an event-related fMRI design to distinguish human brain areas involved in these two types of representation. Our paradigm consisted of three tasks with identical stimulus display but different instructions: egocentric reach (remember absolute target location), allocentric reach (remember target location relative to a visual landmark), and a nonspatial control, color report (report the color of the target). During the delay phase (when only target location was specified), the egocentric and allocentric tasks elicited widely overlapping regions of cortical activity (relative to the control), but with higher activation in parietofrontal cortex for the egocentric task and higher activation in early visual cortex for the allocentric task. In addition, egocentric directional selectivity (target relative to gaze) was observed in the superior occipital gyrus and the inferior occipital gyrus, whereas allocentric directional selectivity (target relative to a visual landmark) was observed in the inferior temporal gyrus and inferior occipital gyrus. During the response phase (after movement direction had been specified either by reappearance of the visual landmark or a pro-/anti-reach instruction), the parietofrontal network resumed egocentric directional selectivity, showing higher activation for contralateral than ipsilateral reaches. These results show that allocentric and egocentric reach mechanisms use partially overlapping but different cortical substrates and that directional specification is different for target memory versus reach response.

17. Use of exocentric and egocentric representations in the concurrent planning of sequential saccades. J Neurosci 2014; 34:16009-16021. PMID: 25429142; DOI: 10.1523/jneurosci.0328-14.2014.
Abstract
The concurrent planning of sequential saccades offers a simple model to study the nature of visuomotor transformations since the second saccade vector needs to be remapped to foveate the second target following the first saccade. Remapping is thought to occur through egocentric mechanisms involving an efference copy of the first saccade that is available around the time of its onset. In contrast, an exocentric representation of the second target relative to the first target, if available, can be used to directly code the second saccade vector. While human volunteers performed a modified double-step task, we examined the role of exocentric encoding in concurrent saccade planning by shifting the first target location well before the efference copy could be used by the oculomotor system. The impact of the first target shift on concurrent processing was tested by examining the end-points of second saccades following a shift of the second target during the first saccade. The frequency of second saccades to the old versus new location of the second target, as well as the propagation of first saccade localization errors, both indices of concurrent processing, were found to be significantly reduced in trials with the first target shift compared to those without it. A similar decrease in concurrent processing was obtained when we shifted the first target but kept constant the second saccade vector. Overall, these results suggest that the brain can use relatively stable visual landmarks, independent of efference copy-based egocentric mechanisms, for concurrent planning of sequential saccades.

18. Tanaka LL, Dessing JC, Malik P, Prime SL, Crawford JD. The effects of TMS over dorsolateral prefrontal cortex on trans-saccadic memory of multiple objects. Neuropsychologia 2014; 63:185-193. PMID: 25192630; DOI: 10.1016/j.neuropsychologia.2014.08.025.
Abstract
Humans typically make several rapid eye movements (saccades) per second. It is thought that visual working memory can retain and spatially integrate three to four objects or features across each saccade, but little is known about this neural mechanism. Previously we showed that transcranial magnetic stimulation (TMS) to the posterior parietal cortex and frontal eye fields degrades trans-saccadic memory of multiple object features (Prime, Vesia, & Crawford, 2008, Journal of Neuroscience, 28(27), 6938-6949; Prime, Vesia, & Crawford, 2010, Cerebral Cortex, 20(4), 759-772). Here, we used a similar protocol to investigate whether dorsolateral prefrontal cortex (DLPFC), an area involved in spatial working memory, is also involved in trans-saccadic memory. Subjects were required to report changes in stimulus orientation with (saccade task) or without (fixation task) an eye movement in the intervening memory interval. We applied single-pulse TMS to left and right DLPFC during the memory delay, timed at three intervals to arrive approximately 100 ms before, 100 ms after, or at saccade onset. In the fixation task, left DLPFC TMS produced inconsistent results, whereas right DLPFC TMS disrupted performance at all three intervals (significantly for presaccadic TMS). In contrast, in the saccade task, TMS consistently facilitated performance (significantly for left DLPFC/perisaccadic TMS and right DLPFC/postsaccadic TMS), suggesting a dis-inhibition of trans-saccadic processing. These results are consistent with a neural circuit of trans-saccadic memory that overlaps and interacts with, but is partially separate from, the circuit for visual working memory during sustained fixation.
Affiliation(s)
- L L Tanaka: Centre for Vision Research and Canadian Action and Perception Network, York University, Toronto, Canada; Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Canada
- J C Dessing: Centre for Vision Research and Canadian Action and Perception Network, York University, Toronto, Canada; School of Psychology, Queen's University Belfast, Northern Ireland
- P Malik: Centre for Vision Research and Canadian Action and Perception Network, York University, Toronto, Canada; Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Canada
- S L Prime: Department of Psychology, University of Saskatchewan, Canada
- J D Crawford: Centre for Vision Research and Canadian Action and Perception Network, York University, Toronto, Canada; Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Canada
|
19
|
Fiehler K, Wolf C, Klinghammer M, Blohm G. Integration of egocentric and allocentric information during memory-guided reaching to images of a natural environment. Front Hum Neurosci 2014; 8:636. [PMID: 25202252 PMCID: PMC4141549 DOI: 10.3389/fnhum.2014.00636] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2014] [Accepted: 07/30/2014] [Indexed: 11/13/2022] Open
Abstract
When interacting with our environment we generally make use of egocentric and allocentric object information by coding object positions relative to the observer or relative to the environment, respectively. Bayesian theories suggest that the brain integrates both sources of information optimally for perception and action. However, experimental evidence for egocentric and allocentric integration is sparse and has only been studied using abstract stimuli lacking ecological relevance. Here, we investigated the use of egocentric and allocentric information during memory-guided reaching to images of naturalistic scenes. Participants encoded a breakfast scene containing six objects on a table (local objects) and three objects in the environment (global objects). After a 2 s delay, a visual test scene reappeared for 1 s, in which one local object was missing (the target) and, of the remaining objects, one, three or five local objects or one of the global objects were shifted to the left or to the right. The offset of the test scene prompted participants to reach to the target as precisely as possible. Only local objects served as potential reach targets and thus were task-relevant. When shifting objects, we predicted accurate reaching if participants only used egocentric coding of object position, and systematic shifts of reach endpoints if allocentric information was used for movement planning. We found that reaching movements were largely affected by allocentric shifts, with endpoint errors increasing in the direction of the object shifts as more local objects were shifted. No effect occurred when only one local or one global object was shifted. Our findings suggest that allocentric cues are indeed used by the brain for memory-guided reaching toward targets in naturalistic visual scenes. Moreover, the integration of egocentric and allocentric object information seems to depend on the extent of changes in the scene.
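
A common way to quantify the allocentric contribution in this kind of paradigm (a sketch with hypothetical numbers; the analysis itself is an assumption, though the shift-to-error logic follows the abstract) is the slope of reach endpoint errors regressed on the imposed object shifts:

```python
import numpy as np

# Hypothetical per-trial data: horizontal object shifts (cm) and reach
# endpoint errors in the direction of the shift (cm).
shifts = np.array([-5.0, -5.0, 5.0, 5.0, -5.0, 5.0])
errors = np.array([-2.6, -2.2, 2.4, 2.9, -2.5, 2.3])

# Least-squares slope through the origin gives the allocentric weight:
# 0 -> endpoints ignore the shift (purely egocentric coding),
# 1 -> endpoints follow the shifted objects completely.
allo_weight = np.sum(shifts * errors) / np.sum(shifts ** 2)
print(f"allocentric weight = {allo_weight:.2f}")  # ~0.50 for these numbers
```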
Affiliation(s)
- Katja Fiehler: Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Christian Wolf: Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Mathias Klinghammer: Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Gunnar Blohm: Canadian Action and Perception Network (CAPnet), Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
|
20
|
Tagliabue M, McIntyre J. A modular theory of multisensory integration for motor control. Front Comput Neurosci 2014; 8:1. [PMID: 24550816 PMCID: PMC3908447 DOI: 10.3389/fncom.2014.00001] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2013] [Accepted: 01/06/2014] [Indexed: 11/13/2022] Open
Abstract
To control targeted movements, such as reaching to grasp an object or hammering a nail, the brain can use diverse sources of sensory information, such as vision and proprioception. Although a variety of studies have shown that sensory signals are optimally combined according to principles of maximum likelihood, increasing evidence indicates that the CNS does not compute a single optimal estimate of the target's position to be compared with a single optimal estimate of the hand. Rather, it employs a more modular approach in which the overall behavior is built by computing multiple concurrent comparisons carried out simultaneously in a number of different reference frames. The results of these individual comparisons are then optimally combined in order to drive the hand. In this article we examine, at a computational level, two formulations of concurrent models for sensory integration and compare them to the more conventional model of converging multisensory signals. Through a review of published studies, both our own and those performed by others, we produce evidence favoring the concurrent formulations. We then examine in detail the effects of additive signal noise as information flows through the sensorimotor system. By taking into account the noise added by sensorimotor transformations, one can explain why the CNS may shift its reliance on one sensory modality toward a greater reliance on another, and investigate under what conditions those sensory transformations occur. Careful consideration of how transformed signals will co-vary with the original source also provides insight into how the CNS chooses one sensory modality over another. These concepts can be used to explain why the CNS might, for instance, create a visual representation of a task that is otherwise limited to the kinesthetic domain (e.g., pointing with one hand to a finger on the other) and why the CNS might choose to recode sensory information in an external reference frame.
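
The noise-propagation argument can be illustrated with a toy calculation (all variances are assumptions for the example, not values from the article):

```python
# Assumed variances for illustration: a target-hand comparison can be
# computed in either modality's frame, and any signal remapped into a
# frame inherits extra transformation noise. The frame yielding the
# lower-variance comparison should dominate the final combination.
SENSORY_VAR = {"vision": 0.4, "proprioception": 1.0}
TRANSFORM_VAR = {("proprioception", "vision"): 0.3,   # assumed and
                 ("vision", "proprioception"): 0.8}   # asymmetric

def comparison_variance(target_modality, hand_modality, frame):
    """Variance of the target-hand comparison computed in `frame`."""
    var = 0.0
    for modality in (target_modality, hand_modality):
        var += SENSORY_VAR[modality]
        if modality != frame:                 # signal must be remapped
            var += TRANSFORM_VAR[(modality, frame)]
    return var

# Seen target, felt hand: which frame supports the better comparison?
for frame in ("vision", "proprioception"):
    v = comparison_variance("vision", "proprioception", frame)
    print(f"{frame}-frame comparison: variance {v:.1f}")
# vision frame: 0.4 + (1.0 + 0.3) = 1.7; proprioceptive frame:
# (0.4 + 0.8) + 1.0 = 2.2 -- one way to read why the CNS may recode a
# kinesthetic task visually.
```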
Affiliation(s)
- Michele Tagliabue: Centre d'Étude de la Sensorimotricité (CNRS UMR 8194), Institut des Neurosciences et de la Cognition, Université Paris Descartes, Sorbonne Paris Cité, Paris, France
- Joseph McIntyre: Centre d'Étude de la Sensorimotricité (CNRS UMR 8194), Institut des Neurosciences et de la Cognition, Université Paris Descartes, Sorbonne Paris Cité, Paris, France
|
21
|
Left visual field preference for a bimanual grasping task with ecologically valid object sizes. Exp Brain Res 2013; 230:187-96. [PMID: 23857170 DOI: 10.1007/s00221-013-3643-9] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2013] [Accepted: 06/30/2013] [Indexed: 10/26/2022]
Abstract
Grasping using two forelimbs in opposition to one another is evolutionarily older than the hand with an opposable thumb (Whishaw and Coles in Behav Brain Res 77:135-148, 1996); yet, the mechanisms for bimanual grasps remain unclear. Similar to unimanual grasping, the localization of matching stable grasp points on an object is computationally expensive, so it makes sense for the signals to converge in a single cortical hemisphere. Indeed, bimanual grasps are faster and more accurate in the left visual field, and are disrupted by transcranial stimulation of the right hemisphere (Le and Niemeier in Exp Brain Res 224:263-273, 2013; Le et al. in Cereb Cortex, doi: 10.1093/cercor/bht115, 2013). However, research so far has tested the right-hemisphere dominance with small objects only, which are usually grasped with one hand, whereas bimanual grasping is more commonly used for objects that are too big for a single hand. Because grasping large objects might involve different neural circuits than grasping small objects (Grol et al. in J Neurosci 27:11877-11887, 2007), here we tested whether a left visual field/right hemisphere dominance for bimanual grasping exists with large, and thus more ecologically valid, objects or whether the right-hemisphere dominance is a function of object size. We asked participants to fixate to the left or right of an object and to grasp the object with the index and middle fingers of both hands. Consistent with previous observations, we found that for objects in the left visual field the maximum grip apertures were scaled closer to the object width and were smaller and less variable than for objects in the right visual field. Our results demonstrate that bimanual grasping is predominantly controlled by the right hemisphere, even in the context of grasping larger objects.

22. Schütz I, Henriques DYP, Fiehler K. Gaze-centered spatial updating in delayed reaching even in the presence of landmarks. Vision Res 2013; 87:46-52. PMID: 23770521; DOI: 10.1016/j.visres.2013.06.001.
Abstract
Previous results suggest that the brain predominantly relies on a constantly updated gaze-centered target representation to guide reach movements when no other visual information is available. In the present study, we investigated whether the addition of reliable visual landmarks influences the use of spatial reference frames for immediate and delayed reaching. Subjects reached immediately or after a delay of 8 or 12 s to remembered target locations, either with or without landmarks. After target presentation and before reaching, they shifted gaze to one of five different fixation points and held their gaze at this location until the end of the reach. With landmarks present, gaze-dependent reaching errors were smaller and reaches were more precise than without landmarks. Delay influenced neither reaching errors nor variability. These findings suggest that when landmarks are available, the brain seems to still use gaze-dependent representations but combines them with gaze-independent allocentric information to guide immediate or delayed reach movements to visual targets.
Affiliation(s)
- I Schütz: Department of Psychology, Justus-Liebig-University Giessen, Giessen, Germany

23. Byrne PA, Henriques DYP. When more is less: increasing allocentric visual information can switch visual-proprioceptive combination from an optimal to sub-optimal process. Neuropsychologia 2012; 51:26-37. PMID: 23142707; DOI: 10.1016/j.neuropsychologia.2012.10.008.
Abstract
When reaching for an object in the environment, the brain often has access to multiple independent estimates of that object's location. For example, if someone places their coffee cup on a table, then later they know where it is because they see it, but also because they remember how their reaching limb was oriented when they placed the cup. Intuitively, one would expect more accurate reaches if either of these estimates were improved (e.g., if a light were turned on so the cup were more visible). It is now well-established that the brain tends to combine two or more estimates about the same stimulus as a maximum-likelihood estimator (MLE), which is the best thing to do when estimates are unbiased. Even in the presence of small biases, relying on the MLE rule is still often better than choosing a single estimate. For this work, we designed a reaching task in which human subjects could integrate proprioceptive and allocentric (landmark-relative) visual information to reach for a remembered target. Even though both of these modalities contain some level of bias, we demonstrate via simulation that our subjects should use an MLE rule in preference to relying on one modality or the other in isolation. Furthermore, we show that when visual information is poor, subjects do, indeed, combine information in this way. However, when we improve the quality of visual information, subjects counter-intuitively switch to a sub-optimal strategy that occasionally includes reliance on a single modality.
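
The MLE rule invoked here has a standard closed form for independent Gaussian cues; a minimal sketch with illustrative numbers (not the study's actual simulation):

```python
import numpy as np

def mle_combine(estimates, variances):
    """Inverse-variance weighted fusion of independent Gaussian cues.

    The combined variance is never larger than that of the best single
    cue, which is why adding a cue should help under the MLE rule.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    combined = float(np.sum(weights * estimates))
    combined_var = float(1.0 / np.sum(1.0 / variances))
    return combined, combined_var

# Allocentric-visual cue (10.0 cm, var 1.0) and proprioceptive cue
# (12.0 cm, var 4.0): the fused estimate sits nearer the reliable cue.
loc, var = mle_combine([10.0, 12.0], [1.0, 4.0])
print(f"fused location {loc:.1f} cm, variance {var:.1f}")  # 10.4, 0.8
```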
Affiliation(s)
- Patrick A Byrne: Centre for Vision Research, Science, York University, 4700 Keele Street, Toronto, ON, Canada M3J 1P3
|
24
|
Jones SAH, Byrne PA, Fiehler K, Henriques DYP. Reach endpoint errors do not vary with movement path of the proprioceptive target. J Neurophysiol 2012; 107:3316-24. [DOI: 10.1152/jn.00901.2011] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
Previous research has shown that reach endpoints vary with the starting position of the reaching hand and the location of the reach target in space. We examined the effect of movement direction of a proprioceptive target-hand, immediately preceding a reach, on reach endpoints to that target. Participants reached to visual, proprioceptive (left target-hand), or visual-proprioceptive targets (left target-hand illuminated for 1 s prior to reach onset) with their right hand. Six sites served as starting and final target locations (35 target movement directions in total). Reach endpoints do not vary with the movement direction of the proprioceptive target, but instead appear to be anchored to some other reference (e.g., body). We also compared reach endpoints across the single and dual modality conditions. Overall, the pattern of reaches for visual-proprioceptive targets resembled those for proprioceptive targets, while reach precision resembled those for the visual targets. We did not, however, find evidence for integration of vision and proprioception based on a maximum-likelihood estimator in these tasks.
Affiliation(s)
- Stephanie A. H. Jones: The School of Health and Human Performance, Dalhousie University, Halifax, Nova Scotia, Canada
- Patrick A. Byrne: School of Kinesiology and Health Science, York University, Toronto, Canada
- Katja Fiehler: Department of Psychology, Justus-Liebig University, Giessen, Germany

25. Thompson AA, Glover CV, Henriques DY. Allocentrically implied target locations are updated in an eye-centred reference frame. Neurosci Lett 2012; 514:214-218. PMID: 22425720; DOI: 10.1016/j.neulet.2012.03.004.

26.
Abstract
Successful visually guided behavior requires information about spatiotopic (i.e., world-centered) locations, but how accurately is this information actually derived from initial retinotopic (i.e., eye-centered) visual input? We conducted a spatial working memory task in which subjects remembered a cued location in spatiotopic or retinotopic coordinates while making guided eye movements during the memory delay. Surprisingly, after a saccade, subjects were significantly more accurate and precise at reporting retinotopic locations than spatiotopic locations. This difference grew with each eye movement, such that spatiotopic memory continued to deteriorate, whereas retinotopic memory did not accumulate error. The loss in spatiotopic fidelity is therefore not a generic consequence of eye movements, but a direct result of converting visual information from native retinotopic coordinates. Thus, despite our conscious experience of an effortlessly stable spatiotopic world and our lifetime of practice with spatiotopic tasks, memory is actually more reliable in raw retinotopic coordinates than in ecologically relevant spatiotopic coordinates.

27.

28. Selen L, Medendorp W. Saccadic updating of object orientation for grasping movements. Vision Res 2011; 51:898-907. DOI: 10.1016/j.visres.2011.01.004.

29. Gaze-centered spatial updating of reach targets across different memory delays. Vision Res 2011; 51:890-897. PMID: 21219923; DOI: 10.1016/j.visres.2010.12.015.
Abstract
Previous research has demonstrated that remembered targets for reaching are coded and updated relative to gaze, at least when the reaching movement is made soon after the target has been extinguished. In this study, we want to test whether reach targets are updated relative to gaze following different time delays. Reaching endpoints systematically varied as a function of gaze relative to target irrespective of whether the action was executed immediately or after a delay of 5 s, 8 s or 12 s. The present results suggest that memory traces for reach targets continue to be coded in a gaze-dependent reference frame if no external cues are present.

30. Chen Y, Byrne P, Crawford JD. Time course of allocentric decay, egocentric decay, and allocentric-to-egocentric conversion in memory-guided reach. Neuropsychologia 2011; 49:49-60. DOI: 10.1016/j.neuropsychologia.2010.10.031.